Minikube uses internal network domain name DNS information for setting name server IP. #11644

Open
bzvestey opened this issue Jun 12, 2021 · 9 comments
Labels
area/dns: DNS issues
kind/bug: Categorizes issue or PR as related to a bug.
lifecycle/frozen: Indicates that an issue or PR should not be auto-closed due to staleness.
priority/backlog: Higher priority than priority/awaiting-more-evidence.

Comments

@bzvestey

When the computer is on a network whose internal domain name is set to a domain that resolves to an external IP, minikube will try to use that external IP address for DNS resolution. Note that the network this was tested on does not have any special rules to make that domain name resolve differently internally. The busybox image running in minikube's Kubernetes seems fine, and other containers running in the computer's local Docker run fine. For reference, I am using a UniFi USG-Pro gateway as my router.
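
To see where that nameserver comes from, it helps to compare the host's resolver configuration with what Docker hands to the minikube node container. A minimal check, assuming the docker driver and a systemd-resolved host (resolvectl may not exist on every distro):

    # DNS servers the host itself is using, per systemd-resolved
    resolvectl status
    # resolv.conf on the host (what Docker copies/filters into containers)
    cat /etc/resolv.conf
    # DNS settings Docker attached to the minikube node container, and what the node actually sees
    docker inspect minikube --format '{{.HostConfig.Dns}} {{.HostConfig.DnsSearch}}'
    minikube ssh -- cat /etc/resolv.conf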

Steps to reproduce the issue:

  1. Set up your router to use an externally resolvable domain name as the network's base domain name internally.
  2. minikube start
  3. kubectl run busybox --image=busybox --rm -ti --restart=Never --command -- ping -c 3 google.com
╰─➤ kubectl run busybox --image=busybox --rm -ti --restart=Never --command -- ping -c 3 google.com
If you don't see a command prompt, try pressing enter.
64 bytes from 142.250.69.206: seq=1 ttl=115 time=4.006 ms
64 bytes from 142.250.69.206: seq=2 ttl=115 time=3.725 ms

--- google.com ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 3.500/3.743/4.006 ms
pod "busybox" deleted

  4. The busybox image seems to run fine, but trying to run an ubuntu image failed.
  5. kubectl run busybox --image=ubuntu --rm -ti --restart=Never --command -- bash -c "apt-get update && apt-get install -y iputils-ping && ping -c 3 google.com"
╰─➤ kubectl run busybox --image=ubuntu --rm -ti --restart=Never --command -- bash -c "apt-get update && apt-get install -y iputils-ping && ping -c 3 google.com"
Ign:1 https://archive.ubuntu.com/ubuntu focal InRelease
Ign:2 https://archive.ubuntu.com/ubuntu focal-updates InRelease
Ign:4 https://security.ubuntu.com/ubuntu focal-security InRelease
Ign:3 https://archive.ubuntu.com/ubuntu focal-backports InRelease
Err:5 https://archive.ubuntu.com/ubuntu focal Release
  Could not handshake: A TLS fatal alert has been received. [IP: 97.113.230.239 443]
Err:6 https://archive.ubuntu.com/ubuntu focal-updates Release
  Could not handshake: A TLS fatal alert has been received. [IP: 97.113.230.239 443]
Err:7 https://security.ubuntu.com/ubuntu focal-security Release
  Could not handshake: A TLS fatal alert has been received. [IP: 97.113.230.239 443]
Err:8 https://archive.ubuntu.com/ubuntu focal-backports Release
  Could not handshake: A TLS fatal alert has been received. [IP: 97.113.230.239 443]
Reading package lists... Done
W: http://archive.ubuntu.com/ubuntu/dists/focal/InRelease: No system certificates available. Try installing ca-certificates.
W: http://archive.ubuntu.com/ubuntu/dists/focal-updates/InRelease: No system certificates available. Try installing ca-certificates.
W: http://security.ubuntu.com/ubuntu/dists/focal-security/InRelease: No system certificates available. Try installing ca-certificates.
W: http://archive.ubuntu.com/ubuntu/dists/focal-backports/InRelease: No system certificates available. Try installing ca-certificates.
W: http://archive.ubuntu.com/ubuntu/dists/focal/Release: No system certificates available. Try installing ca-certificates.
E: The repository 'http://archive.ubuntu.com/ubuntu focal Release' does not have a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
W: http://archive.ubuntu.com/ubuntu/dists/focal-updates/Release: No system certificates available. Try installing ca-certificates.
W: http://security.ubuntu.com/ubuntu/dists/focal-security/Release: No system certificates available. Try installing ca-certificates.
E: The repository 'http://archive.ubuntu.com/ubuntu focal-updates Release' does not have a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
E: The repository 'http://security.ubuntu.com/ubuntu focal-security Release' does not have a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
W: http://archive.ubuntu.com/ubuntu/dists/focal-backports/Release: No system certificates available. Try installing ca-certificates.
E: The repository 'http://archive.ubuntu.com/ubuntu focal-backports Release' does not have a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
pod "busybox" deleted
pod default/busybox terminated (Error)
  6. The IP it seems to be "assuming" is the correct address is actually my network's external IP.
  7. minikube ssh
  8. cat /etc/resolv.conf
docker@minikube:~$ cat /etc/resolv.conf
search minastas.ninja
nameserver 97.113.230.239
options ndots:0
  9. Taking a look at resolv.conf in minikube, it is using my network's external IP as the DNS name server.
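
A possible workaround, not a fix for the underlying behavior, would be to pin the DNS servers the Docker daemon passes to its containers, so the minikube node stops inheriting the externally-resolving nameserver. A rough sketch (the resolver addresses are only examples, and the echo overwrites any existing /etc/docker/daemon.json, so merge by hand if you already have one):

    # point dockerd at explicit resolvers instead of the host-derived ones (example addresses)
    echo '{ "dns": ["1.1.1.1", "8.8.8.8"] }' | sudo tee /etc/docker/daemon.json
    sudo systemctl restart docker
    # recreate the cluster so the node container picks up the new DNS settings
    minikube delete && minikube start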

Full output of minikube logs command:

  • ==> Audit <==
    |------------|---------------------------------------------------------------|----------|----------|---------|-------------------------------|-------------------------------|
    | Command | Args | Profile | User | Version | Start Time | End Time |
    |------------|---------------------------------------------------------------|----------|----------|---------|-------------------------------|-------------------------------|
    | start | | minikube | bzvestey | v1.19.0 | Wed, 02 Jun 2021 17:32:24 PDT | Wed, 02 Jun 2021 17:33:45 PDT |
    | docker-env | | minikube | bzvestey | v1.19.0 | Wed, 02 Jun 2021 18:46:33 PDT | Wed, 02 Jun 2021 18:46:34 PDT |
    | ssh | | minikube | bzvestey | v1.19.0 | Thu, 03 Jun 2021 16:03:21 PDT | Thu, 03 Jun 2021 16:10:47 PDT |
    | docker-env | | minikube | bzvestey | v1.19.0 | Thu, 03 Jun 2021 17:03:38 PDT | Thu, 03 Jun 2021 17:03:39 PDT |
    | start | --help | minikube | bzvestey | v1.19.0 | Thu, 03 Jun 2021 17:07:51 PDT | Thu, 03 Jun 2021 17:07:51 PDT |
    | addons | list | minikube | bzvestey | v1.19.0 | Thu, 03 Jun 2021 17:13:00 PDT | Thu, 03 Jun 2021 17:13:00 PDT |
    | help | | minikube | bzvestey | v1.19.0 | Thu, 03 Jun 2021 17:15:43 PDT | Thu, 03 Jun 2021 17:15:43 PDT |
    | stop | | minikube | bzvestey | v1.19.0 | Thu, 03 Jun 2021 17:15:49 PDT | Thu, 03 Jun 2021 17:16:01 PDT |
    | delete | | minikube | bzvestey | v1.19.0 | Thu, 03 Jun 2021 17:16:02 PDT | Thu, 03 Jun 2021 17:16:04 PDT |
    | start | --mount=true | minikube | bzvestey | v1.19.0 | Thu, 03 Jun 2021 17:18:43 PDT | Thu, 03 Jun 2021 17:19:28 PDT |
    | | --mount-string=/home/bzvestey/dev/media_tracker:/mediatracker | | | | | |
    | ssh | | minikube | bzvestey | v1.19.0 | Thu, 03 Jun 2021 17:19:37 PDT | Thu, 03 Jun 2021 17:20:23 PDT |
    | docker-env | | minikube | bzvestey | v1.19.0 | Thu, 03 Jun 2021 17:20:53 PDT | Thu, 03 Jun 2021 17:20:53 PDT |
    | docker-env | | minikube | bzvestey | v1.19.0 | Thu, 03 Jun 2021 17:21:08 PDT | Thu, 03 Jun 2021 17:21:09 PDT |
    | ssh | | minikube | bzvestey | v1.19.0 | Thu, 03 Jun 2021 17:21:27 PDT | Thu, 03 Jun 2021 17:21:43 PDT |
    | addons | list | minikube | bzvestey | v1.19.0 | Thu, 03 Jun 2021 17:25:38 PDT | Thu, 03 Jun 2021 17:25:38 PDT |
    | ssh | | minikube | bzvestey | v1.19.0 | Thu, 03 Jun 2021 17:27:02 PDT | Thu, 03 Jun 2021 17:28:59 PDT |
    | ssh | | minikube | bzvestey | v1.19.0 | Thu, 03 Jun 2021 17:51:38 PDT | Fri, 04 Jun 2021 14:55:23 PDT |
    | start | --help | minikube | bzvestey | v1.19.0 | Fri, 04 Jun 2021 14:55:31 PDT | Fri, 04 Jun 2021 14:55:31 PDT |
    | ssh | | minikube | bzvestey | v1.19.0 | Fri, 04 Jun 2021 15:10:50 PDT | Sat, 05 Jun 2021 08:45:26 PDT |
    | stop | | minikube | bzvestey | v1.19.0 | Sat, 05 Jun 2021 08:45:30 PDT | Sat, 05 Jun 2021 08:45:42 PDT |
    | delete | | minikube | bzvestey | v1.19.0 | Sat, 05 Jun 2021 08:45:49 PDT | Sat, 05 Jun 2021 08:45:51 PDT |
    | start | --mount=true | minikube | bzvestey | v1.19.0 | Sat, 05 Jun 2021 08:46:04 PDT | Sat, 05 Jun 2021 08:46:48 PDT |
    | | --mount-string=/home/bzvestey/dev/media_tracker:/mediatracker | | | | | |
    | docker-env | | minikube | bzvestey | v1.19.0 | Sat, 05 Jun 2021 08:47:26 PDT | Sat, 05 Jun 2021 08:47:26 PDT |
    | docker-env | | minikube | bzvestey | v1.19.0 | Sat, 05 Jun 2021 08:47:30 PDT | Sat, 05 Jun 2021 08:47:31 PDT |
    | ssh | | minikube | bzvestey | v1.19.0 | Sat, 05 Jun 2021 08:48:57 PDT | Mon, 07 Jun 2021 18:31:57 PDT |
    | delete | | minikube | bzvestey | v1.19.0 | Mon, 07 Jun 2021 18:32:06 PDT | Mon, 07 Jun 2021 18:32:09 PDT |
    | start | | minikube | bzvestey | v1.19.0 | Mon, 07 Jun 2021 18:33:57 PDT | Mon, 07 Jun 2021 18:34:43 PDT |
    | docker-env | | minikube | bzvestey | v1.19.0 | Mon, 07 Jun 2021 18:43:37 PDT | Mon, 07 Jun 2021 18:43:38 PDT |
    | docker-env | | minikube | bzvestey | v1.19.0 | Mon, 07 Jun 2021 18:44:32 PDT | Mon, 07 Jun 2021 18:44:33 PDT |
    | ssh | | minikube | bzvestey | v1.19.0 | Mon, 07 Jun 2021 18:34:53 PDT | Mon, 07 Jun 2021 18:46:43 PDT |
    | delete | | minikube | bzvestey | v1.19.0 | Mon, 07 Jun 2021 18:46:48 PDT | Mon, 07 Jun 2021 18:46:52 PDT |
    | start | --mount=true | minikube | bzvestey | v1.20.0 | Fri, 11 Jun 2021 19:06:29 PDT | Fri, 11 Jun 2021 19:08:40 PDT |
    | | --mount-string=/home/bzvestey/dev/media_tracker:/mediatracker | | | | | |
    | addons | | minikube | bzvestey | v1.20.0 | Fri, 11 Jun 2021 19:17:29 PDT | Fri, 11 Jun 2021 19:17:29 PDT |
    | addons | list | minikube | bzvestey | v1.20.0 | Fri, 11 Jun 2021 19:17:34 PDT | Fri, 11 Jun 2021 19:17:34 PDT |
    | docker-env | | minikube | bzvestey | v1.20.0 | Sat, 12 Jun 2021 14:16:22 PDT | Sat, 12 Jun 2021 14:16:23 PDT |
    | docker-env | | minikube | bzvestey | v1.20.0 | Sat, 12 Jun 2021 14:16:34 PDT | Sat, 12 Jun 2021 14:16:35 PDT |
    | ssh | | minikube | bzvestey | v1.20.0 | Fri, 11 Jun 2021 19:09:28 PDT | Sat, 12 Jun 2021 14:19:50 PDT |
    | delete | | minikube | bzvestey | v1.20.0 | Sat, 12 Jun 2021 14:34:46 PDT | Sat, 12 Jun 2021 14:34:50 PDT |
    | start | --mount=true | minikube | bzvestey | v1.20.0 | Sat, 12 Jun 2021 15:02:57 PDT | Sat, 12 Jun 2021 15:03:44 PDT |
    | | --mount-string=/home/bzvestey/dev/media_tracker:/mediatracker | | | | | |
    | logs | | minikube | bzvestey | v1.20.0 | Sat, 12 Jun 2021 15:03:50 PDT | Sat, 12 Jun 2021 15:03:52 PDT |
    | logs | help | minikube | bzvestey | v1.20.0 | Sat, 12 Jun 2021 15:05:06 PDT | Sat, 12 Jun 2021 15:05:07 PDT |
    | help | logs | minikube | bzvestey | v1.20.0 | Sat, 12 Jun 2021 15:05:12 PDT | Sat, 12 Jun 2021 15:05:12 PDT |
    | logs | --problems | minikube | bzvestey | v1.20.0 | Sat, 12 Jun 2021 15:05:49 PDT | Sat, 12 Jun 2021 15:05:50 PDT |
    | logs | --problems=false | minikube | bzvestey | v1.20.0 | Sat, 12 Jun 2021 15:05:57 PDT | Sat, 12 Jun 2021 15:05:58 PDT |
    | logs | --problems=true | minikube | bzvestey | v1.20.0 | Sat, 12 Jun 2021 15:06:01 PDT | Sat, 12 Jun 2021 15:06:02 PDT |
    | delete | | minikube | bzvestey | v1.20.0 | Sat, 12 Jun 2021 15:11:03 PDT | Sat, 12 Jun 2021 15:11:06 PDT |
    | start | | minikube | bzvestey | v1.20.0 | Sat, 12 Jun 2021 15:11:25 PDT | Sat, 12 Jun 2021 15:12:11 PDT |
    | ssh | | minikube | bzvestey | v1.20.0 | Sat, 12 Jun 2021 15:20:52 PDT | Sat, 12 Jun 2021 15:23:11 PDT |
    | help | logs | minikube | bzvestey | v1.20.0 | Sat, 12 Jun 2021 15:27:51 PDT | Sat, 12 Jun 2021 15:27:51 PDT |
    |------------|---------------------------------------------------------------|----------|----------|---------|-------------------------------|-------------------------------|
  • ==> Last Start <==
  • Log file created at: 2021/06/12 15:11:25
    Running on machine: bzvestey-worktop
    Binary: Built with gc go1.16.4 for linux/amd64
    Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
    I0612 15:11:25.031598 68916 out.go:291] Setting OutFile to fd 1 ...
    I0612 15:11:25.031795 68916 out.go:343] isatty.IsTerminal(1) = true
    I0612 15:11:25.031798 68916 out.go:304] Setting ErrFile to fd 2...
    I0612 15:11:25.031802 68916 out.go:343] isatty.IsTerminal(2) = true
    I0612 15:11:25.031907 68916 root.go:316] Updating PATH: /home/bzvestey/.minikube/bin
    I0612 15:11:25.032141 68916 out.go:298] Setting JSON to false
    I0612 15:11:25.049872 68916 start.go:108] hostinfo: {"hostname":"bzvestey-worktop","uptime":72467,"bootTime":1623463418,"procs":579,"os":"linux","platform":"arch","platformFamily":"arch","platformVersion":"21.0.6","kernelVersion":"5.10.41-1-MANJARO","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"host","hostId":"befe4676-f5e8-4a80-b53f-d1cc4840b3fd"}
    I0612 15:11:25.049957 68916 start.go:118] virtualization: kvm host
    I0612 15:11:25.060056 68916 out.go:170] 😄 minikube v1.20.0 on Arch 21.0.6
    I0612 15:11:25.060327 68916 driver.go:322] Setting default libvirt URI to qemu:///system
    I0612 15:11:25.060358 68916 global.go:103] Querying for installed drivers using PATH=/home/bzvestey/.minikube/bin:/home/bzvestey/.local/bin:/home/bzvestey/.local/bin:/usr/local/bin:/usr/bin:/var/lib/snapd/snap/bin:/usr/local/sbin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl:/home/bzvestey/dev/go/bin:/home/bzvestey/bin:/home/bzvestey/dev/go/bin:/home/bzvestey/bin
    I0612 15:11:25.060382 68916 global.go:111] ssh default: false priority: 4, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:}
    I0612 15:11:25.060483 68916 global.go:111] virtualbox default: true priority: 6, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:unable to find VBoxManage in $PATH Reason: Fix:Install VirtualBox Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/virtualbox/}
    I0612 15:11:25.060523 68916 global.go:111] vmware default: true priority: 7, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "docker-machine-driver-vmware": executable file not found in $PATH Reason: Fix:Install docker-machine-driver-vmware Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/vmware/}
    I0612 15:11:25.096616 68916 docker.go:119] docker version: linux-20.10.6
    I0612 15:11:25.096699 68916 cli_runner.go:115] Run: docker system info --format "{{json .}}"
    I0612 15:11:25.180352 68916 info.go:261] docker info: {ID:3MBP:4OW5:PVTW:COLX:IJJN:3WSA:SV74:3SNL:7SVG:E72B:5PO2:I36N Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:88 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff false] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:40 SystemTime:2021-06-12 15:11:25.126214101 -0700 PDT LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.10.41-1-MANJARO OperatingSystem:Manjaro Linux OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:16644579328 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:bzvestey-worktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:36cc874494a56a253cd181a1a685b44b58a2e34a.m Expected:36cc874494a56a253cd181a1a685b44b58a2e34a.m} RuncCommit:{ID:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7 Expected:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Experimental:true Name:buildx Path:/usr/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-tp-docker]] Warnings:}}
    I0612 15:11:25.180428 68916 docker.go:225] overlay module found
    I0612 15:11:25.180434 68916 global.go:111] docker default: true priority: 9, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:}
    I0612 15:11:25.180483 68916 global.go:111] kvm2 default: true priority: 8, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "virsh": executable file not found in $PATH Reason: Fix:Install libvirt Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/}
    I0612 15:11:25.187293 68916 global.go:111] none default: false priority: 4, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:running the 'none' driver as a regular user requires sudo permissions Reason: Fix: Doc:}
    I0612 15:11:25.187342 68916 global.go:111] podman default: true priority: 7, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "podman": executable file not found in $PATH Reason: Fix:Install Podman Doc:https://minikube.sigs.k8s.io/docs/drivers/podman/}
    I0612 15:11:25.187356 68916 driver.go:258] not recommending "ssh" due to default: false
    I0612 15:11:25.187366 68916 driver.go:292] Picked: docker
    I0612 15:11:25.187371 68916 driver.go:293] Alternatives: [ssh]
    I0612 15:11:25.187374 68916 driver.go:294] Rejects: [virtualbox vmware kvm2 none podman]
    I0612 15:11:25.196078 68916 out.go:170] ✨ Automatically selected the docker driver
    I0612 15:11:25.196113 68916 start.go:276] selected driver: docker
    I0612 15:11:25.196123 68916 start.go:718] validating driver "docker" against
    I0612 15:11:25.196144 68916 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:}
    I0612 15:11:25.196245 68916 cli_runner.go:115] Run: docker system info --format "{{json .}}"
    I0612 15:11:25.278852 68916 info.go:261] docker info: {ID:3MBP:4OW5:PVTW:COLX:IJJN:3WSA:SV74:3SNL:7SVG:E72B:5PO2:I36N Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:88 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff false] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:40 SystemTime:2021-06-12 15:11:25.222011444 -0700 PDT LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.10.41-1-MANJARO OperatingSystem:Manjaro Linux OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:16644579328 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:bzvestey-worktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:36cc874494a56a253cd181a1a685b44b58a2e34a.m Expected:36cc874494a56a253cd181a1a685b44b58a2e34a.m} RuncCommit:{ID:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7 Expected:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Experimental:true Name:buildx Path:/usr/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-tp-docker]] Warnings:}}
    I0612 15:11:25.278941 68916 start_flags.go:259] no existing cluster config was found, will generate one from the flags
    I0612 15:11:25.279811 68916 start_flags.go:314] Using suggested 3900MB memory alloc based on sys=15873MB, container=15873MB
    I0612 15:11:25.279934 68916 start_flags.go:715] Wait components to verify : map[apiserver:true system_pods:true]
    I0612 15:11:25.279943 68916 cni.go:93] Creating CNI manager for ""
    I0612 15:11:25.279952 68916 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
    I0612 15:11:25.279964 68916 start_flags.go:273] config:
    {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e Memory:3900 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
    I0612 15:11:25.288574 68916 out.go:170] 👍 Starting control plane node minikube in cluster minikube
    I0612 15:11:25.288654 68916 cache.go:111] Beginning downloading kic base image for docker with docker
    W0612 15:11:25.288670 68916 out.go:424] no arguments passed for "🚜 Pulling base image ...\n" - returning raw string
    W0612 15:11:25.288712 68916 out.go:424] no arguments passed for "🚜 Pulling base image ...\n" - returning raw string
    I0612 15:11:25.297286 68916 out.go:170] 🚜 Pulling base image ...
    I0612 15:11:25.297362 68916 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime docker
    I0612 15:11:25.297482 68916 preload.go:106] Found local preload: /home/bzvestey/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-docker-overlay2-amd64.tar.lz4
    I0612 15:11:25.297500 68916 cache.go:54] Caching tarball of preloaded images
    I0612 15:11:25.297568 68916 preload.go:132] Found /home/bzvestey/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
    I0612 15:11:25.297558 68916 image.go:116] Checking for gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local cache directory
    I0612 15:11:25.297589 68916 cache.go:57] Finished verifying existence of preloaded tar for v1.20.2 on docker
    I0612 15:11:25.297614 68916 image.go:119] Found gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local cache directory, skipping pull
    I0612 15:11:25.297642 68916 cache.go:131] gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e exists in cache, skipping pull
    I0612 15:11:25.297781 68916 image.go:130] Checking for gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local docker daemon
    I0612 15:11:25.298731 68916 profile.go:148] Saving config to /home/bzvestey/.minikube/profiles/minikube/config.json ...
    I0612 15:11:25.298790 68916 lock.go:36] WriteFile acquiring /home/bzvestey/.minikube/profiles/minikube/config.json: {Name:mkda262e918d18c3c99523599978ad8dd65663d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
    I0612 15:11:25.380892 68916 image.go:134] Found gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local docker daemon, skipping pull
    I0612 15:11:25.380902 68916 cache.go:155] gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e exists in daemon, skipping pull
    I0612 15:11:25.380911 68916 cache.go:194] Successfully downloaded all kic artifacts
    I0612 15:11:25.380932 68916 start.go:313] acquiring machines lock for minikube: {Name:mk8ddead9fb15180016283278991bd9deb8e0cbc Clock:{} Delay:500ms Timeout:10m0s Cancel:}
    I0612 15:11:25.380997 68916 start.go:317] acquired machines lock for "minikube" in 51.943µs
    I0612 15:11:25.381017 68916 start.go:89] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e Memory:3900 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}
    I0612 15:11:25.381086 68916 start.go:126] createHost starting for "" (driver="docker")
    I0612 15:11:25.389962 68916 out.go:197] 🔥 Creating docker container (CPUs=2, Memory=3900MB) ...
    I0612 15:11:25.390272 68916 start.go:160] libmachine.API.Create for "minikube" (driver="docker")
    I0612 15:11:25.390293 68916 client.go:168] LocalClient.Create starting
    I0612 15:11:25.390362 68916 main.go:128] libmachine: Reading certificate data from /home/bzvestey/.minikube/certs/ca.pem
    I0612 15:11:25.390389 68916 main.go:128] libmachine: Decoding PEM data...
    I0612 15:11:25.390412 68916 main.go:128] libmachine: Parsing certificate...
    I0612 15:11:25.390548 68916 main.go:128] libmachine: Reading certificate data from /home/bzvestey/.minikube/certs/cert.pem
    I0612 15:11:25.390567 68916 main.go:128] libmachine: Decoding PEM data...
    I0612 15:11:25.390578 68916 main.go:128] libmachine: Parsing certificate...
    I0612 15:11:25.390914 68916 cli_runner.go:115] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
    W0612 15:11:25.420254 68916 cli_runner.go:162] docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
    I0612 15:11:25.420295 68916 network_create.go:249] running [docker network inspect minikube] to gather additional debugging logs...
    I0612 15:11:25.420307 68916 cli_runner.go:115] Run: docker network inspect minikube
    W0612 15:11:25.453240 68916 cli_runner.go:162] docker network inspect minikube returned with exit code 1
    I0612 15:11:25.453254 68916 network_create.go:252] error running [docker network inspect minikube]: docker network inspect minikube: exit status 1
    stdout:
    []

stderr:
Error: No such network: minikube
I0612 15:11:25.453261 68916 network_create.go:254] output of [docker network inspect minikube]: -- stdout --
[]

-- /stdout --
** stderr **
Error: No such network: minikube

** /stderr **
I0612 15:11:25.453295 68916 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0612 15:11:25.482692 68916 network.go:263] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0007e2010] misses:0}
I0612 15:11:25.482732 68916 network.go:210] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0612 15:11:25.482750 68916 network_create.go:100] attempt to create docker network minikube 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0612 15:11:25.482793 68916 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true minikube
I0612 15:11:25.566415 68916 network_create.go:84] docker network minikube 192.168.49.0/24 created
I0612 15:11:25.566434 68916 kic.go:106] calculated static IP "192.168.49.2" for the "minikube" container
I0612 15:11:25.566482 68916 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
I0612 15:11:25.596804 68916 cli_runner.go:115] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I0612 15:11:25.637179 68916 oci.go:102] Successfully created a docker volume minikube
I0612 15:11:25.637227 68916 cli_runner.go:115] Run: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e -d /var/lib
I0612 15:11:26.712018 68916 cli_runner.go:168] Completed: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e -d /var/lib: (1.074704411s)
I0612 15:11:26.712059 68916 oci.go:106] Successfully prepared a docker volume minikube
W0612 15:11:26.712127 68916 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
W0612 15:11:26.712146 68916 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
I0612 15:11:26.712231 68916 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime docker
I0612 15:11:26.712245 68916 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
I0612 15:11:26.712316 68916 preload.go:106] Found local preload: /home/bzvestey/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-docker-overlay2-amd64.tar.lz4
I0612 15:11:26.712362 68916 kic.go:179] Starting extracting preloaded images to volume ...
I0612 15:11:26.712542 68916 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/bzvestey/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e -I lz4 -xf /preloaded.tar -C /extractDir
I0612 15:11:26.840896 68916 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e
I0612 15:11:27.588324 68916 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Running}}
I0612 15:11:27.622016 68916 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0612 15:11:27.664558 68916 cli_runner.go:115] Run: docker exec minikube stat /var/lib/dpkg/alternatives/iptables
I0612 15:11:27.734680 68916 oci.go:278] the created container "minikube" has a running status.
I0612 15:11:27.734730 68916 kic.go:210] Creating ssh key for kic: /home/bzvestey/.minikube/machines/minikube/id_rsa...
I0612 15:11:27.878454 68916 kic_runner.go:188] docker (temp): /home/bzvestey/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0612 15:11:27.961817 68916 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0612 15:11:28.000333 68916 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0612 15:11:28.000344 68916 kic_runner.go:115] Args: [docker exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys]
I0612 15:11:30.459373 68916 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/bzvestey/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e -I lz4 -xf /preloaded.tar -C /extractDir: (3.746740435s)
I0612 15:11:30.459401 68916 kic.go:188] duration metric: took 3.747038 seconds to extract preloaded images to volume
I0612 15:11:30.459551 68916 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0612 15:11:30.504263 68916 machine.go:88] provisioning docker machine ...
I0612 15:11:30.504283 68916 ubuntu.go:169] provisioning hostname "minikube"
I0612 15:11:30.504324 68916 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0612 15:11:30.533600 68916 main.go:128] libmachine: Using SSH client type: native
I0612 15:11:30.533773 68916 main.go:128] libmachine: &{{{ 0 [] [] []} docker [0x5615759a27e0] 0x5615759a27a0 [] 0s} 127.0.0.1 49167 }
I0612 15:11:30.533782 68916 main.go:128] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0612 15:11:30.713248 68916 main.go:128] libmachine: SSH cmd err, output: : minikube

I0612 15:11:30.713380 68916 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0612 15:11:30.772346 68916 main.go:128] libmachine: Using SSH client type: native
I0612 15:11:30.772524 68916 main.go:128] libmachine: &{{{ 0 [] [] []} docker [0x5615759a27e0] 0x5615759a27a0 [] 0s} 127.0.0.1 49167 }
I0612 15:11:30.772542 68916 main.go:128] libmachine: About to run SSH command:

	if ! grep -xq '.*\sminikube' /etc/hosts; then
		if grep -xq '127.0.1.1\s.*' /etc/hosts; then
			sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
		else 
			echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; 
		fi
	fi

I0612 15:11:30.934710 68916 main.go:128] libmachine: SSH cmd err, output: :
I0612 15:11:30.934744 68916 ubuntu.go:175] set auth options {CertDir:/home/bzvestey/.minikube CaCertPath:/home/bzvestey/.minikube/certs/ca.pem CaPrivateKeyPath:/home/bzvestey/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/bzvestey/.minikube/machines/server.pem ServerKeyPath:/home/bzvestey/.minikube/machines/server-key.pem ClientKeyPath:/home/bzvestey/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/bzvestey/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/bzvestey/.minikube}
I0612 15:11:30.934774 68916 ubuntu.go:177] setting up certificates
I0612 15:11:30.934788 68916 provision.go:83] configureAuth start
I0612 15:11:30.934883 68916 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0612 15:11:30.988309 68916 provision.go:137] copyHostCerts
I0612 15:11:30.988349 68916 exec_runner.go:145] found /home/bzvestey/.minikube/cert.pem, removing ...
I0612 15:11:30.988355 68916 exec_runner.go:190] rm: /home/bzvestey/.minikube/cert.pem
I0612 15:11:30.988405 68916 exec_runner.go:152] cp: /home/bzvestey/.minikube/certs/cert.pem --> /home/bzvestey/.minikube/cert.pem (1127 bytes)
I0612 15:11:30.988488 68916 exec_runner.go:145] found /home/bzvestey/.minikube/key.pem, removing ...
I0612 15:11:30.988493 68916 exec_runner.go:190] rm: /home/bzvestey/.minikube/key.pem
I0612 15:11:30.988521 68916 exec_runner.go:152] cp: /home/bzvestey/.minikube/certs/key.pem --> /home/bzvestey/.minikube/key.pem (1679 bytes)
I0612 15:11:30.988570 68916 exec_runner.go:145] found /home/bzvestey/.minikube/ca.pem, removing ...
I0612 15:11:30.988575 68916 exec_runner.go:190] rm: /home/bzvestey/.minikube/ca.pem
I0612 15:11:30.988601 68916 exec_runner.go:152] cp: /home/bzvestey/.minikube/certs/ca.pem --> /home/bzvestey/.minikube/ca.pem (1082 bytes)
I0612 15:11:30.988640 68916 provision.go:111] generating server cert: /home/bzvestey/.minikube/machines/server.pem ca-key=/home/bzvestey/.minikube/certs/ca.pem private-key=/home/bzvestey/.minikube/certs/ca-key.pem org=bzvestey.minikube san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube minikube]
I0612 15:11:31.263214 68916 provision.go:165] copyRemoteCerts
I0612 15:11:31.263278 68916 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0612 15:11:31.263305 68916 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0612 15:11:31.291999 68916 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49167 SSHKeyPath:/home/bzvestey/.minikube/machines/minikube/id_rsa Username:docker}
I0612 15:11:31.397043 68916 ssh_runner.go:316] scp /home/bzvestey/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0612 15:11:31.448519 68916 ssh_runner.go:316] scp /home/bzvestey/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
I0612 15:11:31.503746 68916 ssh_runner.go:316] scp /home/bzvestey/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0612 15:11:31.555844 68916 provision.go:86] duration metric: configureAuth took 621.036171ms
I0612 15:11:31.555875 68916 ubuntu.go:193] setting minikube options for container-runtime
I0612 15:11:31.556329 68916 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0612 15:11:31.612130 68916 main.go:128] libmachine: Using SSH client type: native
I0612 15:11:31.612325 68916 main.go:128] libmachine: &{{{ 0 [] [] []} docker [0x5615759a27e0] 0x5615759a27a0 [] 0s} 127.0.0.1 49167 }
I0612 15:11:31.612337 68916 main.go:128] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0612 15:11:31.785767 68916 main.go:128] libmachine: SSH cmd err, output: : overlay

I0612 15:11:31.785801 68916 ubuntu.go:71] root file system type: overlay
I0612 15:11:31.786293 68916 provision.go:296] Updating docker unit: /lib/systemd/system/docker.service ...
I0612 15:11:31.786404 68916 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0612 15:11:31.839282 68916 main.go:128] libmachine: Using SSH client type: native
I0612 15:11:31.839423 68916 main.go:128] libmachine: &{{{ 0 [] [] []} docker [0x5615759a27e0] 0x5615759a27a0 [] 0s} 127.0.0.1 49167 }
I0612 15:11:31.839494 68916 main.go:128] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0612 15:11:32.029605 68916 main.go:128] libmachine: SSH cmd err, output: : [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0612 15:11:32.029733 68916 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0612 15:11:32.084496 68916 main.go:128] libmachine: Using SSH client type: native
I0612 15:11:32.084634 68916 main.go:128] libmachine: &{{{ 0 [] [] []} docker [0x5615759a27e0] 0x5615759a27a0 [] 0s} 127.0.0.1 49167 }
I0612 15:11:32.084648 68916 main.go:128] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0612 15:11:33.272377 68916 main.go:128] libmachine: SSH cmd err, output: : --- /lib/systemd/system/docker.service 2021-04-09 22:45:28.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2021-06-12 22:11:32.021504825 +0000
@@ -1,30 +1,32 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
+BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60

[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always

-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure

-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.

@@ -32,16 +34,16 @@
LimitNPROC=infinity
LimitCORE=infinity

-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers

Delegate=yes

# kill only the docker process, not all processes in the cgroup

KillMode=process
-OOMScoreAdjust=-500

[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker

I0612 15:11:33.272409 68916 machine.go:91] provisioned docker machine in 2.768133143s
I0612 15:11:33.272425 68916 client.go:171] LocalClient.Create took 7.882125864s
I0612 15:11:33.272488 68916 start.go:168] duration metric: libmachine.API.Create for "minikube" took 7.882185568s
I0612 15:11:33.272505 68916 start.go:267] post-start starting for "minikube" (driver="docker")
I0612 15:11:33.272515 68916 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0612 15:11:33.272651 68916 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0612 15:11:33.272742 68916 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0612 15:11:33.316879 68916 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49167 SSHKeyPath:/home/bzvestey/.minikube/machines/minikube/id_rsa Username:docker}
I0612 15:11:33.425880 68916 ssh_runner.go:149] Run: cat /etc/os-release
I0612 15:11:33.434987 68916 main.go:128] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0612 15:11:33.435025 68916 main.go:128] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0612 15:11:33.435051 68916 main.go:128] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0612 15:11:33.435062 68916 info.go:137] Remote host: Ubuntu 20.04.2 LTS
I0612 15:11:33.435089 68916 filesync.go:118] Scanning /home/bzvestey/.minikube/addons for local assets ...
I0612 15:11:33.435192 68916 filesync.go:118] Scanning /home/bzvestey/.minikube/files for local assets ...
I0612 15:11:33.435243 68916 start.go:270] post-start completed in 162.728095ms
I0612 15:11:33.435983 68916 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0612 15:11:33.489187 68916 profile.go:148] Saving config to /home/bzvestey/.minikube/profiles/minikube/config.json ...
I0612 15:11:33.489397 68916 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0612 15:11:33.489424 68916 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0612 15:11:33.519786 68916 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49167 SSHKeyPath:/home/bzvestey/.minikube/machines/minikube/id_rsa Username:docker}
I0612 15:11:33.614737 68916 start.go:129] duration metric: createHost completed in 8.233634677s
I0612 15:11:33.614772 68916 start.go:80] releasing machines lock for "minikube", held for 8.233761929s
I0612 15:11:33.614991 68916 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0612 15:11:33.684251 68916 ssh_runner.go:149] Run: systemctl --version
I0612 15:11:33.684304 68916 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0612 15:11:33.684311 68916 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
I0612 15:11:33.684355 68916 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0612 15:11:33.746475 68916 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49167 SSHKeyPath:/home/bzvestey/.minikube/machines/minikube/id_rsa Username:docker}
I0612 15:11:33.748329 68916 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49167 SSHKeyPath:/home/bzvestey/.minikube/machines/minikube/id_rsa Username:docker}
I0612 15:11:33.937826 68916 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
I0612 15:11:33.967308 68916 ssh_runner.go:149] Run: sudo systemctl cat docker.service
I0612 15:11:33.994246 68916 cruntime.go:225] skipping containerd shutdown because we are bound to it
I0612 15:11:33.994336 68916 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
I0612 15:11:34.018387 68916 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0612 15:11:34.050586 68916 ssh_runner.go:149] Run: sudo systemctl unmask docker.service
I0612 15:11:34.194137 68916 ssh_runner.go:149] Run: sudo systemctl enable docker.socket
I0612 15:11:34.309795 68916 ssh_runner.go:149] Run: sudo systemctl cat docker.service
I0612 15:11:34.322642 68916 ssh_runner.go:149] Run: sudo systemctl daemon-reload
I0612 15:11:34.424015 68916 ssh_runner.go:149] Run: sudo systemctl start docker
I0612 15:11:34.435650 68916 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
I0612 15:11:34.496346 68916 out.go:197] 🐳 Preparing Kubernetes v1.20.2 on Docker 20.10.6 ...
I0612 15:11:34.496431 68916 cli_runner.go:115] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0612 15:11:34.528404 68916 ssh_runner.go:149] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I0612 15:11:34.531930 68916 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0612 15:11:34.543783 68916 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime docker
I0612 15:11:34.543801 68916 preload.go:106] Found local preload: /home/bzvestey/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-docker-overlay2-amd64.tar.lz4
I0612 15:11:34.543829 68916 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
I0612 15:11:34.593751 68916 docker.go:528] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/kube-proxy:v1.20.2
k8s.gcr.io/kube-controller-manager:v1.20.2
k8s.gcr.io/kube-apiserver:v1.20.2
k8s.gcr.io/kube-scheduler:v1.20.2
kubernetesui/dashboard:v2.1.0
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/pause:3.2

-- /stdout --
I0612 15:11:34.593764 68916 docker.go:465] Images already preloaded, skipping extraction
I0612 15:11:34.593813 68916 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
I0612 15:11:34.656260 68916 docker.go:528] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/kube-proxy:v1.20.2
k8s.gcr.io/kube-controller-manager:v1.20.2
k8s.gcr.io/kube-apiserver:v1.20.2
k8s.gcr.io/kube-scheduler:v1.20.2
kubernetesui/dashboard:v2.1.0
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/pause:3.2

-- /stdout --
I0612 15:11:34.656284 68916 cache_images.go:74] Images are preloaded, skipping loading
I0612 15:11:34.656356 68916 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
I0612 15:11:34.747588 68916 cni.go:93] Creating CNI manager for ""
I0612 15:11:34.747599 68916 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0612 15:11:34.747605 68916 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0612 15:11:34.747615 68916 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.20.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0612 15:11:34.747722 68916 kubeadm.go:157] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.2
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%!"(MISSING)
  nodefs.inodesFree: "0%!"(MISSING)
  imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249

I0612 15:11:34.747795 68916 kubeadm.go:901] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2

[Install]
config:
{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0612 15:11:34.747843 68916 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.2
I0612 15:11:34.756138 68916 binaries.go:44] Found k8s binaries, skipping transfer
I0612 15:11:34.756183 68916 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0612 15:11:34.763875 68916 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (334 bytes)
I0612 15:11:34.780067 68916 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0612 15:11:34.798181 68916 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1840 bytes)
I0612 15:11:34.819804 68916 ssh_runner.go:149] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0612 15:11:34.823895 68916 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0612 15:11:34.846012 68916 certs.go:52] Setting up /home/bzvestey/.minikube/profiles/minikube for IP: 192.168.49.2
I0612 15:11:34.846101 68916 certs.go:171] skipping minikubeCA CA generation: /home/bzvestey/.minikube/ca.key
I0612 15:11:34.846125 68916 certs.go:171] skipping proxyClientCA CA generation: /home/bzvestey/.minikube/proxy-client-ca.key
I0612 15:11:34.846186 68916 certs.go:286] generating minikube-user signed cert: /home/bzvestey/.minikube/profiles/minikube/client.key
I0612 15:11:34.846193 68916 crypto.go:69] Generating cert /home/bzvestey/.minikube/profiles/minikube/client.crt with IP's: []
I0612 15:11:34.998583 68916 crypto.go:157] Writing cert to /home/bzvestey/.minikube/profiles/minikube/client.crt ...
I0612 15:11:34.998596 68916 lock.go:36] WriteFile acquiring /home/bzvestey/.minikube/profiles/minikube/client.crt: {Name:mk09ad8dc7b454626ff8a93652ee10868c89d096 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0612 15:11:34.998766 68916 crypto.go:165] Writing key to /home/bzvestey/.minikube/profiles/minikube/client.key ...
I0612 15:11:34.998772 68916 lock.go:36] WriteFile acquiring /home/bzvestey/.minikube/profiles/minikube/client.key: {Name:mkb5449e63ecac2631fb2a0437febd315c2aaa4e Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0612 15:11:34.998849 68916 certs.go:286] generating minikube signed cert: /home/bzvestey/.minikube/profiles/minikube/apiserver.key.dd3b5fb2
I0612 15:11:34.998852 68916 crypto.go:69] Generating cert /home/bzvestey/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0612 15:11:35.187822 68916 crypto.go:157] Writing cert to /home/bzvestey/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 ...
I0612 15:11:35.187832 68916 lock.go:36] WriteFile acquiring /home/bzvestey/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2: {Name:mkd649bed18953a07110a5071f46f1480a29cedb Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0612 15:11:35.187993 68916 crypto.go:165] Writing key to /home/bzvestey/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 ...
I0612 15:11:35.187998 68916 lock.go:36] WriteFile acquiring /home/bzvestey/.minikube/profiles/minikube/apiserver.key.dd3b5fb2: {Name:mk5dd328d5476bf757a3370982c65b99c56751a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0612 15:11:35.188122 68916 certs.go:297] copying /home/bzvestey/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 -> /home/bzvestey/.minikube/profiles/minikube/apiserver.crt
I0612 15:11:35.188177 68916 certs.go:301] copying /home/bzvestey/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 -> /home/bzvestey/.minikube/profiles/minikube/apiserver.key
I0612 15:11:35.188210 68916 certs.go:286] generating aggregator signed cert: /home/bzvestey/.minikube/profiles/minikube/proxy-client.key
I0612 15:11:35.188213 68916 crypto.go:69] Generating cert /home/bzvestey/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0612 15:11:35.422805 68916 crypto.go:157] Writing cert to /home/bzvestey/.minikube/profiles/minikube/proxy-client.crt ...
I0612 15:11:35.422814 68916 lock.go:36] WriteFile acquiring /home/bzvestey/.minikube/profiles/minikube/proxy-client.crt: {Name:mkb512ef5b4b006e2f19bdc48cef45c20da3f0fc Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0612 15:11:35.422990 68916 crypto.go:165] Writing key to /home/bzvestey/.minikube/profiles/minikube/proxy-client.key ...
I0612 15:11:35.423008 68916 lock.go:36] WriteFile acquiring /home/bzvestey/.minikube/profiles/minikube/proxy-client.key: {Name:mk1b49955018fa31fdc904dff058bce856d17143 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0612 15:11:35.423163 68916 certs.go:361] found cert: /home/bzvestey/.minikube/certs/home/bzvestey/.minikube/certs/ca-key.pem (1675 bytes)
I0612 15:11:35.423187 68916 certs.go:361] found cert: /home/bzvestey/.minikube/certs/home/bzvestey/.minikube/certs/ca.pem (1082 bytes)
I0612 15:11:35.423204 68916 certs.go:361] found cert: /home/bzvestey/.minikube/certs/home/bzvestey/.minikube/certs/cert.pem (1127 bytes)
I0612 15:11:35.423220 68916 certs.go:361] found cert: /home/bzvestey/.minikube/certs/home/bzvestey/.minikube/certs/key.pem (1679 bytes)
I0612 15:11:35.424246 68916 ssh_runner.go:316] scp /home/bzvestey/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0612 15:11:35.444245 68916 ssh_runner.go:316] scp /home/bzvestey/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0612 15:11:35.465714 68916 ssh_runner.go:316] scp /home/bzvestey/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0612 15:11:35.487008 68916 ssh_runner.go:316] scp /home/bzvestey/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0612 15:11:35.508099 68916 ssh_runner.go:316] scp /home/bzvestey/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0612 15:11:35.531259 68916 ssh_runner.go:316] scp /home/bzvestey/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0612 15:11:35.551114 68916 ssh_runner.go:316] scp /home/bzvestey/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0612 15:11:35.570102 68916 ssh_runner.go:316] scp /home/bzvestey/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0612 15:11:35.588711 68916 ssh_runner.go:316] scp /home/bzvestey/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0612 15:11:35.606447 68916 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0612 15:11:35.619455 68916 ssh_runner.go:149] Run: openssl version
I0612 15:11:35.624287 68916 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0612 15:11:35.632066 68916 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0612 15:11:35.635132 68916 certs.go:402] hashing: -rw-r--r-- 1 root root 1111 Jun 3 00:33 /usr/share/ca-certificates/minikubeCA.pem
I0612 15:11:35.635167 68916 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0612 15:11:35.639783 68916 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0612 15:11:35.647106 68916 kubeadm.go:381] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e Memory:3900 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
I0612 15:11:35.647190 68916 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*(kube-system) --format={{.ID}}
I0612 15:11:35.682274 68916 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0612 15:11:35.689349 68916 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0612 15:11:35.696599 68916 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
I0612 15:11:35.696631 68916 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0612 15:11:35.703914 68916 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0612 15:11:35.703940 68916 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
W0612 15:12:02.455916 68916 out.go:424] no arguments passed for " ▪ Generating certificates and keys ..." - returning raw string
W0612 15:12:02.455970 68916 out.go:424] no arguments passed for " ▪ Generating certificates and keys ..." - returning raw string
I0612 15:12:02.468475 68916 out.go:197] ▪ Generating certificates and keys ...
W0612 15:12:02.473631 68916 out.go:424] no arguments passed for " ▪ Booting up control plane ..." - returning raw string
W0612 15:12:02.473675 68916 out.go:424] no arguments passed for " ▪ Booting up control plane ..." - returning raw string
I0612 15:12:02.483008 68916 out.go:197] ▪ Booting up control plane ...
W0612 15:12:02.487599 68916 out.go:424] no arguments passed for " ▪ Configuring RBAC rules ..." - returning raw string
W0612 15:12:02.487645 68916 out.go:424] no arguments passed for " ▪ Configuring RBAC rules ..." - returning raw string
I0612 15:12:02.496465 68916 out.go:197] ▪ Configuring RBAC rules ...
I0612 15:12:02.504504 68916 cni.go:93] Creating CNI manager for ""
I0612 15:12:02.504532 68916 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0612 15:12:02.504585 68916 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0612 15:12:02.504739 68916 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0612 15:12:02.504785 68916 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl label nodes minikube.k8s.io/version=v1.20.0 minikube.k8s.io/commit=c61663e942ec43b20e8e70839dcca52e44cd85ae-dirty minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2021_06_12T15_12_02_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0612 15:12:04.335214 68916 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig: (1.83044449s)
I0612 15:12:04.335257 68916 kubeadm.go:977] duration metric: took 1.830678638s to wait for elevateKubeSystemPrivileges.
I0612 15:12:04.335261 68916 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.2/kubectl label nodes minikube.k8s.io/version=v1.20.0 minikube.k8s.io/commit=c61663e942ec43b20e8e70839dcca52e44cd85ae-dirty minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2021_06_12T15_12_02_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig: (1.830456266s)
I0612 15:12:04.335306 68916 ssh_runner.go:189] Completed: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj": (1.830706569s)
I0612 15:12:04.335318 68916 ops.go:34] apiserver oom_adj: -16
I0612 15:12:04.335325 68916 kubeadm.go:383] StartCluster complete in 28.688226429s
I0612 15:12:04.335341 68916 settings.go:142] acquiring lock: {Name:mk265c9bb5ded81493ce88fec9fb7405f670feba Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0612 15:12:04.335465 68916 settings.go:150] Updating kubeconfig: /home/bzvestey/.kube/config
I0612 15:12:04.336859 68916 lock.go:36] WriteFile acquiring /home/bzvestey/.kube/config: {Name:mk26dfde4f0ec489c8c85de45feb5ce9112d14e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0612 15:12:04.860362 68916 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "minikube" rescaled to 1
I0612 15:12:04.860430 68916 start.go:201] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}
W0612 15:12:04.860475 68916 out.go:424] no arguments passed for "🔎 Verifying Kubernetes components...\n" - returning raw string
W0612 15:12:04.860501 68916 out.go:424] no arguments passed for "🔎 Verifying Kubernetes components...\n" - returning raw string
I0612 15:12:04.869175 68916 out.go:170] 🔎 Verifying Kubernetes components...
I0612 15:12:04.860575 68916 addons.go:328] enableAddons start: toEnable=map[], additional=[]
I0612 15:12:04.869369 68916 addons.go:55] Setting storage-provisioner=true in profile "minikube"
I0612 15:12:04.869380 68916 addons.go:55] Setting default-storageclass=true in profile "minikube"
I0612 15:12:04.869408 68916 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
I0612 15:12:04.869415 68916 addons.go:131] Setting addon storage-provisioner=true in "minikube"
I0612 15:12:04.869415 68916 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
W0612 15:12:04.869433 68916 addons.go:140] addon storage-provisioner should already be in state true
I0612 15:12:04.869466 68916 host.go:66] Checking if "minikube" exists ...
I0612 15:12:04.870547 68916 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0612 15:12:04.871124 68916 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0612 15:12:04.904216 68916 api_server.go:50] waiting for apiserver process to appear ...
I0612 15:12:04.904258 68916 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
I0612 15:12:04.931993 68916 api_server.go:70] duration metric: took 71.522311ms to wait for apiserver process to appear ...
I0612 15:12:04.932007 68916 api_server.go:86] waiting for apiserver healthz status ...
I0612 15:12:04.932014 68916 api_server.go:223] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0612 15:12:04.948137 68916 out.go:170] ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0612 15:12:04.948743 68916 addons.go:261] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0612 15:12:04.948751 68916 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0612 15:12:04.948822 68916 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0612 15:12:04.948925 68916 api_server.go:249] https://192.168.49.2:8443/healthz returned 200:
ok
I0612 15:12:04.949708 68916 addons.go:131] Setting addon default-storageclass=true in "minikube"
W0612 15:12:04.949714 68916 addons.go:140] addon default-storageclass should already be in state true
I0612 15:12:04.949724 68916 host.go:66] Checking if "minikube" exists ...
I0612 15:12:04.949728 68916 api_server.go:139] control plane version: v1.20.2
I0612 15:12:04.949737 68916 api_server.go:129] duration metric: took 17.726309ms to wait for apiserver health ...
I0612 15:12:04.949742 68916 system_pods.go:43] waiting for kube-system pods to appear ...
I0612 15:12:04.950085 68916 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0612 15:12:04.957923 68916 system_pods.go:59] 0 kube-system pods found
I0612 15:12:04.957936 68916 retry.go:31] will retry after 263.082536ms: only 0 pod(s) have shown up
I0612 15:12:04.990664 68916 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49167 SSHKeyPath:/home/bzvestey/.minikube/machines/minikube/id_rsa Username:docker}
I0612 15:12:04.994286 68916 addons.go:261] installing /etc/kubernetes/addons/storageclass.yaml
I0612 15:12:04.994301 68916 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0612 15:12:04.994365 68916 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0612 15:12:05.028146 68916 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49167 SSHKeyPath:/home/bzvestey/.minikube/machines/minikube/id_rsa Username:docker}
I0612 15:12:05.118867 68916 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0612 15:12:05.150373 68916 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0612 15:12:05.227483 68916 system_pods.go:59] 0 kube-system pods found
I0612 15:12:05.227515 68916 retry.go:31] will retry after 381.329545ms: only 0 pod(s) have shown up
I0612 15:12:05.619307 68916 system_pods.go:59] 0 kube-system pods found
I0612 15:12:05.619343 68916 retry.go:31] will retry after 422.765636ms: only 0 pod(s) have shown up
I0612 15:12:06.048611 68916 system_pods.go:59] 0 kube-system pods found
I0612 15:12:06.048637 68916 retry.go:31] will retry after 473.074753ms: only 0 pod(s) have shown up
I0612 15:12:06.397680 68916 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.278746897s)
I0612 15:12:06.397796 68916 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.247390297s)
I0612 15:12:06.406622 68916 out.go:170] 🌟 Enabled addons: storage-provisioner, default-storageclass
I0612 15:12:06.406670 68916 addons.go:330] enableAddons completed in 1.546127869s
I0612 15:12:06.528110 68916 system_pods.go:59] 1 kube-system pods found
I0612 15:12:06.528193 68916 system_pods.go:61] "storage-provisioner" [5d2c4078-b894-4a90-af10-19b76724b357] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0612 15:12:06.528217 68916 retry.go:31] will retry after 587.352751ms: only 1 pod(s) have shown up
I0612 15:12:07.123029 68916 system_pods.go:59] 1 kube-system pods found
I0612 15:12:07.123063 68916 system_pods.go:61] "storage-provisioner" [5d2c4078-b894-4a90-af10-19b76724b357] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0612 15:12:07.123079 68916 retry.go:31] will retry after 834.206799ms: only 1 pod(s) have shown up
I0612 15:12:07.963903 68916 system_pods.go:59] 1 kube-system pods found
I0612 15:12:07.963940 68916 system_pods.go:61] "storage-provisioner" [5d2c4078-b894-4a90-af10-19b76724b357] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0612 15:12:07.963967 68916 retry.go:31] will retry after 746.553905ms: only 1 pod(s) have shown up
I0612 15:12:08.718191 68916 system_pods.go:59] 1 kube-system pods found
I0612 15:12:08.718226 68916 system_pods.go:61] "storage-provisioner" [5d2c4078-b894-4a90-af10-19b76724b357] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0612 15:12:08.718244 68916 retry.go:31] will retry after 987.362415ms: only 1 pod(s) have shown up
I0612 15:12:09.713129 68916 system_pods.go:59] 1 kube-system pods found
I0612 15:12:09.713163 68916 system_pods.go:61] "storage-provisioner" [5d2c4078-b894-4a90-af10-19b76724b357] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0612 15:12:09.713187 68916 retry.go:31] will retry after 1.189835008s: only 1 pod(s) have shown up
I0612 15:12:10.912032 68916 system_pods.go:59] 5 kube-system pods found
I0612 15:12:10.912060 68916 system_pods.go:61] "etcd-minikube" [c569ef90-8a53-4b5a-b420-1dca7f948c07] Pending
I0612 15:12:10.912078 68916 system_pods.go:61] "kube-apiserver-minikube" [86c613f9-b736-4754-b5d4-62bf399d4fdc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0612 15:12:10.912091 68916 system_pods.go:61] "kube-controller-manager-minikube" [760628be-1be1-471c-b837-2176d72ca5f4] Pending
I0612 15:12:10.912104 68916 system_pods.go:61] "kube-scheduler-minikube" [efd4e47f-d618-4dfc-90f3-2c5bcc711af1] Pending
I0612 15:12:10.912116 68916 system_pods.go:61] "storage-provisioner" [5d2c4078-b894-4a90-af10-19b76724b357] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0612 15:12:10.912128 68916 system_pods.go:74] duration metric: took 5.962377984s to wait for pod list to return data ...
I0612 15:12:10.912143 68916 kubeadm.go:538] duration metric: took 6.051672951s to wait for : map[apiserver:true system_pods:true] ...
I0612 15:12:10.912167 68916 node_conditions.go:102] verifying NodePressure condition ...
I0612 15:12:10.920945 68916 node_conditions.go:122] node storage ephemeral capacity is 490690488Ki
I0612 15:12:10.920982 68916 node_conditions.go:123] node cpu capacity is 8
I0612 15:12:10.921005 68916 node_conditions.go:105] duration metric: took 8.829438ms to run NodePressure ...
I0612 15:12:10.921026 68916 start.go:206] waiting for startup goroutines ...
I0612 15:12:10.997874 68916 start.go:460] kubectl: 1.21.0, cluster: 1.20.2 (minor skew: 1)
I0612 15:12:11.006523 68916 out.go:170] 🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

==> Docker <==

-- Logs begin at Sat 2021-06-12 22:11:28 UTC, end at Sat 2021-06-12 22:28:08 UTC. --
    Jun 12 22:11:28 minikube dockerd[219]: time="2021-06-12T22:11:28.445662724Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
    Jun 12 22:11:28 minikube dockerd[219]: time="2021-06-12T22:11:28.447616645Z" level=info msg="parsed scheme: "unix"" module=grpc
    Jun 12 22:11:28 minikube dockerd[219]: time="2021-06-12T22:11:28.447645873Z" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
    Jun 12 22:11:28 minikube dockerd[219]: time="2021-06-12T22:11:28.447667053Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
    Jun 12 22:11:28 minikube dockerd[219]: time="2021-06-12T22:11:28.447678303Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
    Jun 12 22:11:29 minikube dockerd[219]: time="2021-06-12T22:11:29.176250841Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
    Jun 12 22:11:29 minikube dockerd[219]: time="2021-06-12T22:11:29.253784511Z" level=warning msg="Your kernel does not support CPU realtime scheduler"
    Jun 12 22:11:29 minikube dockerd[219]: time="2021-06-12T22:11:29.253810165Z" level=warning msg="Your kernel does not support cgroup blkio weight"
    Jun 12 22:11:29 minikube dockerd[219]: time="2021-06-12T22:11:29.253816258Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
    Jun 12 22:11:29 minikube dockerd[219]: time="2021-06-12T22:11:29.253975162Z" level=info msg="Loading containers: start."
    Jun 12 22:11:29 minikube dockerd[219]: time="2021-06-12T22:11:29.368268545Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
    Jun 12 22:11:29 minikube dockerd[219]: time="2021-06-12T22:11:29.427954821Z" level=info msg="Loading containers: done."
    Jun 12 22:11:29 minikube dockerd[219]: time="2021-06-12T22:11:29.800237914Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
    Jun 12 22:11:29 minikube dockerd[219]: time="2021-06-12T22:11:29.800483019Z" level=info msg="Docker daemon" commit=8728dd2 graphdriver(s)=overlay2 version=20.10.6
    Jun 12 22:11:29 minikube dockerd[219]: time="2021-06-12T22:11:29.800574214Z" level=info msg="Daemon has completed initialization"
    Jun 12 22:11:29 minikube systemd[1]: Started Docker Application Container Engine.
    Jun 12 22:11:29 minikube dockerd[219]: time="2021-06-12T22:11:29.877087196Z" level=info msg="API listen on /run/docker.sock"
    Jun 12 22:11:32 minikube systemd[1]: docker.service: Current command vanished from the unit file, execution of the command list won't be resumed.
    Jun 12 22:11:32 minikube systemd[1]: Stopping Docker Application Container Engine...
    Jun 12 22:11:32 minikube dockerd[219]: time="2021-06-12T22:11:32.695204119Z" level=info msg="Processing signal 'terminated'"
    Jun 12 22:11:32 minikube dockerd[219]: time="2021-06-12T22:11:32.696410803Z" level=info msg="stopping event stream following graceful shutdown" error="" module=libcontainerd namespace=moby
    Jun 12 22:11:32 minikube dockerd[219]: time="2021-06-12T22:11:32.697163767Z" level=info msg="Daemon shutdown complete"
    Jun 12 22:11:32 minikube systemd[1]: docker.service: Succeeded.
    Jun 12 22:11:32 minikube systemd[1]: Stopped Docker Application Container Engine.
    Jun 12 22:11:32 minikube systemd[1]: Starting Docker Application Container Engine...
    Jun 12 22:11:32 minikube dockerd[463]: time="2021-06-12T22:11:32.745871808Z" level=info msg="Starting up"
    Jun 12 22:11:32 minikube dockerd[463]: time="2021-06-12T22:11:32.747417024Z" level=info msg="parsed scheme: "unix"" module=grpc
    Jun 12 22:11:32 minikube dockerd[463]: time="2021-06-12T22:11:32.747436178Z" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
    Jun 12 22:11:32 minikube dockerd[463]: time="2021-06-12T22:11:32.747455669Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
    Jun 12 22:11:32 minikube dockerd[463]: time="2021-06-12T22:11:32.747466761Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
    Jun 12 22:11:32 minikube dockerd[463]: time="2021-06-12T22:11:32.748270511Z" level=info msg="parsed scheme: "unix"" module=grpc
    Jun 12 22:11:32 minikube dockerd[463]: time="2021-06-12T22:11:32.748289830Z" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
    Jun 12 22:11:32 minikube dockerd[463]: time="2021-06-12T22:11:32.748304555Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
    Jun 12 22:11:32 minikube dockerd[463]: time="2021-06-12T22:11:32.748313458Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
    Jun 12 22:11:32 minikube dockerd[463]: time="2021-06-12T22:11:32.775311655Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
    Jun 12 22:11:32 minikube dockerd[463]: time="2021-06-12T22:11:32.797835329Z" level=warning msg="Your kernel does not support CPU realtime scheduler"
    Jun 12 22:11:32 minikube dockerd[463]: time="2021-06-12T22:11:32.797892750Z" level=warning msg="Your kernel does not support cgroup blkio weight"
    Jun 12 22:11:32 minikube dockerd[463]: time="2021-06-12T22:11:32.797912740Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
    Jun 12 22:11:32 minikube dockerd[463]: time="2021-06-12T22:11:32.798303911Z" level=info msg="Loading containers: start."
    Jun 12 22:11:33 minikube dockerd[463]: time="2021-06-12T22:11:33.049770659Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
    Jun 12 22:11:33 minikube dockerd[463]: time="2021-06-12T22:11:33.166946074Z" level=info msg="Loading containers: done."
    Jun 12 22:11:33 minikube dockerd[463]: time="2021-06-12T22:11:33.221511340Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
    Jun 12 22:11:33 minikube dockerd[463]: time="2021-06-12T22:11:33.222184740Z" level=info msg="Docker daemon" commit=8728dd2 graphdriver(s)=overlay2 version=20.10.6
    Jun 12 22:11:33 minikube dockerd[463]: time="2021-06-12T22:11:33.222319959Z" level=info msg="Daemon has completed initialization"
    Jun 12 22:11:33 minikube systemd[1]: Started Docker Application Container Engine.
    Jun 12 22:11:33 minikube dockerd[463]: time="2021-06-12T22:11:33.282613170Z" level=info msg="API listen on [::]:2376"
    Jun 12 22:11:33 minikube dockerd[463]: time="2021-06-12T22:11:33.291277053Z" level=info msg="API listen on /var/run/docker.sock"
    Jun 12 22:12:23 minikube dockerd[463]: time="2021-06-12T22:12:23.565037143Z" level=info msg="ignoring event" container=2fe52dc4fb65cfe1000206d03d21f32baf21d44fde30a0c132d09941396e5f4e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
    Jun 12 22:12:23 minikube dockerd[463]: time="2021-06-12T22:12:23.848888850Z" level=info msg="ignoring event" container=e1f6af81dca776cae82b15508a5ffd1b32c9a45cf0c99bb7726778008cbe94de module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
    Jun 12 22:12:37 minikube dockerd[463]: time="2021-06-12T22:12:37.335033542Z" level=info msg="ignoring event" container=b975f30646965227d0b4fd6dd04198e224148d7ced8a39ba31d9876265772c57 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
    Jun 12 22:12:38 minikube dockerd[463]: time="2021-06-12T22:12:38.016586645Z" level=info msg="ignoring event" container=cb6ffd7fa039beba4cd195332819f5922b36230a1afa952151a23461607b2f6c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
    Jun 12 22:14:15 minikube dockerd[463]: time="2021-06-12T22:14:15.016229176Z" level=error msg="stream copy error: reading from a closed fifo"
    Jun 12 22:14:15 minikube dockerd[463]: time="2021-06-12T22:14:15.077327539Z" level=error msg="aa98a91cca6a76c03a007c729ae9b153eb2fb77734983063e0bf6e94e505de08 cleanup: failed to delete container from containerd: no such container"
    Jun 12 22:14:16 minikube dockerd[463]: time="2021-06-12T22:14:16.179957276Z" level=info msg="ignoring event" container=f5ec9f8c5e0ce8bef3c60c7bc9b7fe46f401abbd102b7a709fbe8c1238d16a67 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
    Jun 12 22:14:35 minikube dockerd[463]: time="2021-06-12T22:14:35.185766136Z" level=info msg="ignoring event" container=290ff0f288f7abbc2619938eee4a39b450da1fea246aad744eb416d60e22f58d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
    Jun 12 22:14:35 minikube dockerd[463]: time="2021-06-12T22:14:35.677392385Z" level=info msg="ignoring event" container=a3208906e675de65a3b2114554d454517bf1cc88a2b1245a4d4c973816512628 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
    Jun 12 22:16:40 minikube dockerd[463]: time="2021-06-12T22:16:40.800288239Z" level=info msg="ignoring event" container=42a2fdde8c6686d35ba40bebf98961d2460de1edd3c0775945a2d2b473995d85 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
    Jun 12 22:16:41 minikube dockerd[463]: time="2021-06-12T22:16:41.628082058Z" level=info msg="ignoring event" container=c765953fb20df9511007d0b062b29a07793bcbd69ff82f033ceb58fdddc942a5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
    Jun 12 22:23:52 minikube dockerd[463]: time="2021-06-12T22:23:52.548294386Z" level=info msg="ignoring event" container=fd5a9796d33a9ae99b585b01a545112d5dc1e856a705b56e8815e484c40c23d5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
    Jun 12 22:23:53 minikube dockerd[463]: time="2021-06-12T22:23:53.221215915Z" level=info msg="ignoring event" container=5097e60d2f5a55cf133bc9f1fb752817b6f8ff8299cdb0763d7650289f06ba5d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"

==> container status <==

CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
    fc4f50e60692f 6e38f40d628db 15 minutes ago Running storage-provisioner 0 aa751c70d8b3d
    5b73b7e1af54c bfe3a36ebd252 15 minutes ago Running coredns 0 67c0a247db719
    1c6500f10134c 43154ddb57a83 15 minutes ago Running kube-proxy 0 6c9a08edeed0b
    6b43427b7868b a27166429d98e 16 minutes ago Running kube-controller-manager 0 d437733366933
    21f311d01e193 0369cf4303ffd 16 minutes ago Running etcd 0 8484322d0a298
    354e894575b1e ed2c44fbdd78b 16 minutes ago Running kube-scheduler 0 e1e3658d6219b
    7979a48a596d3 a8c2fdb8bf76e 16 minutes ago Running kube-apiserver 0 881d74ca3f80d

==> coredns [5b73b7e1af54] <==

.:53
    [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
    CoreDNS-1.7.0
    linux/amd64, go1.14.4, f59c03d

==> describe nodes <==

Name: minikube
    Roles: control-plane,master
    Labels: beta.kubernetes.io/arch=amd64
    beta.kubernetes.io/os=linux
    kubernetes.io/arch=amd64
    kubernetes.io/hostname=minikube
    kubernetes.io/os=linux
    minikube.k8s.io/commit=c61663e942ec43b20e8e70839dcca52e44cd85ae-dirty
    minikube.k8s.io/name=minikube
    minikube.k8s.io/updated_at=2021_06_12T15_12_02_0700
    minikube.k8s.io/version=v1.20.0
    node-role.kubernetes.io/control-plane=
    node-role.kubernetes.io/master=
    Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
    node.alpha.kubernetes.io/ttl: 0
    volumes.kubernetes.io/controller-managed-attach-detach: true
    CreationTimestamp: Sat, 12 Jun 2021 22:11:57 +0000
Taints: <none>
    Unschedulable: false
    Lease:
    HolderIdentity: minikube
AcquireTime: <unset>
    RenewTime: Sat, 12 Jun 2021 22:28:04 +0000
    Conditions:
    Type Status LastHeartbeatTime LastTransitionTime Reason Message


    MemoryPressure False Sat, 12 Jun 2021 22:27:17 +0000 Sat, 12 Jun 2021 22:11:53 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
    DiskPressure False Sat, 12 Jun 2021 22:27:17 +0000 Sat, 12 Jun 2021 22:11:53 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
    PIDPressure False Sat, 12 Jun 2021 22:27:17 +0000 Sat, 12 Jun 2021 22:11:53 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
    Ready True Sat, 12 Jun 2021 22:27:17 +0000 Sat, 12 Jun 2021 22:12:17 +0000 KubeletReady kubelet is posting ready status
    Addresses:
    InternalIP: 192.168.49.2
    Hostname: minikube
    Capacity:
    cpu: 8
    ephemeral-storage: 490690488Ki
    hugepages-1Gi: 0
    hugepages-2Mi: 0
    memory: 16254472Ki
    pods: 110
    Allocatable:
    cpu: 8
    ephemeral-storage: 490690488Ki
    hugepages-1Gi: 0
    hugepages-2Mi: 0
    memory: 16254472Ki
    pods: 110
    System Info:
    Machine ID: 822f5ed6656e44929f6c2cc5d6881453
    System UUID: 0bb18266-804a-4154-8908-2db3b81dd84f
    Boot ID: 0e9f1284-59c9-49dd-9ab8-7a92e6790ed7
    Kernel Version: 5.10.41-1-MANJARO
    OS Image: Ubuntu 20.04.2 LTS
    Operating System: linux
    Architecture: amd64
    Container Runtime Version: docker://20.10.6
    Kubelet Version: v1.20.2
    Kube-Proxy Version: v1.20.2
    PodCIDR: 10.244.0.0/24
    PodCIDRs: 10.244.0.0/24
    Non-terminated Pods: (7 in total)
    Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE


    kube-system coredns-74ff55c5b-5xnmn 100m (1%!)(MISSING) 0 (0%!)(MISSING) 70Mi (0%!)(MISSING) 170Mi (1%!)(MISSING) 15m
    kube-system etcd-minikube 100m (1%!)(MISSING) 0 (0%!)(MISSING) 100Mi (0%!)(MISSING) 0 (0%!)(MISSING) 15m
    kube-system kube-apiserver-minikube 250m (3%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 15m
    kube-system kube-controller-manager-minikube 200m (2%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 15m
    kube-system kube-proxy-79d2v 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 15m
    kube-system kube-scheduler-minikube 100m (1%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 15m
    kube-system storage-provisioner 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 16m
    Allocated resources:
    (Total limits may be over 100 percent, i.e., overcommitted.)
    Resource Requests Limits


    cpu 750m (9%!)(MISSING) 0 (0%!)(MISSING)
    memory 170Mi (1%!)(MISSING) 170Mi (1%!)(MISSING)
    ephemeral-storage 100Mi (0%!)(MISSING) 0 (0%!)(MISSING)
    hugepages-1Gi 0 (0%!)(MISSING) 0 (0%!)(MISSING)
    hugepages-2Mi 0 (0%!)(MISSING) 0 (0%!)(MISSING)
    Events:
    Type Reason Age From Message


    Normal NodeHasSufficientMemory 16m (x6 over 16m) kubelet Node minikube status is now: NodeHasSufficientMemory
    Normal NodeHasNoDiskPressure 16m (x6 over 16m) kubelet Node minikube status is now: NodeHasNoDiskPressure
    Normal NodeHasSufficientPID 16m (x5 over 16m) kubelet Node minikube status is now: NodeHasSufficientPID
    Normal Starting 15m kubelet Starting kubelet.
    Normal NodeHasSufficientMemory 15m kubelet Node minikube status is now: NodeHasSufficientMemory
    Normal NodeHasNoDiskPressure 15m kubelet Node minikube status is now: NodeHasNoDiskPressure
    Normal NodeHasSufficientPID 15m kubelet Node minikube status is now: NodeHasSufficientPID
    Normal NodeNotReady 15m kubelet Node minikube status is now: NodeNotReady
    Normal NodeAllocatableEnforced 15m kubelet Updated Node Allocatable limit across pods
    Normal NodeReady 15m kubelet Node minikube status is now: NodeReady
    Normal Starting 15m kube-proxy Starting kube-proxy.

==> dmesg <==

[Jun12 02:04] kauditd_printk_skb: 8 callbacks suppressed
[Jun12 02:07] kauditd_printk_skb: 82 callbacks suppressed
[Jun12 02:08] kauditd_printk_skb: 301 callbacks suppressed
[ +10.105821] kauditd_printk_skb: 203 callbacks suppressed
[ +0.136004] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
[ +17.306307] kauditd_printk_skb: 9 callbacks suppressed
[ +16.284370] kauditd_printk_skb: 58 callbacks suppressed
[Jun12 02:09] kauditd_printk_skb: 9 callbacks suppressed
[Jun12 02:20] smpboot: Scheduler frequency invariance went wobbly, disabling!
[Jun12 02:21] done.
[ +0.259323] Bluetooth: hci0: unexpected event for opcode 0xfc2f
[Jun12 02:28] kauditd_printk_skb: 1 callbacks suppressed
[ +6.667590] process 'usr/local/bin/dgraph' started with executable stack
[Jun12 02:47] kauditd_printk_skb: 39 callbacks suppressed
[Jun12 03:15] kauditd_printk_skb: 83 callbacks suppressed
[ +5.103913] kauditd_printk_skb: 507 callbacks suppressed
[Jun12 03:16] kauditd_printk_skb: 8 callbacks suppressed
[ +16.059199] kauditd_printk_skb: 58 callbacks suppressed
[ +10.895986] kauditd_printk_skb: 39 callbacks suppressed
[Jun12 03:18] kauditd_printk_skb: 20 callbacks suppressed
[ +8.065799] kauditd_printk_skb: 20 callbacks suppressed
[Jun12 03:19] kauditd_printk_skb: 50 callbacks suppressed
[Jun12 03:23] kauditd_printk_skb: 29 callbacks suppressed
[Jun12 03:24] kauditd_printk_skb: 83 callbacks suppressed
[ +5.003114] kauditd_printk_skb: 496 callbacks suppressed
[ +10.235945] kauditd_printk_skb: 7 callbacks suppressed
[ +15.867294] kauditd_printk_skb: 8 callbacks suppressed
[Jun12 03:25] kauditd_printk_skb: 58 callbacks suppressed
[ +5.979394] kauditd_printk_skb: 13 callbacks suppressed
[Jun12 03:40] kauditd_printk_skb: 16 callbacks suppressed

==> etcd [21f311d01e19] <==

2021-06-12 22:18:56.956394 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-12 22:19:06.956567 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-12 22:19:16.956441 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-12 22:19:26.956397 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-12 22:19:36.956604 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-12 22:19:46.956361 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-12 22:19:56.956513 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-12 22:20:06.956339 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-12 22:20:16.956789 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-12 22:20:26.956399 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-12 22:20:36.956264 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-12 22:20:46.956509 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-12 22:20:56.956587 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-12 22:21:06.956494 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-12 22:21:16.956524 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-12 22:21:26.956288 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-12 22:21:36.956414 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-12 22:21:46.956244 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-12 22:21:54.387424 I | mvcc: store.index: compact 704
    2021-06-12 22:21:54.389747 I | mvcc: finished scheduled compaction at 704 (took 1.851615ms)
    2021-06-12 22:21:56.956516 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-12 22:22:06.956313 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-12 22:22:16.961264 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-12 22:22:26.956429 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-12 22:22:36.956302 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-12 22:22:46.956414 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-12 22:22:56.956297 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-12 22:23:06.956389 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-12 22:23:16.956327 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-12 22:23:26.956411 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-12 22:23:36.956409 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-12 22:23:46.956588 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-12 22:23:56.956345 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-12 22:24:06.956507 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-12 22:24:16.956511 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-12 22:24:26.956281 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-12 22:24:36.956215 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-12 22:24:46.956746 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-12 22:24:56.966994 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-12 22:25:06.956383 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-12 22:25:16.956297 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-12 22:25:26.956492 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-12 22:25:36.956329 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-12 22:25:46.956340 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-12 22:25:56.956189 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-12 22:26:06.956238 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-12 22:26:16.956376 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-12 22:26:26.956298 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-12 22:26:36.956325 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-12 22:26:46.956398 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-12 22:26:54.404435 I | mvcc: store.index: compact 914
    2021-06-12 22:26:54.406236 I | mvcc: finished scheduled compaction at 914 (took 1.220768ms)
    2021-06-12 22:26:56.956434 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-12 22:27:06.956368 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-12 22:27:16.956476 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-12 22:27:26.956919 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-12 22:27:36.956389 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-12 22:27:46.956557 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-12 22:27:56.956580 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-12 22:28:06.956537 I | etcdserver/api/etcdhttp: /health OK (status code 200)

==> kernel <==

22:28:08 up 20:24, 0 users, load average: 0.75, 0.93, 0.93
Linux minikube 5.10.41-1-MANJARO #1 SMP PREEMPT Fri May 28 19:10:32 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
    PRETTY_NAME="Ubuntu 20.04.2 LTS"

==> kube-apiserver [7979a48a596d] <==

I0612 22:15:42.711963 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
    I0612 22:15:42.711992 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
    I0612 22:16:12.745091 1 client.go:360] parsed scheme: "passthrough"
    I0612 22:16:12.745174 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
    I0612 22:16:12.745202 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
    I0612 22:16:51.897368 1 client.go:360] parsed scheme: "passthrough"
    I0612 22:16:51.897445 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
    I0612 22:16:51.897466 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
    I0612 22:17:31.586855 1 client.go:360] parsed scheme: "passthrough"
    I0612 22:17:31.586949 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
    I0612 22:17:31.586972 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
    I0612 22:18:14.346876 1 client.go:360] parsed scheme: "passthrough"
    I0612 22:18:14.346974 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
    I0612 22:18:14.346998 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
    I0612 22:18:45.829365 1 client.go:360] parsed scheme: "passthrough"
    I0612 22:18:45.829420 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
    I0612 22:18:45.829434 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
    I0612 22:19:21.679055 1 client.go:360] parsed scheme: "passthrough"
    I0612 22:19:21.679157 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
    I0612 22:19:21.679190 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
    I0612 22:20:01.714762 1 client.go:360] parsed scheme: "passthrough"
    I0612 22:20:01.714846 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
    I0612 22:20:01.714866 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
    I0612 22:20:39.739793 1 client.go:360] parsed scheme: "passthrough"
    I0612 22:20:39.739888 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
    I0612 22:20:39.739913 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
    I0612 22:21:18.045961 1 client.go:360] parsed scheme: "passthrough"
    I0612 22:21:18.046039 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
    I0612 22:21:18.046060 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
    I0612 22:21:55.956875 1 client.go:360] parsed scheme: "passthrough"
    I0612 22:21:55.956972 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
    I0612 22:21:55.956998 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
    W0612 22:22:05.451943 1 watcher.go:220] watch chan error: etcdserver: mvcc: required revision has been compacted
    I0612 22:22:34.065865 1 client.go:360] parsed scheme: "passthrough"
    I0612 22:22:34.065952 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
    I0612 22:22:34.065977 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
    I0612 22:23:06.109977 1 client.go:360] parsed scheme: "passthrough"
    I0612 22:23:06.110055 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
    I0612 22:23:06.110075 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
    I0612 22:23:45.216027 1 client.go:360] parsed scheme: "passthrough"
    I0612 22:23:45.216102 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
    I0612 22:23:45.216122 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
    I0612 22:24:17.312469 1 client.go:360] parsed scheme: "passthrough"
    I0612 22:24:17.312555 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
    I0612 22:24:17.312584 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
    I0612 22:24:55.448135 1 client.go:360] parsed scheme: "passthrough"
    I0612 22:24:55.448236 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
    I0612 22:24:55.448291 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
    I0612 22:25:31.249123 1 client.go:360] parsed scheme: "passthrough"
    I0612 22:25:31.249203 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
    I0612 22:25:31.249225 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
    I0612 22:26:09.494124 1 client.go:360] parsed scheme: "passthrough"
    I0612 22:26:09.494212 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
    I0612 22:26:09.494239 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
    I0612 22:26:53.033176 1 client.go:360] parsed scheme: "passthrough"
    I0612 22:26:53.033270 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
    I0612 22:26:53.033302 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
    I0612 22:27:27.217751 1 client.go:360] parsed scheme: "passthrough"
    I0612 22:27:27.217834 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
    I0612 22:27:27.217856 1 clientconn.go:948] ClientConn switching balancer to "pick_first"

  • ==> kube-controller-manager [6b43427b7868] <==

  • I0612 22:12:17.092088 1 controllermanager.go:554] Started "clusterrole-aggregation"
    I0612 22:12:17.092207 1 clusterroleaggregation_controller.go:149] Starting ClusterRoleAggregator
    I0612 22:12:17.092240 1 shared_informer.go:240] Waiting for caches to sync for ClusterRoleAggregator
    I0612 22:12:17.339694 1 controllermanager.go:554] Started "root-ca-cert-publisher"
    I0612 22:12:17.340118 1 publisher.go:98] Starting root CA certificate configmap publisher
    I0612 22:12:17.340169 1 shared_informer.go:240] Waiting for caches to sync for crt configmap
    I0612 22:12:17.361044 1 shared_informer.go:247] Caches are synced for job
    W0612 22:12:17.364614 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
    I0612 22:12:17.391078 1 shared_informer.go:247] Caches are synced for ReplicationController
    I0612 22:12:17.391225 1 shared_informer.go:247] Caches are synced for endpoint_slice
    I0612 22:12:17.414878 1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator
    I0612 22:12:17.414976 1 shared_informer.go:247] Caches are synced for deployment
    I0612 22:12:17.415029 1 shared_informer.go:247] Caches are synced for endpoint
    I0612 22:12:17.415055 1 shared_informer.go:247] Caches are synced for service account
    I0612 22:12:17.418546 1 shared_informer.go:247] Caches are synced for ReplicaSet
    I0612 22:12:17.426651 1 shared_informer.go:247] Caches are synced for PV protection
    I0612 22:12:17.432496 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving
    I0612 22:12:17.434277 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client
    I0612 22:12:17.436003 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client
    I0612 22:12:17.437906 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown
    I0612 22:12:17.440081 1 shared_informer.go:247] Caches are synced for GC
    I0612 22:12:17.440429 1 shared_informer.go:247] Caches are synced for crt configmap
    I0612 22:12:17.440794 1 shared_informer.go:247] Caches are synced for bootstrap_signer
    I0612 22:12:17.441999 1 shared_informer.go:247] Caches are synced for certificate-csrapproving
    I0612 22:12:17.442480 1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring
    I0612 22:12:17.444777 1 shared_informer.go:247] Caches are synced for TTL
    I0612 22:12:17.452051 1 shared_informer.go:247] Caches are synced for namespace
    I0612 22:12:17.452096 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 1"
    I0612 22:12:17.452736 1 shared_informer.go:247] Caches are synced for node
    I0612 22:12:17.452772 1 range_allocator.go:172] Starting range CIDR allocator
    I0612 22:12:17.452783 1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
    I0612 22:12:17.452794 1 shared_informer.go:247] Caches are synced for cidrallocator
    I0612 22:12:17.615783 1 shared_informer.go:247] Caches are synced for disruption
    I0612 22:12:17.615845 1 disruption.go:339] Sending events to api server.
    I0612 22:12:17.622667 1 shared_informer.go:247] Caches are synced for stateful set
    I0612 22:12:17.637183 1 shared_informer.go:247] Caches are synced for PVC protection
    I0612 22:12:17.714967 1 shared_informer.go:247] Caches are synced for daemon sets
    I0612 22:12:17.715197 1 shared_informer.go:247] Caches are synced for persistent volume
    I0612 22:12:17.716701 1 shared_informer.go:247] Caches are synced for attach detach
    E0612 22:12:17.719274 1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
    I0612 22:12:17.726260 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-5xnmn"
    I0612 22:12:17.726969 1 shared_informer.go:247] Caches are synced for taint
    I0612 22:12:17.727262 1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone:
    W0612 22:12:17.727475 1 node_lifecycle_controller.go:1044] Missing timestamp for Node minikube. Assuming now as a timestamp.
    I0612 22:12:17.727588 1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
    I0612 22:12:17.727728 1 event.go:291] "Event occurred" object="minikube" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node minikube event: Registered Node minikube in Controller"
    I0612 22:12:17.727788 1 taint_manager.go:187] Starting NoExecuteTaintManager
    I0612 22:12:17.728309 1 request.go:655] Throttling request took 1.100971524s, request: GET:https://192.168.49.2:8443/apis/certificates.k8s.io/v1beta1?timeout=32s
    I0612 22:12:17.729463 1 shared_informer.go:247] Caches are synced for expand
    I0612 22:12:17.748738 1 range_allocator.go:373] Set node minikube PodCIDR to [10.244.0.0/24]
    I0612 22:12:17.924898 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
    I0612 22:12:17.935425 1 shared_informer.go:247] Caches are synced for resource quota
    I0612 22:12:17.939572 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-79d2v"
    I0612 22:12:17.942255 1 shared_informer.go:247] Caches are synced for HPA
    I0612 22:12:18.127328 1 shared_informer.go:247] Caches are synced for garbage collector
    I0612 22:12:18.143095 1 shared_informer.go:247] Caches are synced for garbage collector
    I0612 22:12:18.143155 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
    I0612 22:12:18.478349 1 shared_informer.go:240] Waiting for caches to sync for resource quota
    I0612 22:12:18.478445 1 shared_informer.go:247] Caches are synced for resource quota
    I0612 22:12:22.727974 1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.

  • ==> kube-proxy [1c6500f10134] <==

  • I0612 22:12:18.964964 1 node.go:172] Successfully retrieved node IP: 192.168.49.2
    I0612 22:12:18.965021 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.49.2), assume IPv4 operation
    W0612 22:12:19.002803 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
    I0612 22:12:19.002892 1 server_others.go:185] Using iptables Proxier.
    I0612 22:12:19.003551 1 server.go:650] Version: v1.20.2
    I0612 22:12:19.004119 1 conntrack.go:52] Setting nf_conntrack_max to 262144
    I0612 22:12:19.004234 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
    I0612 22:12:19.004781 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
    I0612 22:12:19.005017 1 config.go:315] Starting service config controller
    I0612 22:12:19.005114 1 shared_informer.go:240] Waiting for caches to sync for service config
    I0612 22:12:19.005026 1 config.go:224] Starting endpoint slice config controller
    I0612 22:12:19.005208 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
    I0612 22:12:19.105619 1 shared_informer.go:247] Caches are synced for endpoint slice config
    I0612 22:12:19.105694 1 shared_informer.go:247] Caches are synced for service config

  • ==> kube-scheduler [354e894575b1] <==

  • I0612 22:11:52.664853 1 serving.go:331] Generated self-signed cert in-memory
    W0612 22:11:57.625028 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
    W0612 22:11:57.625081 1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
    W0612 22:11:57.625126 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
    W0612 22:11:57.625145 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
    I0612 22:11:57.914789 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
    I0612 22:11:57.915180 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
    I0612 22:11:57.924930 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
    I0612 22:11:57.925721 1 tlsconfig.go:240] Starting DynamicServingCertificateController
    E0612 22:11:58.022740 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
    E0612 22:11:58.023023 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
    E0612 22:11:58.023317 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
    E0612 22:11:58.023634 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
    E0612 22:11:58.023991 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
    E0612 22:11:58.024406 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
    E0612 22:11:58.024753 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
    E0612 22:11:58.025105 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
    E0612 22:11:58.025100 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
    E0612 22:11:58.025479 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
    E0612 22:11:58.025799 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
    E0612 22:11:58.026203 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
    E0612 22:11:59.026368 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
    E0612 22:11:59.064388 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
    E0612 22:11:59.084504 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
    E0612 22:11:59.147739 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
    E0612 22:11:59.201220 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
    E0612 22:11:59.250529 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
    E0612 22:11:59.328995 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
    E0612 22:11:59.381636 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
    E0612 22:11:59.493949 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
    E0612 22:11:59.508788 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
    E0612 22:11:59.587564 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
    I0612 22:12:02.515698 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file

  • ==> kubelet <==

  • -- Logs begin at Sat 2021-06-12 22:11:28 UTC, end at Sat 2021-06-12 22:28:08 UTC. --
    Jun 12 22:12:38 minikube kubelet[2565]: I0612 22:12:38.105842 2565 operation_generator.go:797] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0ed6977-a3ad-4e1e-a82b-ec88fa45e1c8-default-token-cvtkw" (OuterVolumeSpecName: "default-token-cvtkw") pod "b0ed6977-a3ad-4e1e-a82b-ec88fa45e1c8" (UID: "b0ed6977-a3ad-4e1e-a82b-ec88fa45e1c8"). InnerVolumeSpecName "default-token-cvtkw". PluginName "kubernetes.io/secret", VolumeGidValue ""
    Jun 12 22:12:38 minikube kubelet[2565]: I0612 22:12:38.203599 2565 reconciler.go:319] Volume detached for volume "default-token-cvtkw" (UniqueName: "kubernetes.io/secret/b0ed6977-a3ad-4e1e-a82b-ec88fa45e1c8-default-token-cvtkw") on node "minikube" DevicePath ""
    Jun 12 22:12:38 minikube kubelet[2565]: W0612 22:12:38.985969 2565 pod_container_deletor.go:79] Container "cb6ffd7fa039beba4cd195332819f5922b36230a1afa952151a23461607b2f6c" not found in pod's containers
    Jun 12 22:12:40 minikube kubelet[2565]: W0612 22:12:40.072901 2565 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b0ed6977-a3ad-4e1e-a82b-ec88fa45e1c8/volumes" does not exist
    Jun 12 22:13:10 minikube kubelet[2565]: I0612 22:13:10.033294 2565 scope.go:95] [topologymanager] RemoveContainer - Container ID: b975f30646965227d0b4fd6dd04198e224148d7ced8a39ba31d9876265772c57
    Jun 12 22:13:10 minikube kubelet[2565]: I0612 22:13:10.076075 2565 scope.go:95] [topologymanager] RemoveContainer - Container ID: 2fe52dc4fb65cfe1000206d03d21f32baf21d44fde30a0c132d09941396e5f4e
    Jun 12 22:14:12 minikube kubelet[2565]: I0612 22:14:12.712977 2565 topology_manager.go:187] [topologymanager] Topology Admit Handler
    Jun 12 22:14:12 minikube kubelet[2565]: I0612 22:14:12.823349 2565 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-cvtkw" (UniqueName: "kubernetes.io/secret/816dc660-9a88-415c-9d84-b85bb61feb42-default-token-cvtkw") pod "busybox" (UID: "816dc660-9a88-415c-9d84-b85bb61feb42")
    Jun 12 22:14:13 minikube kubelet[2565]: W0612 22:14:13.665765 2565 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/busybox through plugin: invalid network status for
    Jun 12 22:14:14 minikube kubelet[2565]: W0612 22:14:14.011475 2565 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/busybox through plugin: invalid network status for
    Jun 12 22:14:15 minikube kubelet[2565]: W0612 22:14:15.022965 2565 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/busybox through plugin: invalid network status for
    Jun 12 22:14:15 minikube kubelet[2565]: E0612 22:14:15.080410 2565 remote_runtime.go:251] StartContainer "aa98a91cca6a76c03a007c729ae9b153eb2fb77734983063e0bf6e94e505de08" from runtime service failed: rpc error: code = Unknown desc = failed to start container "aa98a91cca6a76c03a007c729ae9b153eb2fb77734983063e0bf6e94e505de08": Error response from daemon: OCI runtime create failed: container_linux.go:367: starting container process caused: exec: "curl": executable file not found in $PATH: unknown
    Jun 12 22:14:15 minikube kubelet[2565]: E0612 22:14:15.080611 2565 kuberuntime_manager.go:829] container &Container{Name:busybox,Image:busybox,Command:[curl google.com],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cvtkw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:true,StdinOnce:true,TTY:true,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod busybox_default(816dc660-9a88-415c-9d84-b85bb61feb42): RunContainerError: failed to start container "aa98a91cca6a76c03a007c729ae9b153eb2fb77734983063e0bf6e94e505de08": Error response from daemon: OCI runtime create failed: container_linux.go:367: starting container process caused: exec: "curl": executable file not found in $PATH: unknown
    Jun 12 22:14:15 minikube kubelet[2565]: E0612 22:14:15.080699 2565 pod_workers.go:191] Error syncing pod 816dc660-9a88-415c-9d84-b85bb61feb42 ("busybox_default(816dc660-9a88-415c-9d84-b85bb61feb42)"), skipping: failed to "StartContainer" for "busybox" with RunContainerError: "failed to start container "aa98a91cca6a76c03a007c729ae9b153eb2fb77734983063e0bf6e94e505de08": Error response from daemon: OCI runtime create failed: container_linux.go:367: starting container process caused: exec: "curl": executable file not found in $PATH: unknown"
    Jun 12 22:14:16 minikube kubelet[2565]: I0612 22:14:16.234508 2565 reconciler.go:196] operationExecutor.UnmountVolume started for volume "default-token-cvtkw" (UniqueName: "kubernetes.io/secret/816dc660-9a88-415c-9d84-b85bb61feb42-default-token-cvtkw") pod "816dc660-9a88-415c-9d84-b85bb61feb42" (UID: "816dc660-9a88-415c-9d84-b85bb61feb42")
    Jun 12 22:14:16 minikube kubelet[2565]: I0612 22:14:16.239971 2565 operation_generator.go:797] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/816dc660-9a88-415c-9d84-b85bb61feb42-default-token-cvtkw" (OuterVolumeSpecName: "default-token-cvtkw") pod "816dc660-9a88-415c-9d84-b85bb61feb42" (UID: "816dc660-9a88-415c-9d84-b85bb61feb42"). InnerVolumeSpecName "default-token-cvtkw". PluginName "kubernetes.io/secret", VolumeGidValue ""
    Jun 12 22:14:16 minikube kubelet[2565]: I0612 22:14:16.334976 2565 reconciler.go:319] Volume detached for volume "default-token-cvtkw" (UniqueName: "kubernetes.io/secret/816dc660-9a88-415c-9d84-b85bb61feb42-default-token-cvtkw") on node "minikube" DevicePath ""
    Jun 12 22:14:17 minikube kubelet[2565]: W0612 22:14:17.126433 2565 pod_container_deletor.go:79] Container "f5ec9f8c5e0ce8bef3c60c7bc9b7fe46f401abbd102b7a709fbe8c1238d16a67" not found in pod's containers
    Jun 12 22:14:18 minikube kubelet[2565]: W0612 22:14:18.072658 2565 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/816dc660-9a88-415c-9d84-b85bb61feb42/volumes" does not exist
    Jun 12 22:14:32 minikube kubelet[2565]: I0612 22:14:32.626728 2565 topology_manager.go:187] [topologymanager] Topology Admit Handler
    Jun 12 22:14:32 minikube kubelet[2565]: I0612 22:14:32.790001 2565 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-cvtkw" (UniqueName: "kubernetes.io/secret/e674eaeb-9bcb-42c6-8a42-4ae763c0b31d-default-token-cvtkw") pod "busybox" (UID: "e674eaeb-9bcb-42c6-8a42-4ae763c0b31d")
    Jun 12 22:14:33 minikube kubelet[2565]: W0612 22:14:33.569943 2565 pod_container_deletor.go:79] Container "a3208906e675de65a3b2114554d454517bf1cc88a2b1245a4d4c973816512628" not found in pod's containers
    Jun 12 22:14:33 minikube kubelet[2565]: W0612 22:14:33.570464 2565 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/busybox through plugin: invalid network status for
    Jun 12 22:14:34 minikube kubelet[2565]: W0612 22:14:34.586008 2565 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/busybox through plugin: invalid network status for
    Jun 12 22:14:35 minikube kubelet[2565]: W0612 22:14:35.604881 2565 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/busybox through plugin: invalid network status for
    Jun 12 22:14:35 minikube kubelet[2565]: I0612 22:14:35.611826 2565 scope.go:95] [topologymanager] RemoveContainer - Container ID: 290ff0f288f7abbc2619938eee4a39b450da1fea246aad744eb416d60e22f58d
    Jun 12 22:14:35 minikube kubelet[2565]: I0612 22:14:35.699218 2565 reconciler.go:196] operationExecutor.UnmountVolume started for volume "default-token-cvtkw" (UniqueName: "kubernetes.io/secret/e674eaeb-9bcb-42c6-8a42-4ae763c0b31d-default-token-cvtkw") pod "e674eaeb-9bcb-42c6-8a42-4ae763c0b31d" (UID: "e674eaeb-9bcb-42c6-8a42-4ae763c0b31d")
    Jun 12 22:14:35 minikube kubelet[2565]: I0612 22:14:35.702734 2565 operation_generator.go:797] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e674eaeb-9bcb-42c6-8a42-4ae763c0b31d-default-token-cvtkw" (OuterVolumeSpecName: "default-token-cvtkw") pod "e674eaeb-9bcb-42c6-8a42-4ae763c0b31d" (UID: "e674eaeb-9bcb-42c6-8a42-4ae763c0b31d"). InnerVolumeSpecName "default-token-cvtkw". PluginName "kubernetes.io/secret", VolumeGidValue ""
    Jun 12 22:14:35 minikube kubelet[2565]: I0612 22:14:35.799601 2565 reconciler.go:319] Volume detached for volume "default-token-cvtkw" (UniqueName: "kubernetes.io/secret/e674eaeb-9bcb-42c6-8a42-4ae763c0b31d-default-token-cvtkw") on node "minikube" DevicePath ""
    Jun 12 22:14:36 minikube kubelet[2565]: W0612 22:14:36.071640 2565 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/e674eaeb-9bcb-42c6-8a42-4ae763c0b31d/volumes" does not exist
    Jun 12 22:14:36 minikube kubelet[2565]: W0612 22:14:36.635628 2565 pod_container_deletor.go:79] Container "a3208906e675de65a3b2114554d454517bf1cc88a2b1245a4d4c973816512628" not found in pod's containers
    Jun 12 22:15:10 minikube kubelet[2565]: I0612 22:15:10.223398 2565 scope.go:95] [topologymanager] RemoveContainer - Container ID: 290ff0f288f7abbc2619938eee4a39b450da1fea246aad744eb416d60e22f58d
    Jun 12 22:15:10 minikube kubelet[2565]: I0612 22:15:10.261528 2565 scope.go:95] [topologymanager] RemoveContainer - Container ID: aa98a91cca6a76c03a007c729ae9b153eb2fb77734983063e0bf6e94e505de08
    Jun 12 22:15:11 minikube kubelet[2565]: W0612 22:15:11.046742 2565 pod_container_deletor.go:79] Container "aa98a91cca6a76c03a007c729ae9b153eb2fb77734983063e0bf6e94e505de08" not found in pod's containers
    Jun 12 22:16:31 minikube kubelet[2565]: I0612 22:16:31.290739 2565 topology_manager.go:187] [topologymanager] Topology Admit Handler
    Jun 12 22:16:31 minikube kubelet[2565]: I0612 22:16:31.407712 2565 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-cvtkw" (UniqueName: "kubernetes.io/secret/526cc68b-311f-4bf7-98ef-008d1bdafa36-default-token-cvtkw") pod "busybox" (UID: "526cc68b-311f-4bf7-98ef-008d1bdafa36")
    Jun 12 22:16:32 minikube kubelet[2565]: W0612 22:16:32.185562 2565 pod_container_deletor.go:79] Container "c765953fb20df9511007d0b062b29a07793bcbd69ff82f033ceb58fdddc942a5" not found in pod's containers
    Jun 12 22:16:32 minikube kubelet[2565]: W0612 22:16:32.185862 2565 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/busybox through plugin: invalid network status for
    Jun 12 22:16:33 minikube kubelet[2565]: W0612 22:16:33.199599 2565 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/busybox through plugin: invalid network status for
    Jun 12 22:16:40 minikube kubelet[2565]: W0612 22:16:40.346614 2565 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/busybox through plugin: invalid network status for
    Jun 12 22:16:41 minikube kubelet[2565]: W0612 22:16:41.559581 2565 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/busybox through plugin: invalid network status for
    Jun 12 22:16:41 minikube kubelet[2565]: I0612 22:16:41.742509 2565 reconciler.go:196] operationExecutor.UnmountVolume started for volume "default-token-cvtkw" (UniqueName: "kubernetes.io/secret/526cc68b-311f-4bf7-98ef-008d1bdafa36-default-token-cvtkw") pod "526cc68b-311f-4bf7-98ef-008d1bdafa36" (UID: "526cc68b-311f-4bf7-98ef-008d1bdafa36")
    Jun 12 22:16:41 minikube kubelet[2565]: I0612 22:16:41.748189 2565 operation_generator.go:797] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/526cc68b-311f-4bf7-98ef-008d1bdafa36-default-token-cvtkw" (OuterVolumeSpecName: "default-token-cvtkw") pod "526cc68b-311f-4bf7-98ef-008d1bdafa36" (UID: "526cc68b-311f-4bf7-98ef-008d1bdafa36"). InnerVolumeSpecName "default-token-cvtkw". PluginName "kubernetes.io/secret", VolumeGidValue ""
    Jun 12 22:16:41 minikube kubelet[2565]: I0612 22:16:41.842947 2565 reconciler.go:319] Volume detached for volume "default-token-cvtkw" (UniqueName: "kubernetes.io/secret/526cc68b-311f-4bf7-98ef-008d1bdafa36-default-token-cvtkw") on node "minikube" DevicePath ""
    Jun 12 22:16:42 minikube kubelet[2565]: W0612 22:16:42.072494 2565 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/526cc68b-311f-4bf7-98ef-008d1bdafa36/volumes" does not exist
    Jun 12 22:16:42 minikube kubelet[2565]: W0612 22:16:42.591220 2565 pod_container_deletor.go:79] Container "c765953fb20df9511007d0b062b29a07793bcbd69ff82f033ceb58fdddc942a5" not found in pod's containers
    Jun 12 22:17:10 minikube kubelet[2565]: I0612 22:17:10.400708 2565 scope.go:95] [topologymanager] RemoveContainer - Container ID: 42a2fdde8c6686d35ba40bebf98961d2460de1edd3c0775945a2d2b473995d85
    Jun 12 22:23:50 minikube kubelet[2565]: I0612 22:23:50.141574 2565 topology_manager.go:187] [topologymanager] Topology Admit Handler
    Jun 12 22:23:50 minikube kubelet[2565]: I0612 22:23:50.291756 2565 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-cvtkw" (UniqueName: "kubernetes.io/secret/46735029-d9b3-4a13-af68-4168b570b317-default-token-cvtkw") pod "busybox" (UID: "46735029-d9b3-4a13-af68-4168b570b317")
    Jun 12 22:23:51 minikube kubelet[2565]: W0612 22:23:51.104887 2565 pod_container_deletor.go:79] Container "5097e60d2f5a55cf133bc9f1fb752817b6f8ff8299cdb0763d7650289f06ba5d" not found in pod's containers
    Jun 12 22:23:51 minikube kubelet[2565]: W0612 22:23:51.105503 2565 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/busybox through plugin: invalid network status for
    Jun 12 22:23:52 minikube kubelet[2565]: W0612 22:23:52.128694 2565 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/busybox through plugin: invalid network status for
    Jun 12 22:23:53 minikube kubelet[2565]: W0612 22:23:53.145307 2565 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/busybox through plugin: invalid network status for
    Jun 12 22:23:53 minikube kubelet[2565]: I0612 22:23:53.151878 2565 scope.go:95] [topologymanager] RemoveContainer - Container ID: fd5a9796d33a9ae99b585b01a545112d5dc1e856a705b56e8815e484c40c23d5
    Jun 12 22:23:53 minikube kubelet[2565]: I0612 22:23:53.300953 2565 reconciler.go:196] operationExecutor.UnmountVolume started for volume "default-token-cvtkw" (UniqueName: "kubernetes.io/secret/46735029-d9b3-4a13-af68-4168b570b317-default-token-cvtkw") pod "46735029-d9b3-4a13-af68-4168b570b317" (UID: "46735029-d9b3-4a13-af68-4168b570b317")
    Jun 12 22:23:53 minikube kubelet[2565]: I0612 22:23:53.303798 2565 operation_generator.go:797] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46735029-d9b3-4a13-af68-4168b570b317-default-token-cvtkw" (OuterVolumeSpecName: "default-token-cvtkw") pod "46735029-d9b3-4a13-af68-4168b570b317" (UID: "46735029-d9b3-4a13-af68-4168b570b317"). InnerVolumeSpecName "default-token-cvtkw". PluginName "kubernetes.io/secret", VolumeGidValue ""
    Jun 12 22:23:53 minikube kubelet[2565]: I0612 22:23:53.401330 2565 reconciler.go:319] Volume detached for volume "default-token-cvtkw" (UniqueName: "kubernetes.io/secret/46735029-d9b3-4a13-af68-4168b570b317-default-token-cvtkw") on node "minikube" DevicePath ""
    Jun 12 22:23:54 minikube kubelet[2565]: W0612 22:23:54.071557 2565 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/46735029-d9b3-4a13-af68-4168b570b317/volumes" does not exist
    Jun 12 22:23:54 minikube kubelet[2565]: W0612 22:23:54.172782 2565 pod_container_deletor.go:79] Container "5097e60d2f5a55cf133bc9f1fb752817b6f8ff8299cdb0763d7650289f06ba5d" not found in pod's containers
    Jun 12 22:24:10 minikube kubelet[2565]: I0612 22:24:10.569026 2565 scope.go:95] [topologymanager] RemoveContainer - Container ID: fd5a9796d33a9ae99b585b01a545112d5dc1e856a705b56e8815e484c40c23d5

  • ==> storage-provisioner [fc4f50e60692] <==

  • I0612 22:12:27.560836 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
    I0612 22:12:27.573720 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
    I0612 22:12:27.573760 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
    I0612 22:12:27.583017 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
    I0612 22:12:27.583071 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8d394ff4-de43-4213-9917-1d4e872e89fc", APIVersion:"v1", ResourceVersion:"467", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_9487ce1c-c4e7-40cb-bc3d-8a773325a3f6 became leader
    I0612 22:12:27.583185 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_minikube_9487ce1c-c4e7-40cb-bc3d-8a773325a3f6!
    I0612 22:12:27.683451 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_minikube_9487ce1c-c4e7-40cb-bc3d-8a773325a3f6!

Note: Output of commands other than minikube start is placed below each command.

@medyagh
Member

medyagh commented Jun 14, 2021

@bzvestey do you have this problem only on Arch Linux?

@afbjorklund added the priority/awaiting-more-evidence label (Lowest priority. Possibly useful, but not yet enough support to actually get it done.) on Jun 14, 2021
@bzvestey
Author

bzvestey commented Jun 14, 2021

For the information above I was specifically using Manjaro (a downstream of Arch), in case that helps. I tested three other configurations today; here are the results:

  • Windows 10/WSL 2 (Kali Linux): Unable to reproduce
  • macOS Big Sur: Unable to reproduce
  • Ubuntu 18.04 headless server: Reproduced the issue; logs and command output for this box are below.
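
In case it helps narrow this down, here is a rough way to check which name server the minikube node and CoreDNS actually ended up with (a diagnostic sketch only, assuming the default docker driver and the stock CoreDNS config; the pod name dnscheck is just a placeholder):

minikube ssh -- cat /etc/resolv.conf                    # name servers the minikube node itself uses
kubectl -n kube-system get configmap coredns -o yaml    # CoreDNS forwards to the node's /etc/resolv.conf by default
kubectl run dnscheck --image=busybox --rm -ti --restart=Never -- nslookup archive.ubuntu.com   # what a pod actually resolves

If those point at the network's externally resolvable domain/IP rather than the router's internal resolver, that would line up with the behavior above.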

Minikube logs

  • ==> Audit <==

  • |--------------|------|----------|---------|---------|-------------------------------|-------------------------------|
    | Command      | Args | Profile  | User    | Version | Start Time                    | End Time                      |
    |--------------|------|----------|---------|---------|-------------------------------|-------------------------------|
    | update-check |      | minikube | bvestey | v1.20.0 | Mon, 24 May 2021 11:24:38 PDT | Mon, 24 May 2021 11:24:38 PDT |
    | update-check |      | minikube | bvestey | v1.20.0 | Thu, 10 Jun 2021 15:24:32 PDT | Thu, 10 Jun 2021 15:24:32 PDT |
    | start        |      | minikube | bvestey | v1.21.0 | Mon, 14 Jun 2021 15:21:14 PDT | Mon, 14 Jun 2021 15:22:46 PDT |
    | ssh          |      | minikube | bvestey | v1.21.0 | Mon, 14 Jun 2021 15:22:50 PDT | Mon, 14 Jun 2021 15:22:58 PDT |
    | help         |      | minikube | bvestey | v1.21.0 | Mon, 14 Jun 2021 15:23:50 PDT | Mon, 14 Jun 2021 15:23:50 PDT |
    | help         | logs | minikube | bvestey | v1.21.0 | Mon, 14 Jun 2021 15:24:01 PDT | Mon, 14 Jun 2021 15:24:01 PDT |
    |--------------|------|----------|---------|---------|-------------------------------|-------------------------------|
  • ==> Last Start <==
  • Log file created at: 2021/06/14 15:21:14
    Running on machine: bvestey-dev
    Binary: Built with gc go1.16.4 for linux/amd64
    Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
    I0614 15:21:14.368965 7997 out.go:291] Setting OutFile to fd 1 ...
    I0614 15:21:14.369086 7997 out.go:343] isatty.IsTerminal(1) = true
    I0614 15:21:14.369090 7997 out.go:304] Setting ErrFile to fd 2...
    I0614 15:21:14.369096 7997 out.go:343] isatty.IsTerminal(2) = true
    I0614 15:21:14.369256 7997 root.go:316] Updating PATH: /home/bvestey/.minikube/bin
    W0614 15:21:14.369498 7997 root.go:291] Error reading config file at /home/bvestey/.minikube/config/config.json: open /home/bvestey/.minikube/config/config.json: no such file or directory
    I0614 15:21:14.369685 7997 out.go:298] Setting JSON to false
    I0614 15:21:14.391198 7997 start.go:111] hostinfo: {"hostname":"bvestey-dev.corp.maana.io","uptime":62,"bootTime":1623709212,"procs":260,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"18.04","kernelVersion":"4.15.0-62-generic","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"host","hostId":"f8d80b7f-5cd2-42d8-a78c-0c36f9995cfb"}
    I0614 15:21:14.391289 7997 start.go:121] virtualization: kvm host
    I0614 15:21:14.411149 7997 out.go:170] 😄 minikube v1.21.0 on Ubuntu 18.04
    I0614 15:21:14.411264 7997 notify.go:169] Checking for updates...
    I0614 15:21:14.411588 7997 driver.go:335] Setting default libvirt URI to qemu:///system
    I0614 15:21:14.411620 7997 global.go:111] Querying for installed drivers using PATH=/home/bvestey/.minikube/bin:/home/bvestey/.cargo/bin:/home/bvestey/.cargo/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/home/bvestey/dev/go/bin:/home/bvestey/bin:/home/bvestey/bin/go/bin
    I0614 15:21:14.411762 7997 global.go:119] vmware default: true priority: 7, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "docker-machine-driver-vmware": executable file not found in $PATH Reason: Fix:Install docker-machine-driver-vmware Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/vmware/}
    I0614 15:21:15.080954 7997 docker.go:132] docker version: linux-19.03.2
    I0614 15:21:15.081056 7997 cli_runner.go:115] Run: docker system info --format "{{json .}}"
    I0614 15:21:15.522131 7997 info.go:261] docker info: {ID:JPCH:QJE3:NMF6:BL6U:MYOE:X3H5:H3OQ:FGHM:XWLR:VCKB:Q5JP:CESP Containers:20 ContainersRunning:19 ContainersPaused:0 ContainersStopped:1 Images:455 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:216 OomKillDisable:true NGoroutines:390 SystemTime:2021-06-14 15:21:15.196837248 -0700 PDT LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:19 KernelVersion:4.15.0-62-generic OperatingSystem:Ubuntu 18.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:67474944000 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:bvestey-dev.corp.maana.io Labels:[] ExperimentalBuild:false ServerVersion:19.03.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID:kd8zqztymk3jrdim0yktje0hv NodeAddr:192.168.13.84 LocalNodeState:active ControlAvailable:true Error: RemoteManagers:[map[Addr:192.168.13.84:2377 NodeID:kd8zqztymk3jrdim0yktje0hv]]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:894b81a4b802e4eb2a91d1ce216b8817763c29fb Expected:894b81a4b802e4eb2a91d1ce216b8817763c29fb} RuncCommit:{ID:425e105d5a03fabd737a126ad93d62a9eeede87f Expected:425e105d5a03fabd737a126ad93d62a9eeede87f} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:}}
    I0614 15:21:15.522296 7997 docker.go:244] overlay module found
    I0614 15:21:15.522317 7997 global.go:119] docker default: true priority: 9, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:}
    I0614 15:21:15.522519 7997 global.go:119] kvm2 default: true priority: 8, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "virsh": executable file not found in $PATH Reason: Fix:Install libvirt Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/}
    I0614 15:21:15.552068 7997 global.go:119] none default: false priority: 4, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:}
    I0614 15:21:15.552181 7997 global.go:119] podman default: true priority: 7, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "podman": executable file not found in $PATH Reason: Fix:Install Podman Doc:https://minikube.sigs.k8s.io/docs/drivers/podman/}
    I0614 15:21:15.552195 7997 global.go:119] ssh default: false priority: 4, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:}
    I0614 15:21:15.552262 7997 global.go:119] virtualbox default: true priority: 6, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:unable to find VBoxManage in $PATH Reason: Fix:Install VirtualBox Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/virtualbox/}
    I0614 15:21:15.552277 7997 driver.go:270] not recommending "none" due to default: false
    I0614 15:21:15.552283 7997 driver.go:270] not recommending "ssh" due to default: false
    I0614 15:21:15.552296 7997 driver.go:305] Picked: docker
    I0614 15:21:15.552303 7997 driver.go:306] Alternatives: [none ssh]
    I0614 15:21:15.552308 7997 driver.go:307] Rejects: [vmware kvm2 podman virtualbox]
    I0614 15:21:15.578711 7997 out.go:170] ✨ Automatically selected the docker driver. Other choices: none, ssh
    I0614 15:21:15.578743 7997 start.go:279] selected driver: docker
    I0614 15:21:15.578749 7997 start.go:752] validating driver "docker" against
    I0614 15:21:15.578772 7997 start.go:763] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:}
    I0614 15:21:15.578865 7997 cli_runner.go:115] Run: docker system info --format "{{json .}}"
    I0614 15:21:15.774462 7997 info.go:261] docker info: {ID:JPCH:QJE3:NMF6:BL6U:MYOE:X3H5:H3OQ:FGHM:XWLR:VCKB:Q5JP:CESP Containers:20 ContainersRunning:19 ContainersPaused:0 ContainersStopped:1 Images:455 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:216 OomKillDisable:true NGoroutines:388 SystemTime:2021-06-14 15:21:15.645502558 -0700 PDT LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:19 KernelVersion:4.15.0-62-generic OperatingSystem:Ubuntu 18.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:67474944000 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:bvestey-dev.corp.maana.io Labels:[] ExperimentalBuild:false ServerVersion:19.03.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID:kd8zqztymk3jrdim0yktje0hv NodeAddr:192.168.13.84 LocalNodeState:active ControlAvailable:true Error: RemoteManagers:[map[Addr:192.168.13.84:2377 NodeID:kd8zqztymk3jrdim0yktje0hv]]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:894b81a4b802e4eb2a91d1ce216b8817763c29fb Expected:894b81a4b802e4eb2a91d1ce216b8817763c29fb} RuncCommit:{ID:425e105d5a03fabd737a126ad93d62a9eeede87f Expected:425e105d5a03fabd737a126ad93d62a9eeede87f} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:}}
    I0614 15:21:15.774585 7997 start_flags.go:259] no existing cluster config was found, will generate one from the flags
    I0614 15:21:15.775544 7997 start_flags.go:311] Using suggested 16000MB memory alloc based on sys=64349MB, container=64349MB
    I0614 15:21:15.775718 7997 start_flags.go:638] Wait components to verify : map[apiserver:true system_pods:true]
    I0614 15:21:15.775737 7997 cni.go:93] Creating CNI manager for ""
    I0614 15:21:15.775743 7997 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
    I0614 15:21:15.775751 7997 start_flags.go:273] config:
    {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 Memory:16000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.7 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
    I0614 15:21:15.788304 7997 out.go:170] 👍 Starting control plane node minikube in cluster minikube
    I0614 15:21:15.788354 7997 cache.go:115] Beginning downloading kic base image for docker with docker
    I0614 15:21:15.797910 7997 out.go:170] 🚜 Pulling base image ...
    I0614 15:21:15.797951 7997 preload.go:110] Checking if preload exists for k8s version v1.20.7 and runtime docker
    I0614 15:21:15.798025 7997 cache.go:134] Downloading gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 to local cache
    I0614 15:21:15.798251 7997 image.go:58] Checking for gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 in local cache directory
    I0614 15:21:15.798884 7997 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 to local cache
    I0614 15:21:15.858785 7997 preload.go:145] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.20.7-docker-overlay2-amd64.tar.lz4
    I0614 15:21:15.858805 7997 cache.go:54] Caching tarball of preloaded images
    I0614 15:21:15.858939 7997 preload.go:110] Checking if preload exists for k8s version v1.20.7 and runtime docker
    I0614 15:21:15.869801 7997 out.go:170] 💾 Downloading Kubernetes v1.20.7 preload ...
    I0614 15:21:15.869830 7997 preload.go:230] getting checksum for preloaded-images-k8s-v11-v1.20.7-docker-overlay2-amd64.tar.lz4 ...
    I0614 15:21:15.952244 7997 download.go:86] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.20.7-docker-overlay2-amd64.tar.lz4?checksum=md5:f41702d59ddd4fa1749fa672343212b9 -> /home/bvestey/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.7-docker-overlay2-amd64.tar.lz4
    I0614 15:21:24.473672 7997 preload.go:240] saving checksum for preloaded-images-k8s-v11-v1.20.7-docker-overlay2-amd64.tar.lz4 ...
    I0614 15:21:24.473730 7997 preload.go:247] verifying checksumm of /home/bvestey/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.7-docker-overlay2-amd64.tar.lz4 ...
    I0614 15:21:25.367830 7997 cache.go:57] Finished verifying existence of preloaded tar for v1.20.7 on docker
    I0614 15:21:25.368073 7997 profile.go:148] Saving config to /home/bvestey/.minikube/profiles/minikube/config.json ...
    I0614 15:21:25.368090 7997 lock.go:36] WriteFile acquiring /home/bvestey/.minikube/profiles/minikube/config.json: {Name:mkbbf96c0a32febac8460f57e4c333eec76f5856 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
    I0614 15:21:26.201373 7997 cache.go:137] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 as a tarball
    I0614 15:21:26.201385 7997 image.go:74] Checking for gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 in local docker daemon
    I0614 15:21:26.354075 7997 cache.go:156] Loading gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 from local cache
    I0614 15:21:43.415349 7997 cache.go:159] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 from cached tarball
    I0614 15:21:43.415369 7997 cache.go:202] Successfully downloaded all kic artifacts
    I0614 15:21:43.415405 7997 start.go:313] acquiring machines lock for minikube: {Name:mkc54d471379ae113085fbe60acba66a1c6e1b0a Clock:{} Delay:500ms Timeout:10m0s Cancel:}
    I0614 15:21:43.415525 7997 start.go:317] acquired machines lock for "minikube" in 104.719µs
    I0614 15:21:43.418792 7997 start.go:89] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 Memory:16000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.7 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.7 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.7 ControlPlane:true Worker:true}
    I0614 15:21:43.418856 7997 start.go:126] createHost starting for "" (driver="docker")
    I0614 15:21:43.425907 7997 out.go:197] 🔥 Creating docker container (CPUs=2, Memory=16000MB) ...
    I0614 15:21:43.426186 7997 start.go:160] libmachine.API.Create for "minikube" (driver="docker")
    I0614 15:21:43.426221 7997 client.go:168] LocalClient.Create starting
    I0614 15:21:43.434209 7997 main.go:128] libmachine: Creating CA: /home/bvestey/.minikube/certs/ca.pem
    I0614 15:21:43.510736 7997 main.go:128] libmachine: Creating client certificate: /home/bvestey/.minikube/certs/cert.pem
    I0614 15:21:43.606727 7997 cli_runner.go:115] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
    W0614 15:21:43.651527 7997 cli_runner.go:162] docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
    I0614 15:21:43.651584 7997 network_create.go:255] running [docker network inspect minikube] to gather additional debugging logs...
    I0614 15:21:43.651598 7997 cli_runner.go:115] Run: docker network inspect minikube
    W0614 15:21:43.696491 7997 cli_runner.go:162] docker network inspect minikube returned with exit code 1
    I0614 15:21:43.696509 7997 network_create.go:258] error running [docker network inspect minikube]: docker network inspect minikube: exit status 1
    stdout:
    []

stderr:
Error: No such network: minikube
I0614 15:21:43.696519 7997 network_create.go:260] output of [docker network inspect minikube]: -- stdout --
[]

-- /stdout --
** stderr **
Error: No such network: minikube

** /stderr **
I0614 15:21:43.696567 7997 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0614 15:21:43.747823 7997 network.go:263] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0001ba7c0] misses:0}
I0614 15:21:43.747865 7997 network.go:210] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0614 15:21:43.747881 7997 network_create.go:106] attempt to create docker network minikube 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0614 15:21:43.747943 7997 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true minikube
I0614 15:21:43.835449 7997 network_create.go:90] docker network minikube 192.168.49.0/24 created
I0614 15:21:43.835466 7997 kic.go:106] calculated static IP "192.168.49.2" for the "minikube" container
I0614 15:21:43.835559 7997 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
I0614 15:21:43.893567 7997 cli_runner.go:115] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I0614 15:21:43.932954 7997 oci.go:102] Successfully created a docker volume minikube
I0614 15:21:43.933032 7997 cli_runner.go:115] Run: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 -d /var/lib
I0614 15:21:51.588211 7997 cli_runner.go:168] Completed: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 -d /var/lib: (7.655134292s)
I0614 15:21:51.588230 7997 oci.go:106] Successfully prepared a docker volume minikube
W0614 15:21:51.588261 7997 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
W0614 15:21:51.588266 7997 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
I0614 15:21:51.588304 7997 preload.go:110] Checking if preload exists for k8s version v1.20.7 and runtime docker
I0614 15:21:51.588323 7997 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
I0614 15:21:51.588331 7997 kic.go:179] Starting extracting preloaded images to volume ...
I0614 15:21:51.588485 7997 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/bvestey/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.7-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 -I lz4 -xf /preloaded.tar -C /extractDir
I0614 15:21:51.729008 7997 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45
I0614 15:21:53.010948 7997 cli_runner.go:168] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45: (1.281865365s)
I0614 15:21:53.011039 7997 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Running}}
I0614 15:21:53.054099 7997 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0614 15:21:53.097880 7997 cli_runner.go:115] Run: docker exec minikube stat /var/lib/dpkg/alternatives/iptables
I0614 15:21:53.326478 7997 oci.go:278] the created container "minikube" has a running status.
I0614 15:21:53.326497 7997 kic.go:210] Creating ssh key for kic: /home/bvestey/.minikube/machines/minikube/id_rsa...
I0614 15:21:53.671607 7997 kic_runner.go:188] docker (temp): /home/bvestey/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0614 15:21:59.241339 7997 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0614 15:21:59.289523 7997 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0614 15:21:59.289535 7997 kic_runner.go:115] Args: [docker exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys]
I0614 15:22:00.776383 7997 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/bvestey/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.7-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 -I lz4 -xf /preloaded.tar -C /extractDir: (9.187844123s)
I0614 15:22:00.776409 7997 kic.go:188] duration metric: took 9.188076 seconds to extract preloaded images to volume
I0614 15:22:00.818696 7997 kic_runner.go:124] Done: [docker exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys]: (1.529134698s)
I0614 15:22:00.818785 7997 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0614 15:22:00.870967 7997 machine.go:88] provisioning docker machine ...
I0614 15:22:00.871016 7997 ubuntu.go:169] provisioning hostname "minikube"
I0614 15:22:00.871108 7997 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0614 15:22:00.919329 7997 main.go:128] libmachine: Using SSH client type: native
I0614 15:22:00.919629 7997 main.go:128] libmachine: &{{{ 0 [] [] []} docker [0x802f80] 0x802f40 [] 0s} 127.0.0.1 32772 }
I0614 15:22:00.919647 7997 main.go:128] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0614 15:22:01.067980 7997 main.go:128] libmachine: SSH cmd err, output: : minikube

I0614 15:22:01.068060 7997 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0614 15:22:01.126041 7997 main.go:128] libmachine: Using SSH client type: native
I0614 15:22:01.126265 7997 main.go:128] libmachine: &{{{ 0 [] [] []} docker [0x802f80] 0x802f40 [] 0s} 127.0.0.1 32772 }
I0614 15:22:01.126285 7997 main.go:128] libmachine: About to run SSH command:

	if ! grep -xq '.*\sminikube' /etc/hosts; then
		if grep -xq '127.0.1.1\s.*' /etc/hosts; then
			sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
		else 
			echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; 
		fi
	fi

I0614 15:22:01.267891 7997 main.go:128] libmachine: SSH cmd err, output: :
I0614 15:22:01.267915 7997 ubuntu.go:175] set auth options {CertDir:/home/bvestey/.minikube CaCertPath:/home/bvestey/.minikube/certs/ca.pem CaPrivateKeyPath:/home/bvestey/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/bvestey/.minikube/machines/server.pem ServerKeyPath:/home/bvestey/.minikube/machines/server-key.pem ClientKeyPath:/home/bvestey/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/bvestey/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/bvestey/.minikube}
I0614 15:22:01.267939 7997 ubuntu.go:177] setting up certificates
I0614 15:22:01.267950 7997 provision.go:83] configureAuth start
I0614 15:22:01.268057 7997 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0614 15:22:01.322600 7997 provision.go:137] copyHostCerts
I0614 15:22:01.322672 7997 exec_runner.go:152] cp: /home/bvestey/.minikube/certs/ca.pem --> /home/bvestey/.minikube/ca.pem (1078 bytes)
I0614 15:22:01.322782 7997 exec_runner.go:152] cp: /home/bvestey/.minikube/certs/cert.pem --> /home/bvestey/.minikube/cert.pem (1123 bytes)
I0614 15:22:01.322843 7997 exec_runner.go:152] cp: /home/bvestey/.minikube/certs/key.pem --> /home/bvestey/.minikube/key.pem (1679 bytes)
I0614 15:22:01.322886 7997 provision.go:111] generating server cert: /home/bvestey/.minikube/machines/server.pem ca-key=/home/bvestey/.minikube/certs/ca.pem private-key=/home/bvestey/.minikube/certs/ca-key.pem org=bvestey.minikube san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube minikube]
I0614 15:22:01.730063 7997 provision.go:171] copyRemoteCerts
I0614 15:22:01.730102 7997 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0614 15:22:01.730136 7997 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0614 15:22:01.773980 7997 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/bvestey/.minikube/machines/minikube/id_rsa Username:docker}
I0614 15:22:01.878353 7997 ssh_runner.go:316] scp /home/bvestey/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0614 15:22:01.909772 7997 ssh_runner.go:316] scp /home/bvestey/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
I0614 15:22:01.934798 7997 ssh_runner.go:316] scp /home/bvestey/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0614 15:22:01.962210 7997 provision.go:86] duration metric: configureAuth took 694.246876ms
I0614 15:22:01.962228 7997 ubuntu.go:193] setting minikube options for container-runtime
I0614 15:22:01.962499 7997 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0614 15:22:02.007236 7997 main.go:128] libmachine: Using SSH client type: native
I0614 15:22:02.007467 7997 main.go:128] libmachine: &{{{ 0 [] [] []} docker [0x802f80] 0x802f40 [] 0s} 127.0.0.1 32772 }
I0614 15:22:02.007481 7997 main.go:128] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0614 15:22:02.145902 7997 main.go:128] libmachine: SSH cmd err, output: : overlay

I0614 15:22:02.145924 7997 ubuntu.go:71] root file system type: overlay
I0614 15:22:02.146160 7997 provision.go:308] Updating docker unit: /lib/systemd/system/docker.service ...
I0614 15:22:02.146221 7997 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0614 15:22:02.194723 7997 main.go:128] libmachine: Using SSH client type: native
I0614 15:22:02.194973 7997 main.go:128] libmachine: &{{{ 0 [] [] []} docker [0x802f80] 0x802f40 [] 0s} 127.0.0.1 32772 }
I0614 15:22:02.195099 7997 main.go:128] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0614 15:22:02.338797 7997 main.go:128] libmachine: SSH cmd err, output: : [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0614 15:22:02.338923 7997 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0614 15:22:02.386835 7997 main.go:128] libmachine: Using SSH client type: native
I0614 15:22:02.387100 7997 main.go:128] libmachine: &{{{ 0 [] [] []} docker [0x802f80] 0x802f40 [] 0s} 127.0.0.1 32772 }
I0614 15:22:02.387133 7997 main.go:128] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0614 15:22:03.352908 7997 main.go:128] libmachine: SSH cmd err, output: : --- /lib/systemd/system/docker.service 2021-06-02 11:54:50.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2021-06-14 22:22:02.332601110 +0000
@@ -1,30 +1,32 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
+BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60

[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always

-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure

-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.

@@ -32,16 +34,16 @@
LimitNPROC=infinity
LimitCORE=infinity

-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500

[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker

I0614 15:22:03.352931 7997 machine.go:91] provisioned docker machine in 2.481951764s
I0614 15:22:03.352944 7997 client.go:171] LocalClient.Create took 19.926715712s
I0614 15:22:03.352953 7997 start.go:168] duration metric: libmachine.API.Create for "minikube" took 19.926768014s
I0614 15:22:03.352960 7997 start.go:267] post-start starting for "minikube" (driver="docker")
I0614 15:22:03.352965 7997 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0614 15:22:03.353023 7997 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0614 15:22:03.353069 7997 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0614 15:22:03.398834 7997 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/bvestey/.minikube/machines/minikube/id_rsa Username:docker}
I0614 15:22:03.501493 7997 ssh_runner.go:149] Run: cat /etc/os-release
I0614 15:22:03.504159 7997 main.go:128] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0614 15:22:03.504171 7997 main.go:128] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0614 15:22:03.504179 7997 main.go:128] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0614 15:22:03.504183 7997 info.go:137] Remote host: Ubuntu 20.04.2 LTS
I0614 15:22:03.504189 7997 filesync.go:126] Scanning /home/bvestey/.minikube/addons for local assets ...
I0614 15:22:03.507019 7997 filesync.go:126] Scanning /home/bvestey/.minikube/files for local assets ...
I0614 15:22:03.507219 7997 start.go:270] post-start completed in 154.253043ms
I0614 15:22:03.507468 7997 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0614 15:22:03.550540 7997 profile.go:148] Saving config to /home/bvestey/.minikube/profiles/minikube/config.json ...
I0614 15:22:03.550784 7997 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0614 15:22:03.550815 7997 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0614 15:22:03.592281 7997 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/bvestey/.minikube/machines/minikube/id_rsa Username:docker}
I0614 15:22:03.684723 7997 start.go:129] duration metric: createHost completed in 20.265846432s
I0614 15:22:03.684743 7997 start.go:80] releasing machines lock for "minikube", held for 20.26920852s
I0614 15:22:03.684858 7997 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0614 15:22:03.728308 7997 ssh_runner.go:149] Run: systemctl --version
I0614 15:22:03.728359 7997 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
I0614 15:22:03.728394 7997 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0614 15:22:03.728451 7997 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0614 15:22:03.789914 7997 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/bvestey/.minikube/machines/minikube/id_rsa Username:docker}
I0614 15:22:03.802146 7997 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/bvestey/.minikube/machines/minikube/id_rsa Username:docker}
I0614 15:22:03.888606 7997 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
I0614 15:22:03.989001 7997 ssh_runner.go:149] Run: sudo systemctl cat docker.service
I0614 15:22:04.007576 7997 cruntime.go:225] skipping containerd shutdown because we are bound to it
I0614 15:22:04.007632 7997 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
I0614 15:22:04.021904 7997 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0614 15:22:04.040834 7997 ssh_runner.go:149] Run: sudo systemctl unmask docker.service
I0614 15:22:04.137664 7997 ssh_runner.go:149] Run: sudo systemctl enable docker.socket
I0614 15:22:04.229899 7997 ssh_runner.go:149] Run: sudo systemctl cat docker.service
I0614 15:22:04.243651 7997 ssh_runner.go:149] Run: sudo systemctl daemon-reload
I0614 15:22:04.338236 7997 ssh_runner.go:149] Run: sudo systemctl start docker
I0614 15:22:04.351729 7997 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
I0614 15:22:04.409838 7997 out.go:197] 🐳 Preparing Kubernetes v1.20.7 on Docker 20.10.7 ...
I0614 15:22:04.409936 7997 cli_runner.go:115] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0614 15:22:04.456631 7997 ssh_runner.go:149] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I0614 15:22:04.460151 7997 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0614 15:22:04.473443 7997 preload.go:110] Checking if preload exists for k8s version v1.20.7 and runtime docker
I0614 15:22:04.473597 7997 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
I0614 15:22:04.523768 7997 docker.go:535] Got preloaded images: -- stdout --
<none>:<none>
<none>:<none>
<none>:<none>
<none>:<none>
<none>:<none>
<none>:<none>
<none>:<none>
<none>:<none>
<none>:<none>
<none>:<none>

-- /stdout --
I0614 15:22:04.523788 7997 docker.go:541] k8s.gcr.io/kube-apiserver:v1.20.7 wasn't preloaded
I0614 15:22:04.523888 7997 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0614 15:22:04.532246 7997 ssh_runner.go:149] Run: which lz4
I0614 15:22:04.536509 7997 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
I0614 15:22:04.539897 7997 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/preloaded.tar.lz4': No such file or directory
I0614 15:22:04.539910 7997 ssh_runner.go:316] scp /home/bvestey/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.7-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (516105449 bytes)
I0614 15:22:05.492421 7997 docker.go:500] Took 0.955956 seconds to copy over tarball
I0614 15:22:05.492484 7997 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
I0614 15:22:08.049434 7997 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.556920718s)
I0614 15:22:08.049495 7997 ssh_runner.go:100] rm: /preloaded.tar.lz4
I0614 15:22:08.122604 7997 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0614 15:22:08.129954 7997 ssh_runner.go:316] scp memory --> /var/lib/docker/image/overlay2/repositories.json (3125 bytes)
I0614 15:22:08.146843 7997 ssh_runner.go:149] Run: sudo systemctl daemon-reload
I0614 15:22:08.233110 7997 ssh_runner.go:149] Run: sudo systemctl restart docker
I0614 15:22:13.151735 7997 ssh_runner.go:189] Completed: sudo systemctl restart docker: (4.91858551s)
I0614 15:22:13.151888 7997 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
I0614 15:22:13.205739 7997 docker.go:535] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.20.7
k8s.gcr.io/kube-apiserver:v1.20.7
k8s.gcr.io/kube-controller-manager:v1.20.7
k8s.gcr.io/kube-scheduler:v1.20.7
gcr.io/k8s-minikube/storage-provisioner:v5
kubernetesui/dashboard:v2.1.0
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/pause:3.2

-- /stdout --
I0614 15:22:13.205756 7997 cache_images.go:74] Images are preloaded, skipping loading
I0614 15:22:13.205825 7997 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
I0614 15:22:13.319073 7997 cni.go:93] Creating CNI manager for ""
I0614 15:22:13.319083 7997 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0614 15:22:13.319088 7997 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0614 15:22:13.319098 7997 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.20.7 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0614 15:22:13.319234 7997 kubeadm.go:157] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.7
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%!"(MISSING)
  nodefs.inodesFree: "0%!"(MISSING)
  imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0

I0614 15:22:13.319344 7997 kubeadm.go:909] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.7/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2

[Install]
config:
{KubernetesVersion:v1.20.7 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0614 15:22:13.319397 7997 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.7
I0614 15:22:13.327059 7997 binaries.go:44] Found k8s binaries, skipping transfer
I0614 15:22:13.327140 7997 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0614 15:22:13.335215 7997 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (334 bytes)
I0614 15:22:13.352917 7997 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0614 15:22:13.371035 7997 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1867 bytes)
I0614 15:22:13.388580 7997 ssh_runner.go:149] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0614 15:22:13.393227 7997 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0614 15:22:13.408379 7997 certs.go:52] Setting up /home/bvestey/.minikube/profiles/minikube for IP: 192.168.49.2
I0614 15:22:13.408430 7997 certs.go:183] generating minikubeCA CA: /home/bvestey/.minikube/ca.key
I0614 15:22:13.582376 7997 crypto.go:157] Writing cert to /home/bvestey/.minikube/ca.crt ...
I0614 15:22:13.582387 7997 lock.go:36] WriteFile acquiring /home/bvestey/.minikube/ca.crt: {Name:mkfe4b9293d69861323f89ec00e91c9c5e7b1bd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0614 15:22:13.582529 7997 crypto.go:165] Writing key to /home/bvestey/.minikube/ca.key ...
I0614 15:22:13.582535 7997 lock.go:36] WriteFile acquiring /home/bvestey/.minikube/ca.key: {Name:mka59574ea7cd63b4d2e72d28d5c441f0d4977d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0614 15:22:13.582615 7997 certs.go:183] generating proxyClientCA CA: /home/bvestey/.minikube/proxy-client-ca.key
I0614 15:22:13.745270 7997 crypto.go:157] Writing cert to /home/bvestey/.minikube/proxy-client-ca.crt ...
I0614 15:22:13.745279 7997 lock.go:36] WriteFile acquiring /home/bvestey/.minikube/proxy-client-ca.crt: {Name:mkb0eb7fcb50f93913ef352acc30f0b963f8a9d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0614 15:22:13.745395 7997 crypto.go:165] Writing key to /home/bvestey/.minikube/proxy-client-ca.key ...
I0614 15:22:13.745400 7997 lock.go:36] WriteFile acquiring /home/bvestey/.minikube/proxy-client-ca.key: {Name:mkc9195d3513fcf1ebd05938c5623af2604a79be Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0614 15:22:13.745510 7997 certs.go:294] generating minikube-user signed cert: /home/bvestey/.minikube/profiles/minikube/client.key
I0614 15:22:13.745518 7997 crypto.go:69] Generating cert /home/bvestey/.minikube/profiles/minikube/client.crt with IP's: []
I0614 15:22:14.006543 7997 crypto.go:157] Writing cert to /home/bvestey/.minikube/profiles/minikube/client.crt ...
I0614 15:22:14.006554 7997 lock.go:36] WriteFile acquiring /home/bvestey/.minikube/profiles/minikube/client.crt: {Name:mk59fd512ed92fded878631d57585e2e25c51f52 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0614 15:22:14.006687 7997 crypto.go:165] Writing key to /home/bvestey/.minikube/profiles/minikube/client.key ...
I0614 15:22:14.006692 7997 lock.go:36] WriteFile acquiring /home/bvestey/.minikube/profiles/minikube/client.key: {Name:mk91c17300ae9f2bdecfdd1509ed7014ddb8f24b Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0614 15:22:14.006767 7997 certs.go:294] generating minikube signed cert: /home/bvestey/.minikube/profiles/minikube/apiserver.key.dd3b5fb2
I0614 15:22:14.006773 7997 crypto.go:69] Generating cert /home/bvestey/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0614 15:22:14.328527 7997 crypto.go:157] Writing cert to /home/bvestey/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 ...
I0614 15:22:14.328538 7997 lock.go:36] WriteFile acquiring /home/bvestey/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2: {Name:mkd393d7f5a90855dd7940cb3df4cb2a6f618692 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0614 15:22:14.328648 7997 crypto.go:165] Writing key to /home/bvestey/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 ...
I0614 15:22:14.328652 7997 lock.go:36] WriteFile acquiring /home/bvestey/.minikube/profiles/minikube/apiserver.key.dd3b5fb2: {Name:mkfd2ef953914c14c10da3322eec0d894603201a Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0614 15:22:14.328716 7997 certs.go:305] copying /home/bvestey/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 -> /home/bvestey/.minikube/profiles/minikube/apiserver.crt
I0614 15:22:14.328760 7997 certs.go:309] copying /home/bvestey/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 -> /home/bvestey/.minikube/profiles/minikube/apiserver.key
I0614 15:22:14.328798 7997 certs.go:294] generating aggregator signed cert: /home/bvestey/.minikube/profiles/minikube/proxy-client.key
I0614 15:22:14.328801 7997 crypto.go:69] Generating cert /home/bvestey/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0614 15:22:14.753396 7997 crypto.go:157] Writing cert to /home/bvestey/.minikube/profiles/minikube/proxy-client.crt ...
I0614 15:22:14.753406 7997 lock.go:36] WriteFile acquiring /home/bvestey/.minikube/profiles/minikube/proxy-client.crt: {Name:mk14c29ef639aad78b8c0147136d67f6cd2cb316 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0614 15:22:14.753535 7997 crypto.go:165] Writing key to /home/bvestey/.minikube/profiles/minikube/proxy-client.key ...
I0614 15:22:14.753540 7997 lock.go:36] WriteFile acquiring /home/bvestey/.minikube/profiles/minikube/proxy-client.key: {Name:mke04b9ec85e690932d715ac7db430d81caf589b Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0614 15:22:14.753691 7997 certs.go:369] found cert: /home/bvestey/.minikube/certs/home/bvestey/.minikube/certs/ca-key.pem (1675 bytes)
I0614 15:22:14.753716 7997 certs.go:369] found cert: /home/bvestey/.minikube/certs/home/bvestey/.minikube/certs/ca.pem (1078 bytes)
I0614 15:22:14.753736 7997 certs.go:369] found cert: /home/bvestey/.minikube/certs/home/bvestey/.minikube/certs/cert.pem (1123 bytes)
I0614 15:22:14.753754 7997 certs.go:369] found cert: /home/bvestey/.minikube/certs/home/bvestey/.minikube/certs/key.pem (1679 bytes)
I0614 15:22:14.754551 7997 ssh_runner.go:316] scp /home/bvestey/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0614 15:22:14.785150 7997 ssh_runner.go:316] scp /home/bvestey/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0614 15:22:14.807490 7997 ssh_runner.go:316] scp /home/bvestey/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0614 15:22:14.854785 7997 ssh_runner.go:316] scp /home/bvestey/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0614 15:22:14.884443 7997 ssh_runner.go:316] scp /home/bvestey/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0614 15:22:14.915068 7997 ssh_runner.go:316] scp /home/bvestey/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0614 15:22:14.943507 7997 ssh_runner.go:316] scp /home/bvestey/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0614 15:22:14.970287 7997 ssh_runner.go:316] scp /home/bvestey/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0614 15:22:14.994146 7997 ssh_runner.go:316] scp /home/bvestey/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0614 15:22:15.024023 7997 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0614 15:22:15.041961 7997 ssh_runner.go:149] Run: openssl version
I0614 15:22:15.049439 7997 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0614 15:22:15.060091 7997 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0614 15:22:15.065897 7997 certs.go:410] hashing: -rw-r--r-- 1 root root 1111 Jun 14 22:22 /usr/share/ca-certificates/minikubeCA.pem
I0614 15:22:15.065931 7997 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0614 15:22:15.074384 7997 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0614 15:22:15.083374 7997 kubeadm.go:390] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 Memory:16000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.7 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.20.7 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
I0614 15:22:15.083500 7997 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*(kube-system) --format={{.ID}}
I0614 15:22:15.132459 7997 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0614 15:22:15.140010 7997 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0614 15:22:15.150789 7997 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
I0614 15:22:15.150848 7997 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0614 15:22:15.159683 7997 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0614 15:22:15.159725 7997 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.7:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0614 15:22:37.694032 7997 out.go:197] ▪ Generating certificates and keys ...
I0614 15:22:37.706301 7997 out.go:197] ▪ Booting up control plane ...
I0614 15:22:37.714490 7997 out.go:197] ▪ Configuring RBAC rules ...
I0614 15:22:37.717084 7997 cni.go:93] Creating CNI manager for ""
I0614 15:22:37.717092 7997 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0614 15:22:37.717121 7997 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0614 15:22:37.717165 7997 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0614 15:22:37.717227 7997 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl label nodes minikube.k8s.io/version=v1.21.0 minikube.k8s.io/commit=76d74191d82c47883dc7e1319ef7cebd3e00ee11 minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2021_06_14T15_22_37_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0614 15:22:37.767241 7997 ops.go:34] apiserver oom_adj: -16
I0614 15:22:37.968890 7997 kubeadm.go:985] duration metric: took 251.764131ms to wait for elevateKubeSystemPrivileges.
I0614 15:22:38.684457 7997 kubeadm.go:392] StartCluster complete in 23.601076823s
I0614 15:22:38.684492 7997 settings.go:142] acquiring lock: {Name:mk8100ebaf37a2610f5bc3a973e1a8ff6863f1e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0614 15:22:38.684672 7997 settings.go:150] Updating kubeconfig: /home/bvestey/.kube/config
I0614 15:22:38.685440 7997 lock.go:36] WriteFile acquiring /home/bvestey/.kube/config: {Name:mk0c537c42244400af1a7a36472ce88877c38b66 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0614 15:22:39.209914 7997 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "minikube" rescaled to 1
I0614 15:22:39.209976 7997 start.go:214] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.20.7 ControlPlane:true Worker:true}
I0614 15:22:39.210015 7997 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.7/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0614 15:22:39.221262 7997 out.go:170] 🔎 Verifying Kubernetes components...
I0614 15:22:39.210104 7997 addons.go:342] enableAddons start: toEnable=map[], additional=[]
I0614 15:22:39.221447 7997 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
I0614 15:22:39.221508 7997 addons.go:59] Setting default-storageclass=true in profile "minikube"
I0614 15:22:39.221448 7997 addons.go:59] Setting storage-provisioner=true in profile "minikube"
I0614 15:22:39.221549 7997 addons.go:135] Setting addon storage-provisioner=true in "minikube"
W0614 15:22:39.221565 7997 addons.go:147] addon storage-provisioner should already be in state true
I0614 15:22:39.221570 7997 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I0614 15:22:39.221624 7997 host.go:66] Checking if "minikube" exists ...
I0614 15:22:39.222251 7997 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0614 15:22:39.222444 7997 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0614 15:22:39.284925 7997 out.go:170] ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0614 15:22:39.280796 7997 addons.go:135] Setting addon default-storageclass=true in "minikube"
W0614 15:22:39.284992 7997 addons.go:147] addon default-storageclass should already be in state true
I0614 15:22:39.285011 7997 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0614 15:22:39.285017 7997 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0614 15:22:39.285024 7997 host.go:66] Checking if "minikube" exists ...
I0614 15:22:39.285062 7997 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0614 15:22:39.285576 7997 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0614 15:22:39.312089 7997 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.7/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . /etc/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.20.7/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0614 15:22:39.315518 7997 api_server.go:50] waiting for apiserver process to appear ...
I0614 15:22:39.315571 7997 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
I0614 15:22:39.331131 7997 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
I0614 15:22:39.331145 7997 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0614 15:22:39.331213 7997 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0614 15:22:39.337552 7997 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/bvestey/.minikube/machines/minikube/id_rsa Username:docker}
I0614 15:22:39.377115 7997 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/bvestey/.minikube/machines/minikube/id_rsa Username:docker}
I0614 15:22:39.474261 7997 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.7/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0614 15:22:39.554176 7997 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.7/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0614 15:22:39.864988 7997 start.go:725] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
I0614 15:22:39.864999 7997 api_server.go:70] duration metric: took 654.977356ms to wait for apiserver process to appear ...
I0614 15:22:39.865012 7997 api_server.go:86] waiting for apiserver healthz status ...
I0614 15:22:39.865021 7997 api_server.go:223] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0614 15:22:39.878327 7997 api_server.go:249] https://192.168.49.2:8443/healthz returned 200:
ok
I0614 15:22:39.879272 7997 api_server.go:139] control plane version: v1.20.7
I0614 15:22:39.879285 7997 api_server.go:129] duration metric: took 14.268382ms to wait for apiserver health ...
I0614 15:22:39.879293 7997 system_pods.go:43] waiting for kube-system pods to appear ...
I0614 15:22:39.888110 7997 system_pods.go:59] 0 kube-system pods found
I0614 15:22:39.888122 7997 retry.go:31] will retry after 263.082536ms: only 0 pod(s) have shown up
I0614 15:22:40.057412 7997 out.go:170] 🌟 Enabled addons: storage-provisioner, default-storageclass
I0614 15:22:40.057450 7997 addons.go:344] enableAddons completed in 847.365101ms
I0614 15:22:40.156198 7997 system_pods.go:59] 1 kube-system pods found
I0614 15:22:40.156235 7997 system_pods.go:61] "storage-provisioner" [d5b86764-709b-4946-b092-53b446b1ab1c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0614 15:22:40.156248 7997 retry.go:31] will retry after 381.329545ms: only 1 pod(s) have shown up
I0614 15:22:40.542439 7997 system_pods.go:59] 1 kube-system pods found
I0614 15:22:40.542485 7997 system_pods.go:61] "storage-provisioner" [d5b86764-709b-4946-b092-53b446b1ab1c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0614 15:22:40.542497 7997 retry.go:31] will retry after 422.765636ms: only 1 pod(s) have shown up
I0614 15:22:40.969080 7997 system_pods.go:59] 1 kube-system pods found
I0614 15:22:40.969106 7997 system_pods.go:61] "storage-provisioner" [d5b86764-709b-4946-b092-53b446b1ab1c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0614 15:22:40.969118 7997 retry.go:31] will retry after 473.074753ms: only 1 pod(s) have shown up
I0614 15:22:41.447289 7997 system_pods.go:59] 1 kube-system pods found
I0614 15:22:41.447328 7997 system_pods.go:61] "storage-provisioner" [d5b86764-709b-4946-b092-53b446b1ab1c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0614 15:22:41.447341 7997 retry.go:31] will retry after 587.352751ms: only 1 pod(s) have shown up
I0614 15:22:42.040163 7997 system_pods.go:59] 1 kube-system pods found
I0614 15:22:42.040192 7997 system_pods.go:61] "storage-provisioner" [d5b86764-709b-4946-b092-53b446b1ab1c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0614 15:22:42.040205 7997 retry.go:31] will retry after 834.206799ms: only 1 pod(s) have shown up
I0614 15:22:42.879960 7997 system_pods.go:59] 1 kube-system pods found
I0614 15:22:42.879989 7997 system_pods.go:61] "storage-provisioner" [d5b86764-709b-4946-b092-53b446b1ab1c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0614 15:22:42.880002 7997 retry.go:31] will retry after 746.553905ms: only 1 pod(s) have shown up
I0614 15:22:43.632807 7997 system_pods.go:59] 1 kube-system pods found
I0614 15:22:43.632835 7997 system_pods.go:61] "storage-provisioner" [d5b86764-709b-4946-b092-53b446b1ab1c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0614 15:22:43.632848 7997 retry.go:31] will retry after 987.362415ms: only 1 pod(s) have shown up
I0614 15:22:44.623885 7997 system_pods.go:59] 1 kube-system pods found
I0614 15:22:44.623914 7997 system_pods.go:61] "storage-provisioner" [d5b86764-709b-4946-b092-53b446b1ab1c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0614 15:22:44.623927 7997 retry.go:31] will retry after 1.189835008s: only 1 pod(s) have shown up
I0614 15:22:45.824617 7997 system_pods.go:59] 5 kube-system pods found
I0614 15:22:45.824640 7997 system_pods.go:61] "etcd-minikube" [52e07c63-06e9-49cd-869f-3da6f9537340] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0614 15:22:45.824644 7997 system_pods.go:61] "kube-apiserver-minikube" [926c4538-82df-46b9-a279-c0cffcda212f] Pending
I0614 15:22:45.824649 7997 system_pods.go:61] "kube-controller-manager-minikube" [0c7e7f1c-35ba-411b-9ab4-3632c7dccf48] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0614 15:22:45.824654 7997 system_pods.go:61] "kube-scheduler-minikube" [51b58079-8e77-4179-837c-c1eb65c7f4ed] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0614 15:22:45.824657 7997 system_pods.go:61] "storage-provisioner" [d5b86764-709b-4946-b092-53b446b1ab1c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0614 15:22:45.824662 7997 system_pods.go:74] duration metric: took 5.945364329s to wait for pod list to return data ...
I0614 15:22:45.824669 7997 kubeadm.go:547] duration metric: took 6.614651448s to wait for : map[apiserver:true system_pods:true] ...
I0614 15:22:45.824681 7997 node_conditions.go:102] verifying NodePressure condition ...
I0614 15:22:45.828552 7997 node_conditions.go:122] node storage ephemeral capacity is 245016792Ki
I0614 15:22:45.828569 7997 node_conditions.go:123] node cpu capacity is 8
I0614 15:22:45.828581 7997 node_conditions.go:105] duration metric: took 3.896063ms to run NodePressure ...
I0614 15:22:45.828588 7997 start.go:219] waiting for startup goroutines ...
I0614 15:22:46.154876 7997 start.go:463] kubectl: 1.21.1, cluster: 1.20.7 (minor skew: 1)
I0614 15:22:46.162112 7997 out.go:170] 🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
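
As a side note on the DNS question itself (not part of the log output above or below): the resolver that the minikube node and its pods actually end up with can be checked with something like the commands below. This is only a diagnostic sketch; it assumes the default docker driver and uses a throwaway busybox pod.

    # resolv.conf inside the minikube node (shows which name server the node was given)
    minikube ssh -- cat /etc/resolv.conf

    # name server a pod actually queries (busybox ships with nslookup)
    kubectl run busybox --image=busybox --rm -ti --restart=Never --command -- nslookup google.com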

  • ==> Docker <==

  • -- Logs begin at Mon 2021-06-14 22:21:53 UTC, end at Mon 2021-06-14 22:24:18 UTC. --
    Jun 14 22:22:00 minikube dockerd[208]: time="2021-06-14T22:22:00.778946151Z" level=info msg="API listen on /run/docker.sock"
    Jun 14 22:22:02 minikube systemd[1]: docker.service: Current command vanished from the unit file, execution of the command list won't be resumed.
    Jun 14 22:22:02 minikube systemd[1]: Stopping Docker Application Container Engine...
    Jun 14 22:22:02 minikube dockerd[208]: time="2021-06-14T22:22:02.944283817Z" level=info msg="Processing signal 'terminated'"
    Jun 14 22:22:02 minikube dockerd[208]: time="2021-06-14T22:22:02.947507116Z" level=info msg="stopping event stream following graceful shutdown" error="" module=libcontainerd namespace=moby
    Jun 14 22:22:02 minikube dockerd[208]: time="2021-06-14T22:22:02.949175770Z" level=info msg="Daemon shutdown complete"
    Jun 14 22:22:02 minikube dockerd[208]: time="2021-06-14T22:22:02.949230926Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
    Jun 14 22:22:02 minikube systemd[1]: docker.service: Succeeded.
    Jun 14 22:22:02 minikube systemd[1]: Stopped Docker Application Container Engine.
    Jun 14 22:22:02 minikube systemd[1]: Starting Docker Application Container Engine...
    Jun 14 22:22:03 minikube dockerd[456]: time="2021-06-14T22:22:03.013130798Z" level=info msg="Starting up"
    Jun 14 22:22:03 minikube dockerd[456]: time="2021-06-14T22:22:03.014873202Z" level=info msg="parsed scheme: "unix"" module=grpc
    Jun 14 22:22:03 minikube dockerd[456]: time="2021-06-14T22:22:03.014913817Z" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
    Jun 14 22:22:03 minikube dockerd[456]: time="2021-06-14T22:22:03.014951396Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
    Jun 14 22:22:03 minikube dockerd[456]: time="2021-06-14T22:22:03.014979294Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
    Jun 14 22:22:03 minikube dockerd[456]: time="2021-06-14T22:22:03.016356341Z" level=info msg="parsed scheme: "unix"" module=grpc
    Jun 14 22:22:03 minikube dockerd[456]: time="2021-06-14T22:22:03.016396850Z" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
    Jun 14 22:22:03 minikube dockerd[456]: time="2021-06-14T22:22:03.016426123Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
    Jun 14 22:22:03 minikube dockerd[456]: time="2021-06-14T22:22:03.016442630Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
    Jun 14 22:22:03 minikube dockerd[456]: time="2021-06-14T22:22:03.066172618Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
    Jun 14 22:22:03 minikube dockerd[456]: time="2021-06-14T22:22:03.079985608Z" level=warning msg="Your kernel does not support swap memory limit"
    Jun 14 22:22:03 minikube dockerd[456]: time="2021-06-14T22:22:03.080006019Z" level=warning msg="Your kernel does not support CPU realtime scheduler"
    Jun 14 22:22:03 minikube dockerd[456]: time="2021-06-14T22:22:03.080147592Z" level=info msg="Loading containers: start."
    Jun 14 22:22:03 minikube dockerd[456]: time="2021-06-14T22:22:03.231164941Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
    Jun 14 22:22:03 minikube dockerd[456]: time="2021-06-14T22:22:03.289804518Z" level=info msg="Loading containers: done."
    Jun 14 22:22:03 minikube dockerd[456]: time="2021-06-14T22:22:03.334101253Z" level=info msg="Docker daemon" commit=b0f5bc3 graphdriver(s)=overlay2 version=20.10.7
    Jun 14 22:22:03 minikube dockerd[456]: time="2021-06-14T22:22:03.334183406Z" level=info msg="Daemon has completed initialization"
    Jun 14 22:22:03 minikube systemd[1]: Started Docker Application Container Engine.
    Jun 14 22:22:03 minikube dockerd[456]: time="2021-06-14T22:22:03.365035095Z" level=info msg="API listen on [::]:2376"
    Jun 14 22:22:03 minikube dockerd[456]: time="2021-06-14T22:22:03.371855124Z" level=info msg="API listen on /var/run/docker.sock"
    Jun 14 22:22:08 minikube systemd[1]: Stopping Docker Application Container Engine...
    Jun 14 22:22:08 minikube dockerd[456]: time="2021-06-14T22:22:08.245865534Z" level=info msg="Processing signal 'terminated'"
    Jun 14 22:22:08 minikube dockerd[456]: time="2021-06-14T22:22:08.247219797Z" level=info msg="stopping event stream following graceful shutdown" error="" module=libcontainerd namespace=moby
    Jun 14 22:22:08 minikube dockerd[456]: time="2021-06-14T22:22:08.248034588Z" level=info msg="Daemon shutdown complete"
    Jun 14 22:22:08 minikube systemd[1]: docker.service: Succeeded.
    Jun 14 22:22:08 minikube systemd[1]: Stopped Docker Application Container Engine.
    Jun 14 22:22:08 minikube systemd[1]: Starting Docker Application Container Engine...
    Jun 14 22:22:08 minikube dockerd[743]: time="2021-06-14T22:22:08.314402036Z" level=info msg="Starting up"
    Jun 14 22:22:08 minikube dockerd[743]: time="2021-06-14T22:22:08.316723046Z" level=info msg="parsed scheme: "unix"" module=grpc
    Jun 14 22:22:08 minikube dockerd[743]: time="2021-06-14T22:22:08.316776165Z" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
    Jun 14 22:22:08 minikube dockerd[743]: time="2021-06-14T22:22:08.316832875Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
    Jun 14 22:22:08 minikube dockerd[743]: time="2021-06-14T22:22:08.316866313Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
    Jun 14 22:22:08 minikube dockerd[743]: time="2021-06-14T22:22:08.318232750Z" level=info msg="parsed scheme: "unix"" module=grpc
    Jun 14 22:22:08 minikube dockerd[743]: time="2021-06-14T22:22:08.318273094Z" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
    Jun 14 22:22:08 minikube dockerd[743]: time="2021-06-14T22:22:08.318312343Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
    Jun 14 22:22:08 minikube dockerd[743]: time="2021-06-14T22:22:08.318342493Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
    Jun 14 22:22:12 minikube dockerd[743]: time="2021-06-14T22:22:12.831778175Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
    Jun 14 22:22:12 minikube dockerd[743]: time="2021-06-14T22:22:12.842069230Z" level=warning msg="Your kernel does not support swap memory limit"
    Jun 14 22:22:12 minikube dockerd[743]: time="2021-06-14T22:22:12.842091549Z" level=warning msg="Your kernel does not support CPU realtime scheduler"
    Jun 14 22:22:12 minikube dockerd[743]: time="2021-06-14T22:22:12.842249816Z" level=info msg="Loading containers: start."
    Jun 14 22:22:13 minikube dockerd[743]: time="2021-06-14T22:22:13.011156002Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
    Jun 14 22:22:13 minikube dockerd[743]: time="2021-06-14T22:22:13.074479579Z" level=info msg="Loading containers: done."
    Jun 14 22:22:13 minikube dockerd[743]: time="2021-06-14T22:22:13.132425221Z" level=info msg="Docker daemon" commit=b0f5bc3 graphdriver(s)=overlay2 version=20.10.7
    Jun 14 22:22:13 minikube dockerd[743]: time="2021-06-14T22:22:13.132497032Z" level=info msg="Daemon has completed initialization"
    Jun 14 22:22:13 minikube systemd[1]: Started Docker Application Container Engine.
    Jun 14 22:22:13 minikube dockerd[743]: time="2021-06-14T22:22:13.164132564Z" level=info msg="API listen on [::]:2376"
    Jun 14 22:22:13 minikube dockerd[743]: time="2021-06-14T22:22:13.172262856Z" level=info msg="API listen on /var/run/docker.sock"
    Jun 14 22:22:57 minikube dockerd[743]: time="2021-06-14T22:22:57.103714348Z" level=warning msg="Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap."
    Jun 14 22:23:16 minikube dockerd[743]: time="2021-06-14T22:23:16.041419895Z" level=info msg="ignoring event" container=bd1660392b019c4531e1bcbde8e137179f62156f3c1bc248583b3f38c16fe03a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
    Jun 14 22:23:16 minikube dockerd[743]: time="2021-06-14T22:23:16.936277441Z" level=info msg="ignoring event" container=9089425e6cf08e49744db381bd4dba6396f67fd0489443a2e65fe18e35187814 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"

  • ==> container status <==

  • CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
    82594b1a56c85 6e38f40d628db About a minute ago Running storage-provisioner 0 f85d359614ec7
    a1be3d17937f7 bfe3a36ebd252 About a minute ago Running coredns 0 f570ed491eff1
    fa17531180228 ff54c88b8ecfa About a minute ago Running kube-proxy 0 61ffffda13c20
    a1f516e6b190e 0369cf4303ffd About a minute ago Running etcd 0 e13ffb7173936
    274168f86e9fd 034671b24f0f1 About a minute ago Running kube-apiserver 0 57ea0fc63ba07
    a0f627a3448b9 38f903b540101 About a minute ago Running kube-scheduler 0 4ea6fb3451898
    53048cf3e6c3f 22d1a2072ec7b About a minute ago Running kube-controller-manager 0 283de5dfa39dc

  • ==> coredns [a1be3d17937f] <==

  • .:53
    [INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
    CoreDNS-1.7.0
    linux/amd64, go1.14.4, f59c03d
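
The coredns log above only shows the version banner. In a stock minikube/kubeadm setup CoreDNS forwards non-cluster names according to its Corefile (typically `forward . /etc/resolv.conf`), so the upstream name server it uses can be confirmed by dumping that ConfigMap; included here only as a suggested check:

    kubectl -n kube-system get configmap coredns -o yaml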

  • ==> describe nodes <==

  • Name: minikube
    Roles: control-plane,master
    Labels: beta.kubernetes.io/arch=amd64
    beta.kubernetes.io/os=linux
    kubernetes.io/arch=amd64
    kubernetes.io/hostname=minikube
    kubernetes.io/os=linux
    minikube.k8s.io/commit=76d74191d82c47883dc7e1319ef7cebd3e00ee11
    minikube.k8s.io/name=minikube
    minikube.k8s.io/updated_at=2021_06_14T15_22_37_0700
    minikube.k8s.io/version=v1.21.0
    node-role.kubernetes.io/control-plane=
    node-role.kubernetes.io/master=
    Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
    node.alpha.kubernetes.io/ttl: 0
    volumes.kubernetes.io/controller-managed-attach-detach: true
    CreationTimestamp: Mon, 14 Jun 2021 22:22:34 +0000
    Taints:
    Unschedulable: false
    Lease:
    HolderIdentity: minikube
    AcquireTime:
    RenewTime: Mon, 14 Jun 2021 22:24:14 +0000
    Conditions:
    Type Status LastHeartbeatTime LastTransitionTime Reason Message


    MemoryPressure False Mon, 14 Jun 2021 22:23:44 +0000 Mon, 14 Jun 2021 22:22:29 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
    DiskPressure False Mon, 14 Jun 2021 22:23:44 +0000 Mon, 14 Jun 2021 22:22:29 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
    PIDPressure False Mon, 14 Jun 2021 22:23:44 +0000 Mon, 14 Jun 2021 22:22:29 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
    Ready True Mon, 14 Jun 2021 22:23:44 +0000 Mon, 14 Jun 2021 22:22:53 +0000 KubeletReady kubelet is posting ready status
    Addresses:
    InternalIP: 192.168.49.2
    Hostname: minikube
    Capacity:
    cpu: 8
    ephemeral-storage: 245016792Ki
    hugepages-1Gi: 0
    hugepages-2Mi: 0
    memory: 65893500Ki
    pods: 110
    Allocatable:
    cpu: 8
    ephemeral-storage: 245016792Ki
    hugepages-1Gi: 0
    hugepages-2Mi: 0
    memory: 65893500Ki
    pods: 110
    System Info:
    Machine ID: b77ec962e3734760b1e756ffc5e83152
    System UUID: c8f91035-7938-4aae-8354-1a06f6988146
    Boot ID: d5220a36-788d-49b4-8c7f-d4308b76b680
    Kernel Version: 4.15.0-62-generic
    OS Image: Ubuntu 20.04.2 LTS
    Operating System: linux
    Architecture: amd64
    Container Runtime Version: docker://20.10.7
    Kubelet Version: v1.20.7
    Kube-Proxy Version: v1.20.7
    PodCIDR: 10.244.0.0/24
    PodCIDRs: 10.244.0.0/24
    Non-terminated Pods: (7 in total)
    Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE


    kube-system coredns-74ff55c5b-wppnz 100m (1%!)(MISSING) 0 (0%!)(MISSING) 70Mi (0%!)(MISSING) 170Mi (0%!)(MISSING) 86s
    kube-system etcd-minikube 100m (1%!)(MISSING) 0 (0%!)(MISSING) 100Mi (0%!)(MISSING) 0 (0%!)(MISSING) 94s
    kube-system kube-apiserver-minikube 250m (3%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 94s
    kube-system kube-controller-manager-minikube 200m (2%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 94s
    kube-system kube-proxy-wg624 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 85s
    kube-system kube-scheduler-minikube 100m (1%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 94s
    kube-system storage-provisioner 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 98s
    Allocated resources:
    (Total limits may be over 100 percent, i.e., overcommitted.)
    Resource Requests Limits


    cpu 750m (9%!)(MISSING) 0 (0%!)(MISSING)
    memory 170Mi (0%!)(MISSING) 170Mi (0%!)(MISSING)
    ephemeral-storage 100Mi (0%!)(MISSING) 0 (0%!)(MISSING)
    hugepages-1Gi 0 (0%!)(MISSING) 0 (0%!)(MISSING)
    hugepages-2Mi 0 (0%!)(MISSING) 0 (0%!)(MISSING)
    Events:
    Type Reason Age From Message


    Normal NodeHasSufficientMemory 111s (x5 over 112s) kubelet Node minikube status is now: NodeHasSufficientMemory
    Normal NodeHasNoDiskPressure 111s (x5 over 112s) kubelet Node minikube status is now: NodeHasNoDiskPressure
    Normal NodeHasSufficientPID 111s (x4 over 112s) kubelet Node minikube status is now: NodeHasSufficientPID
    Normal Starting 94s kubelet Starting kubelet.
    Normal NodeHasSufficientMemory 94s kubelet Node minikube status is now: NodeHasSufficientMemory
    Normal NodeHasNoDiskPressure 94s kubelet Node minikube status is now: NodeHasNoDiskPressure
    Normal NodeHasSufficientPID 94s kubelet Node minikube status is now: NodeHasSufficientPID
    Normal NodeAllocatableEnforced 94s kubelet Updated Node Allocatable limit across pods
    Normal NodeReady 85s kubelet Node minikube status is now: NodeReady
    Normal Starting 84s kube-proxy Starting kube-proxy.

  • ==> dmesg <==

  • [ +0.000006] acpi LNXCPU:5d: Failed to get unique processor _UID (0xff)
    [ +0.000006] acpi LNXCPU:5e: Failed to get unique processor _UID (0xff)
    [ +0.000006] acpi LNXCPU:5f: Failed to get unique processor _UID (0xff)
    [ +0.000007] acpi LNXCPU:60: Failed to get unique processor _UID (0xff)
    [ +0.000006] acpi LNXCPU:61: Failed to get unique processor _UID (0xff)
    [ +0.000006] acpi LNXCPU:62: Failed to get unique processor _UID (0xff)
    [ +0.000007] acpi LNXCPU:63: Failed to get unique processor _UID (0xff)
    [ +0.000006] acpi LNXCPU:64: Failed to get unique processor _UID (0xff)
    [ +0.000006] acpi LNXCPU:65: Failed to get unique processor _UID (0xff)
    [ +0.000006] acpi LNXCPU:66: Failed to get unique processor _UID (0xff)
    [ +0.000007] acpi LNXCPU:67: Failed to get unique processor _UID (0xff)
    [ +0.000006] acpi LNXCPU:68: Failed to get unique processor _UID (0xff)
    [ +0.000006] acpi LNXCPU:69: Failed to get unique processor _UID (0xff)
    [ +0.000006] acpi LNXCPU:6a: Failed to get unique processor _UID (0xff)
    [ +0.000007] acpi LNXCPU:6b: Failed to get unique processor _UID (0xff)
    [ +0.000015] acpi LNXCPU:6c: Failed to get unique processor _UID (0xff)
    [ +0.000007] acpi LNXCPU:6d: Failed to get unique processor _UID (0xff)
    [ +0.000006] acpi LNXCPU:6e: Failed to get unique processor _UID (0xff)
    [ +0.000006] acpi LNXCPU:6f: Failed to get unique processor _UID (0xff)
    [ +0.000006] acpi LNXCPU:70: Failed to get unique processor _UID (0xff)
    [ +0.000008] acpi LNXCPU:71: Failed to get unique processor _UID (0xff)
    [ +0.000007] acpi LNXCPU:72: Failed to get unique processor _UID (0xff)
    [ +0.000006] acpi LNXCPU:73: Failed to get unique processor _UID (0xff)
    [ +0.000006] acpi LNXCPU:74: Failed to get unique processor _UID (0xff)
    [ +0.000007] acpi LNXCPU:75: Failed to get unique processor _UID (0xff)
    [ +0.000006] acpi LNXCPU:76: Failed to get unique processor _UID (0xff)
    [ +0.000006] acpi LNXCPU:77: Failed to get unique processor _UID (0xff)
    [ +0.000007] acpi LNXCPU:78: Failed to get unique processor _UID (0xff)
    [ +0.000006] acpi LNXCPU:79: Failed to get unique processor _UID (0xff)
    [ +0.000006] acpi LNXCPU:7a: Failed to get unique processor _UID (0xff)
    [ +0.000006] acpi LNXCPU:7b: Failed to get unique processor _UID (0xff)
    [ +0.000006] acpi LNXCPU:7c: Failed to get unique processor _UID (0xff)
    [ +0.000007] acpi LNXCPU:7d: Failed to get unique processor _UID (0xff)
    [ +0.000006] acpi LNXCPU:7e: Failed to get unique processor _UID (0xff)
    [ +0.000006] acpi LNXCPU:7f: Failed to get unique processor _UID (0xff)
    [ +0.000006] acpi LNXCPU:80: Failed to get unique processor _UID (0xff)
    [ +0.000007] acpi LNXCPU:81: Failed to get unique processor _UID (0xff)
    [ +0.000006] acpi LNXCPU:82: Failed to get unique processor _UID (0xff)
    [ +0.000006] acpi LNXCPU:83: Failed to get unique processor _UID (0xff)
    [ +0.000006] acpi LNXCPU:84: Failed to get unique processor _UID (0xff)
    [ +0.000007] acpi LNXCPU:85: Failed to get unique processor _UID (0xff)
    [ +0.000006] acpi LNXCPU:86: Failed to get unique processor _UID (0xff)
    [ +0.000006] acpi LNXCPU:87: Failed to get unique processor _UID (0xff)
    [ +0.000006] acpi LNXCPU:88: Failed to get unique processor _UID (0xff)
    [ +0.000006] acpi LNXCPU:89: Failed to get unique processor _UID (0xff)
    [ +0.000007] acpi LNXCPU:8a: Failed to get unique processor _UID (0xff)
    [ +0.000006] acpi LNXCPU:8b: Failed to get unique processor _UID (0xff)
    [ +0.000006] acpi LNXCPU:8c: Failed to get unique processor _UID (0xff)
    [ +0.000007] acpi LNXCPU:8d: Failed to get unique processor _UID (0xff)
    [ +0.000006] acpi LNXCPU:8e: Failed to get unique processor _UID (0xff)
    [ +0.000006] acpi LNXCPU:8f: Failed to get unique processor _UID (0xff)
    [ +4.427750] ata5.00: NCQ Send/Recv Log not supported
    [ +0.073600] ata5.00: NCQ Send/Recv Log not supported
    [ +2.987291] nouveau 0000:02:00.0: bus: MMIO read of 00000000 FAULT at 3e6684 [ IBUS ]
    [ +0.008725] nouveau 0000:02:00.0: bus: MMIO read of 00000000 FAULT at 10ac08 [ IBUS ]
    [ +8.559082] kauditd_printk_skb: 11 callbacks suppressed
    [ +21.228583] kauditd_printk_skb: 14 callbacks suppressed
    [ +5.525638] kauditd_printk_skb: 31 callbacks suppressed
    [Jun14 22:21] kauditd_printk_skb: 4 callbacks suppressed
    [Jun14 22:22] kauditd_printk_skb: 5 callbacks suppressed

  • ==> etcd [a1f516e6b190] <==

  • [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
    2021-06-14 22:22:28.798606 I | etcdmain: etcd Version: 3.4.13
    2021-06-14 22:22:28.798657 I | etcdmain: Git SHA: ae9734ed2
    2021-06-14 22:22:28.798662 I | etcdmain: Go Version: go1.12.17
    2021-06-14 22:22:28.798665 I | etcdmain: Go OS/Arch: linux/amd64
    2021-06-14 22:22:28.798673 I | etcdmain: setting maximum number of CPUs to 8, total number of available CPUs is 8
    [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
    2021-06-14 22:22:28.798789 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
    2021-06-14 22:22:28.799659 I | embed: name = minikube
    2021-06-14 22:22:28.799675 I | embed: data dir = /var/lib/minikube/etcd
    2021-06-14 22:22:28.799682 I | embed: member dir = /var/lib/minikube/etcd/member
    2021-06-14 22:22:28.799688 I | embed: heartbeat = 100ms
    2021-06-14 22:22:28.799693 I | embed: election = 1000ms
    2021-06-14 22:22:28.799699 I | embed: snapshot count = 10000
    2021-06-14 22:22:28.799710 I | embed: advertise client URLs = https://192.168.49.2:2379
    2021-06-14 22:22:28.875038 I | etcdserver: starting member aec36adc501070cc in cluster fa54960ea34d58be
    raft2021/06/14 22:22:28 INFO: aec36adc501070cc switched to configuration voters=()
    raft2021/06/14 22:22:28 INFO: aec36adc501070cc became follower at term 0
    raft2021/06/14 22:22:28 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
    raft2021/06/14 22:22:28 INFO: aec36adc501070cc became follower at term 1
    raft2021/06/14 22:22:28 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
    2021-06-14 22:22:28.889380 W | auth: simple token is not cryptographically signed
    2021-06-14 22:22:28.955350 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
    2021-06-14 22:22:28.955539 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
    raft2021/06/14 22:22:28 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
    2021-06-14 22:22:28.956238 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
    2021-06-14 22:22:28.958285 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
    2021-06-14 22:22:28.958401 I | embed: listening for peers on 192.168.49.2:2380
    2021-06-14 22:22:28.958501 I | embed: listening for metrics on http://127.0.0.1:2381
    raft2021/06/14 22:22:29 INFO: aec36adc501070cc is starting a new election at term 1
    raft2021/06/14 22:22:29 INFO: aec36adc501070cc became candidate at term 2
    raft2021/06/14 22:22:29 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
    raft2021/06/14 22:22:29 INFO: aec36adc501070cc became leader at term 2
    raft2021/06/14 22:22:29 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
    2021-06-14 22:22:29.476023 I | etcdserver: setting up the initial cluster version to 3.4
    2021-06-14 22:22:29.480090 N | etcdserver/membership: set the initial cluster version to 3.4
    2021-06-14 22:22:29.480167 I | etcdserver/api: enabled capabilities for version 3.4
    2021-06-14 22:22:29.480180 I | etcdserver: published {Name:minikube ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
    2021-06-14 22:22:29.480192 I | embed: ready to serve client requests
    2021-06-14 22:22:29.480276 I | embed: ready to serve client requests
    2021-06-14 22:22:29.482672 I | embed: serving client requests on 192.168.49.2:2379
    2021-06-14 22:22:29.482942 I | embed: serving client requests on 127.0.0.1:2379
    2021-06-14 22:22:52.808502 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-14 22:23:00.740910 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-14 22:23:10.740871 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-14 22:23:20.741021 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-14 22:23:30.741071 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-14 22:23:40.740914 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-14 22:23:50.740932 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-14 22:24:00.740900 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-14 22:24:10.740948 I | etcdserver/api/etcdhttp: /health OK (status code 200)

  • ==> kernel <==

  • 22:24:18 up 4 min, 0 users, load average: 1.49, 1.83, 0.86
    Linux minikube 4.15.0-62-generic #69-Ubuntu SMP Wed Sep 4 20:55:53 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
    PRETTY_NAME="Ubuntu 20.04.2 LTS"

  • ==> kube-apiserver [274168f86e9f] <==

  • I0614 22:22:34.345549 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
    I0614 22:22:34.345687 1 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key
    I0614 22:22:34.345988 1 secure_serving.go:197] Serving securely on [::]:8443
    I0614 22:22:34.346017 1 dynamic_serving_content.go:130] Starting aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key
    I0614 22:22:34.346082 1 tlsconfig.go:240] Starting DynamicServingCertificateController
    I0614 22:22:34.346196 1 customresource_discovery_controller.go:209] Starting DiscoveryController
    I0614 22:22:34.346484 1 controller.go:83] Starting OpenAPI AggregationController
    I0614 22:22:34.346647 1 apiservice_controller.go:97] Starting APIServiceRegistrationController
    I0614 22:22:34.346678 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
    I0614 22:22:34.346730 1 available_controller.go:475] Starting AvailableConditionController
    I0614 22:22:34.351806 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
    I0614 22:22:34.351869 1 autoregister_controller.go:141] Starting autoregister controller
    I0614 22:22:34.351882 1 cache.go:32] Waiting for caches to sync for autoregister controller
    I0614 22:22:34.347320 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
    I0614 22:22:34.351908 1 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
    I0614 22:22:34.351940 1 crdregistration_controller.go:111] Starting crd-autoregister controller
    I0614 22:22:34.351955 1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
    I0614 22:22:34.347372 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
    I0614 22:22:34.347396 1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
    I0614 22:22:34.357393 1 apf_controller.go:261] Starting API Priority and Fairness config controller
    I0614 22:22:34.367820 1 controller.go:86] Starting OpenAPI controller
    I0614 22:22:34.367849 1 naming_controller.go:291] Starting NamingConditionController
    I0614 22:22:34.367865 1 establishing_controller.go:76] Starting EstablishingController
    I0614 22:22:34.367880 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
    I0614 22:22:34.367898 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
    I0614 22:22:34.367917 1 crd_finalizer.go:266] Starting CRDFinalizer
    E0614 22:22:34.377784 1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg:
    I0614 22:22:34.452320 1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller
    I0614 22:22:34.464754 1 controller.go:609] quota admission added evaluator for: namespaces
    I0614 22:22:34.547704 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
    I0614 22:22:34.551876 1 cache.go:39] Caches are synced for AvailableConditionController controller
    I0614 22:22:34.551966 1 cache.go:39] Caches are synced for autoregister controller
    I0614 22:22:34.553681 1 shared_informer.go:247] Caches are synced for crd-autoregister
    I0614 22:22:34.553861 1 shared_informer.go:247] Caches are synced for node_authorizer
    I0614 22:22:34.557503 1 apf_controller.go:266] Running API Priority and Fairness config worker
    I0614 22:22:35.345585 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
    I0614 22:22:35.345779 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
    I0614 22:22:35.352904 1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
    I0614 22:22:35.357102 1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
    I0614 22:22:35.357129 1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
    I0614 22:22:35.992101 1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
    I0614 22:22:36.051358 1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
    W0614 22:22:36.204656 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
    I0614 22:22:36.205451 1 controller.go:609] quota admission added evaluator for: endpoints
    I0614 22:22:36.214571 1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
    I0614 22:22:36.930833 1 controller.go:609] quota admission added evaluator for: serviceaccounts
    I0614 22:22:37.548905 1 controller.go:609] quota admission added evaluator for: deployments.apps
    I0614 22:22:37.671812 1 controller.go:609] quota admission added evaluator for: daemonsets.apps
    I0614 22:22:44.170816 1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
    I0614 22:22:52.900576 1 controller.go:609] quota admission added evaluator for: replicasets.apps
    I0614 22:22:53.083668 1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
    I0614 22:23:00.512714 1 client.go:360] parsed scheme: "passthrough"
    I0614 22:23:00.512783 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
    I0614 22:23:00.512812 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
    I0614 22:23:36.457985 1 client.go:360] parsed scheme: "passthrough"
    I0614 22:23:36.458050 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
    I0614 22:23:36.458066 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
    I0614 22:24:14.287520 1 client.go:360] parsed scheme: "passthrough"
    I0614 22:24:14.287585 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
    I0614 22:24:14.287600 1 clientconn.go:948] ClientConn switching balancer to "pick_first"

  • ==> kube-controller-manager [53048cf3e6c3] <==

  • I0614 22:22:52.569305 1 controllermanager.go:554] Started "attachdetach"
    W0614 22:22:52.569327 1 controllermanager.go:546] Skipping "ephemeral-volume"
    I0614 22:22:52.569364 1 attach_detach_controller.go:328] Starting attach detach controller
    I0614 22:22:52.569375 1 shared_informer.go:240] Waiting for caches to sync for attach detach
    I0614 22:22:52.815349 1 controllermanager.go:554] Started "endpointslice"
    I0614 22:22:52.816136 1 endpointslice_controller.go:237] Starting endpoint slice controller
    I0614 22:22:52.816172 1 shared_informer.go:240] Waiting for caches to sync for endpoint_slice
    I0614 22:22:52.826425 1 shared_informer.go:247] Caches are synced for ReplicationController
    I0614 22:22:52.833413 1 shared_informer.go:247] Caches are synced for job
    I0614 22:22:52.843897 1 shared_informer.go:247] Caches are synced for certificate-csrapproving
    I0614 22:22:52.855020 1 shared_informer.go:247] Caches are synced for expand
    I0614 22:22:52.865290 1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring
    I0614 22:22:52.876248 1 shared_informer.go:247] Caches are synced for disruption
    I0614 22:22:52.876283 1 disruption.go:339] Sending events to api server.
    I0614 22:22:52.876917 1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator
    I0614 22:22:52.893187 1 shared_informer.go:247] Caches are synced for PV protection
    I0614 22:22:52.896892 1 shared_informer.go:247] Caches are synced for deployment
    I0614 22:22:52.913038 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving
    I0614 22:22:52.913523 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 1"
    I0614 22:22:52.914193 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client
    I0614 22:22:52.914527 1 shared_informer.go:247] Caches are synced for HPA
    I0614 22:22:52.914728 1 shared_informer.go:247] Caches are synced for stateful set
    I0614 22:22:52.915155 1 shared_informer.go:247] Caches are synced for endpoint
    I0614 22:22:52.916333 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client
    I0614 22:22:52.916368 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown
    I0614 22:22:52.917360 1 shared_informer.go:247] Caches are synced for ReplicaSet
    I0614 22:22:52.926694 1 shared_informer.go:247] Caches are synced for PVC protection
    I0614 22:22:52.938442 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-wppnz"
    I0614 22:22:52.956004 1 shared_informer.go:247] Caches are synced for namespace
    I0614 22:22:52.976237 1 shared_informer.go:247] Caches are synced for service account
    W0614 22:22:53.017530 1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
    I0614 22:22:53.026948 1 shared_informer.go:247] Caches are synced for bootstrap_signer
    I0614 22:22:53.027013 1 shared_informer.go:247] Caches are synced for persistent volume
    I0614 22:22:53.049684 1 shared_informer.go:247] Caches are synced for taint
    I0614 22:22:53.049843 1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone:
    I0614 22:22:53.049843 1 taint_manager.go:187] Starting NoExecuteTaintManager
    W0614 22:22:53.049949 1 node_lifecycle_controller.go:1044] Missing timestamp for Node minikube. Assuming now as a timestamp.
    I0614 22:22:53.050028 1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
    I0614 22:22:53.050119 1 event.go:291] "Event occurred" object="minikube" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node minikube event: Registered Node minikube in Controller"
    I0614 22:22:53.055806 1 shared_informer.go:247] Caches are synced for crt configmap
    I0614 22:22:53.065087 1 shared_informer.go:247] Caches are synced for TTL
    I0614 22:22:53.069502 1 shared_informer.go:247] Caches are synced for attach detach
    I0614 22:22:53.074809 1 shared_informer.go:247] Caches are synced for GC
    I0614 22:22:53.076859 1 shared_informer.go:247] Caches are synced for daemon sets
    I0614 22:22:53.092726 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-wg624"
    I0614 22:22:53.111008 1 shared_informer.go:247] Caches are synced for node
    I0614 22:22:53.111043 1 range_allocator.go:172] Starting range CIDR allocator
    I0614 22:22:53.111050 1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
    I0614 22:22:53.111057 1 shared_informer.go:247] Caches are synced for cidrallocator
    I0614 22:22:53.116362 1 shared_informer.go:247] Caches are synced for endpoint_slice
    I0614 22:22:53.119630 1 shared_informer.go:247] Caches are synced for resource quota
    I0614 22:22:53.121988 1 range_allocator.go:373] Set node minikube PodCIDR to [10.244.0.0/24]
    I0614 22:22:53.273404 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
    I0614 22:22:53.573639 1 shared_informer.go:247] Caches are synced for garbage collector
    I0614 22:22:53.613630 1 shared_informer.go:247] Caches are synced for garbage collector
    I0614 22:22:53.613672 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
    I0614 22:22:53.665609 1 request.go:655] Throttling request took 1.048631096s, request: GET:https://192.168.49.2:8443/apis/events.k8s.io/v1?timeout=32s
    I0614 22:22:54.467391 1 shared_informer.go:240] Waiting for caches to sync for resource quota
    I0614 22:22:54.467449 1 shared_informer.go:247] Caches are synced for resource quota
    I0614 22:22:58.050393 1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.

  • ==> kube-proxy [fa1753118022] <==

  • I0614 22:22:54.300724 1 node.go:172] Successfully retrieved node IP: 192.168.49.2
    I0614 22:22:54.300825 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.49.2), assume IPv4 operation
    W0614 22:22:54.337414 1 server_others.go:584] Unknown proxy mode "", assuming iptables proxy
    I0614 22:22:54.337660 1 server_others.go:185] Using iptables Proxier.
    I0614 22:22:54.338079 1 server.go:650] Version: v1.20.7
    I0614 22:22:54.338975 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
    I0614 22:22:54.339088 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
    I0614 22:22:54.339362 1 config.go:315] Starting service config controller
    I0614 22:22:54.339374 1 config.go:224] Starting endpoint slice config controller
    I0614 22:22:54.339400 1 shared_informer.go:240] Waiting for caches to sync for service config
    I0614 22:22:54.339413 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
    I0614 22:22:54.439671 1 shared_informer.go:247] Caches are synced for endpoint slice config
    I0614 22:22:54.440642 1 shared_informer.go:247] Caches are synced for service config

  • ==> kube-scheduler [a0f627a3448b] <==

  • I0614 22:22:30.071968 1 serving.go:331] Generated self-signed cert in-memory
    W0614 22:22:34.465834 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
    W0614 22:22:34.465850 1 authentication.go:337] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
    W0614 22:22:34.465858 1 authentication.go:338] Continuing without authentication configuration. This may treat all requests as anonymous.
    W0614 22:22:34.465863 1 authentication.go:339] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
    I0614 22:22:34.562057 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
    I0614 22:22:34.562081 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
    I0614 22:22:34.562360 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
    I0614 22:22:34.562408 1 tlsconfig.go:240] Starting DynamicServingCertificateController
    E0614 22:22:34.563685 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
    E0614 22:22:34.564493 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
    E0614 22:22:34.564523 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
    E0614 22:22:34.564553 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
    E0614 22:22:34.564596 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
    E0614 22:22:34.565019 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
    E0614 22:22:34.565245 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
    E0614 22:22:34.565354 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
    E0614 22:22:34.565521 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
    E0614 22:22:34.565687 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
    E0614 22:22:34.565745 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
    E0614 22:22:34.565927 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
    E0614 22:22:35.399293 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
    E0614 22:22:35.531373 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
    E0614 22:22:35.577818 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
    E0614 22:22:35.579558 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
    E0614 22:22:35.646821 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
    E0614 22:22:35.926291 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
    I0614 22:22:38.362251 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file

  • ==> kubelet <==

  • -- Logs begin at Mon 2021-06-14 22:21:53 UTC, end at Mon 2021-06-14 22:24:18 UTC. --
    Jun 14 22:22:44 minikube kubelet[2583]: I0614 22:22:44.356976 2583 kubelet_node_status.go:74] Successfully registered node minikube
    Jun 14 22:22:44 minikube kubelet[2583]: E0614 22:22:44.478052 2583 kubelet.go:1852] skipping pod synchronization - container runtime status check may not have completed yet
    Jun 14 22:22:44 minikube kubelet[2583]: I0614 22:22:44.494979 2583 cpu_manager.go:193] [cpumanager] starting with none policy
    Jun 14 22:22:44 minikube kubelet[2583]: I0614 22:22:44.494995 2583 cpu_manager.go:194] [cpumanager] reconciling every 10s
    Jun 14 22:22:44 minikube kubelet[2583]: I0614 22:22:44.495015 2583 state_mem.go:36] [cpumanager] initializing new in-memory state store
    Jun 14 22:22:44 minikube kubelet[2583]: I0614 22:22:44.495169 2583 state_mem.go:88] [cpumanager] updated default cpuset: ""
    Jun 14 22:22:44 minikube kubelet[2583]: I0614 22:22:44.495179 2583 state_mem.go:96] [cpumanager] updated cpuset assignments: "map[]"
    Jun 14 22:22:44 minikube kubelet[2583]: I0614 22:22:44.495193 2583 policy_none.go:43] [cpumanager] none policy: Start
    Jun 14 22:22:44 minikube kubelet[2583]: W0614 22:22:44.496307 2583 manager.go:594] Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found
    Jun 14 22:22:44 minikube kubelet[2583]: I0614 22:22:44.496719 2583 plugin_manager.go:114] Starting Kubelet Plugin Manager
    Jun 14 22:22:44 minikube kubelet[2583]: I0614 22:22:44.878321 2583 topology_manager.go:187] [topologymanager] Topology Admit Handler
    Jun 14 22:22:44 minikube kubelet[2583]: I0614 22:22:44.878482 2583 topology_manager.go:187] [topologymanager] Topology Admit Handler
    Jun 14 22:22:44 minikube kubelet[2583]: I0614 22:22:44.878570 2583 topology_manager.go:187] [topologymanager] Topology Admit Handler
    Jun 14 22:22:44 minikube kubelet[2583]: I0614 22:22:44.878654 2583 topology_manager.go:187] [topologymanager] Topology Admit Handler
    Jun 14 22:22:45 minikube kubelet[2583]: I0614 22:22:45.051501 2583 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/01d7e312da0f9c4176daa8464d4d1a50-ca-certs") pod "kube-apiserver-minikube" (UID: "01d7e312da0f9c4176daa8464d4d1a50")
    Jun 14 22:22:45 minikube kubelet[2583]: I0614 22:22:45.051563 2583 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/01d7e312da0f9c4176daa8464d4d1a50-k8s-certs") pod "kube-apiserver-minikube" (UID: "01d7e312da0f9c4176daa8464d4d1a50")
    Jun 14 22:22:45 minikube kubelet[2583]: I0614 22:22:45.051610 2583 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-ca-certificates" (UniqueName: "kubernetes.io/host-path/c7b8fa13668654de8887eea36ddd7b5b-etc-ca-certificates") pod "kube-controller-manager-minikube" (UID: "c7b8fa13668654de8887eea36ddd7b5b")
    Jun 14 22:22:45 minikube kubelet[2583]: I0614 22:22:45.051703 2583 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/c7b8fa13668654de8887eea36ddd7b5b-kubeconfig") pod "kube-controller-manager-minikube" (UID: "c7b8fa13668654de8887eea36ddd7b5b")
    Jun 14 22:22:45 minikube kubelet[2583]: I0614 22:22:45.051805 2583 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-local-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/c7b8fa13668654de8887eea36ddd7b5b-usr-local-share-ca-certificates") pod "kube-controller-manager-minikube" (UID: "c7b8fa13668654de8887eea36ddd7b5b")
    Jun 14 22:22:45 minikube kubelet[2583]: I0614 22:22:45.051890 2583 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-ca-certificates" (UniqueName: "kubernetes.io/host-path/01d7e312da0f9c4176daa8464d4d1a50-etc-ca-certificates") pod "kube-apiserver-minikube" (UID: "01d7e312da0f9c4176daa8464d4d1a50")
    Jun 14 22:22:45 minikube kubelet[2583]: I0614 22:22:45.051933 2583 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/01d7e312da0f9c4176daa8464d4d1a50-usr-share-ca-certificates") pod "kube-apiserver-minikube" (UID: "01d7e312da0f9c4176daa8464d4d1a50")
    Jun 14 22:22:45 minikube kubelet[2583]: I0614 22:22:45.051964 2583 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/c31fe6a5afdd142cf3450ac972274b36-etcd-data") pod "etcd-minikube" (UID: "c31fe6a5afdd142cf3450ac972274b36")
    Jun 14 22:22:45 minikube kubelet[2583]: I0614 22:22:45.052033 2583 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/82ed17c7f4a56a29330619386941d47e-kubeconfig") pod "kube-scheduler-minikube" (UID: "82ed17c7f4a56a29330619386941d47e")
    Jun 14 22:22:45 minikube kubelet[2583]: I0614 22:22:45.052154 2583 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/c31fe6a5afdd142cf3450ac972274b36-etcd-certs") pod "etcd-minikube" (UID: "c31fe6a5afdd142cf3450ac972274b36")
    Jun 14 22:22:45 minikube kubelet[2583]: I0614 22:22:45.052261 2583 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-local-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/01d7e312da0f9c4176daa8464d4d1a50-usr-local-share-ca-certificates") pod "kube-apiserver-minikube" (UID: "01d7e312da0f9c4176daa8464d4d1a50")
    Jun 14 22:22:45 minikube kubelet[2583]: I0614 22:22:45.052333 2583 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/c7b8fa13668654de8887eea36ddd7b5b-ca-certs") pod "kube-controller-manager-minikube" (UID: "c7b8fa13668654de8887eea36ddd7b5b")
    Jun 14 22:22:45 minikube kubelet[2583]: I0614 22:22:45.052407 2583 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/c7b8fa13668654de8887eea36ddd7b5b-flexvolume-dir") pod "kube-controller-manager-minikube" (UID: "c7b8fa13668654de8887eea36ddd7b5b")
    Jun 14 22:22:45 minikube kubelet[2583]: I0614 22:22:45.052468 2583 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/c7b8fa13668654de8887eea36ddd7b5b-k8s-certs") pod "kube-controller-manager-minikube" (UID: "c7b8fa13668654de8887eea36ddd7b5b")
    Jun 14 22:22:45 minikube kubelet[2583]: I0614 22:22:45.052508 2583 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/c7b8fa13668654de8887eea36ddd7b5b-usr-share-ca-certificates") pod "kube-controller-manager-minikube" (UID: "c7b8fa13668654de8887eea36ddd7b5b")
    Jun 14 22:22:45 minikube kubelet[2583]: I0614 22:22:45.052542 2583 reconciler.go:157] Reconciler: start to sync state
    Jun 14 22:22:53 minikube kubelet[2583]: I0614 22:22:53.098097 2583 topology_manager.go:187] [topologymanager] Topology Admit Handler
    Jun 14 22:22:53 minikube kubelet[2583]: I0614 22:22:53.162924 2583 kuberuntime_manager.go:1006] updating runtime config through cri with podcidr 10.244.0.0/24
    Jun 14 22:22:53 minikube kubelet[2583]: I0614 22:22:53.163317 2583 docker_service.go:358] docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}
    Jun 14 22:22:53 minikube kubelet[2583]: I0614 22:22:53.163560 2583 kubelet_network.go:77] Setting Pod CIDR: -> 10.244.0.0/24
    Jun 14 22:22:53 minikube kubelet[2583]: I0614 22:22:53.269637 2583 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/0259bd2f-05b9-4196-99e0-9991d93ec2ec-kube-proxy") pod "kube-proxy-wg624" (UID: "0259bd2f-05b9-4196-99e0-9991d93ec2ec")
    Jun 14 22:22:53 minikube kubelet[2583]: I0614 22:22:53.269712 2583 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/0259bd2f-05b9-4196-99e0-9991d93ec2ec-lib-modules") pod "kube-proxy-wg624" (UID: "0259bd2f-05b9-4196-99e0-9991d93ec2ec")
    Jun 14 22:22:53 minikube kubelet[2583]: I0614 22:22:53.269787 2583 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/0259bd2f-05b9-4196-99e0-9991d93ec2ec-xtables-lock") pod "kube-proxy-wg624" (UID: "0259bd2f-05b9-4196-99e0-9991d93ec2ec")
    Jun 14 22:22:53 minikube kubelet[2583]: I0614 22:22:53.269850 2583 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-j82fr" (UniqueName: "kubernetes.io/secret/0259bd2f-05b9-4196-99e0-9991d93ec2ec-kube-proxy-token-j82fr") pod "kube-proxy-wg624" (UID: "0259bd2f-05b9-4196-99e0-9991d93ec2ec")
    Jun 14 22:22:55 minikube kubelet[2583]: I0614 22:22:55.781117 2583 topology_manager.go:187] [topologymanager] Topology Admit Handler
    Jun 14 22:22:55 minikube kubelet[2583]: I0614 22:22:55.976554 2583 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8db10bff-596c-418e-8f7f-3eb0fe2674b0-config-volume") pod "coredns-74ff55c5b-wppnz" (UID: "8db10bff-596c-418e-8f7f-3eb0fe2674b0")
    Jun 14 22:22:55 minikube kubelet[2583]: I0614 22:22:55.976641 2583 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-s87rz" (UniqueName: "kubernetes.io/secret/8db10bff-596c-418e-8f7f-3eb0fe2674b0-coredns-token-s87rz") pod "coredns-74ff55c5b-wppnz" (UID: "8db10bff-596c-418e-8f7f-3eb0fe2674b0")
    Jun 14 22:22:57 minikube kubelet[2583]: W0614 22:22:57.094618 2583 pod_container_deletor.go:79] Container "f570ed491eff1e9810cf13b528dcda52d8c99571747f4f87d31797c1b09d1284" not found in pod's containers
    Jun 14 22:22:57 minikube kubelet[2583]: W0614 22:22:57.095745 2583 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-74ff55c5b-wppnz through plugin: invalid network status for
    Jun 14 22:22:57 minikube kubelet[2583]: I0614 22:22:57.773611 2583 topology_manager.go:187] [topologymanager] Topology Admit Handler
    Jun 14 22:22:57 minikube kubelet[2583]: I0614 22:22:57.881088 2583 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/d5b86764-709b-4946-b092-53b446b1ab1c-tmp") pod "storage-provisioner" (UID: "d5b86764-709b-4946-b092-53b446b1ab1c")
    Jun 14 22:22:57 minikube kubelet[2583]: I0614 22:22:57.881162 2583 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-2v9ql" (UniqueName: "kubernetes.io/secret/d5b86764-709b-4946-b092-53b446b1ab1c-storage-provisioner-token-2v9ql") pod "storage-provisioner" (UID: "d5b86764-709b-4946-b092-53b446b1ab1c")
    Jun 14 22:22:58 minikube kubelet[2583]: W0614 22:22:58.102991 2583 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-74ff55c5b-wppnz through plugin: invalid network status for
    Jun 14 22:23:09 minikube kubelet[2583]: I0614 22:23:09.234085 2583 topology_manager.go:187] [topologymanager] Topology Admit Handler
    Jun 14 22:23:09 minikube kubelet[2583]: I0614 22:23:09.411383 2583 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-t5t45" (UniqueName: "kubernetes.io/secret/ceb3e7c8-c6c7-444b-aeb0-1d4ab0a3870e-default-token-t5t45") pod "busybox" (UID: "ceb3e7c8-c6c7-444b-aeb0-1d4ab0a3870e")
    Jun 14 22:23:10 minikube kubelet[2583]: W0614 22:23:10.509698 2583 pod_container_deletor.go:79] Container "9089425e6cf08e49744db381bd4dba6396f67fd0489443a2e65fe18e35187814" not found in pod's containers
    Jun 14 22:23:10 minikube kubelet[2583]: W0614 22:23:10.510373 2583 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/busybox through plugin: invalid network status for
    Jun 14 22:23:11 minikube kubelet[2583]: W0614 22:23:11.519656 2583 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/busybox through plugin: invalid network status for
    Jun 14 22:23:15 minikube kubelet[2583]: W0614 22:23:15.581696 2583 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/busybox through plugin: invalid network status for
    Jun 14 22:23:16 minikube kubelet[2583]: W0614 22:23:16.810970 2583 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/busybox through plugin: invalid network status for
    Jun 14 22:23:16 minikube kubelet[2583]: I0614 22:23:16.958691 2583 reconciler.go:196] operationExecutor.UnmountVolume started for volume "default-token-t5t45" (UniqueName: "kubernetes.io/secret/ceb3e7c8-c6c7-444b-aeb0-1d4ab0a3870e-default-token-t5t45") pod "ceb3e7c8-c6c7-444b-aeb0-1d4ab0a3870e" (UID: "ceb3e7c8-c6c7-444b-aeb0-1d4ab0a3870e")
    Jun 14 22:23:16 minikube kubelet[2583]: I0614 22:23:16.973955 2583 operation_generator.go:797] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ceb3e7c8-c6c7-444b-aeb0-1d4ab0a3870e-default-token-t5t45" (OuterVolumeSpecName: "default-token-t5t45") pod "ceb3e7c8-c6c7-444b-aeb0-1d4ab0a3870e" (UID: "ceb3e7c8-c6c7-444b-aeb0-1d4ab0a3870e"). InnerVolumeSpecName "default-token-t5t45". PluginName "kubernetes.io/secret", VolumeGidValue ""
    Jun 14 22:23:17 minikube kubelet[2583]: I0614 22:23:17.059062 2583 reconciler.go:319] Volume detached for volume "default-token-t5t45" (UniqueName: "kubernetes.io/secret/ceb3e7c8-c6c7-444b-aeb0-1d4ab0a3870e-default-token-t5t45") on node "minikube" DevicePath ""
    Jun 14 22:23:17 minikube kubelet[2583]: W0614 22:23:17.830715 2583 pod_container_deletor.go:79] Container "9089425e6cf08e49744db381bd4dba6396f67fd0489443a2e65fe18e35187814" not found in pod's containers
    Jun 14 22:23:18 minikube kubelet[2583]: W0614 22:23:18.190571 2583 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/ceb3e7c8-c6c7-444b-aeb0-1d4ab0a3870e/volumes" does not exist
    Jun 14 22:23:44 minikube kubelet[2583]: I0614 22:23:44.160439 2583 scope.go:111] [topologymanager] RemoveContainer - Container ID: bd1660392b019c4531e1bcbde8e137179f62156f3c1bc248583b3f38c16fe03a

  • ==> storage-provisioner [82594b1a56c8] <==

  • I0614 22:22:58.947169 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
    I0614 22:22:58.963094 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
    I0614 22:22:58.963129 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
    I0614 22:22:58.970099 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
    I0614 22:22:58.970237 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_minikube_e3322a55-3e2c-480a-820b-dbd7fd3649cb!
    I0614 22:22:58.970179 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0f1c9fbe-b993-4afe-9ece-0abdcd303072", APIVersion:"v1", ResourceVersion:"451", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_e3322a55-3e2c-480a-820b-dbd7fd3649cb became leader
    I0614 22:22:59.070713 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_minikube_e3322a55-3e2c-480a-820b-dbd7fd3649cb!

Failing command

╰─➤ kubectl run busybox --image=ubuntu --rm -ti --restart=Never --command -- bash -c "apt-get update && apt-get install -y iputils-ping && ping -c 3 google.com"
Ign:1 https://security.ubuntu.com/ubuntu focal-security InRelease
Err:5 https://security.ubuntu.com/ubuntu focal-security Release
  Could not handshake: A TLS fatal alert has been received. [IP: 174.21.177.73 443]
Ign:2 https://archive.ubuntu.com/ubuntu focal InRelease
Ign:3 https://archive.ubuntu.com/ubuntu focal-updates InRelease
Ign:4 https://archive.ubuntu.com/ubuntu focal-backports InRelease
Err:6 https://archive.ubuntu.com/ubuntu focal Release
  Could not handshake: A TLS fatal alert has been received. [IP: 174.21.177.73 443]
Err:7 https://archive.ubuntu.com/ubuntu focal-updates Release
  Could not handshake: A TLS fatal alert has been received. [IP: 174.21.177.73 443]
Err:8 https://archive.ubuntu.com/ubuntu focal-backports Release
  Could not handshake: A TLS fatal alert has been received. [IP: 174.21.177.73 443]
Reading package lists... Done
W: http://security.ubuntu.com/ubuntu/dists/focal-security/InRelease: No system certificates available. Try installing ca-certificates.
W: http://security.ubuntu.com/ubuntu/dists/focal-security/Release: No system certificates available. Try installing ca-certificates.
W: http://archive.ubuntu.com/ubuntu/dists/focal/InRelease: No system certificates available. Try installing ca-certificates.
E: The repository 'http://security.ubuntu.com/ubuntu focal-security Release' does not have a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
W: http://archive.ubuntu.com/ubuntu/dists/focal-updates/InRelease: No system certificates available. Try installing ca-certificates.
W: http://archive.ubuntu.com/ubuntu/dists/focal-backports/InRelease: No system certificates available. Try installing ca-certificates.
W: http://archive.ubuntu.com/ubuntu/dists/focal/Release: No system certificates available. Try installing ca-certificates.
E: The repository 'http://archive.ubuntu.com/ubuntu focal Release' does not have a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
W: http://archive.ubuntu.com/ubuntu/dists/focal-updates/Release: No system certificates available. Try installing ca-certificates.
E: The repository 'http://archive.ubuntu.com/ubuntu focal-updates Release' does not have a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
W: http://archive.ubuntu.com/ubuntu/dists/focal-backports/Release: No system certificates available. Try installing ca-certificates.
E: The repository 'http://archive.ubuntu.com/ubuntu focal-backports Release' does not have a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
pod "busybox" deleted
pod default/busybox terminated (Error)

@spowelljr spowelljr added the kind/support Categorizes issue or PR as a support question. label Jun 15, 2021
@alexio777

I have the same issue:

Ubuntu 20.04.2 LTS
Fresh install of minikube version: v1.21.0
commit: 76d7419

kubectl run busybox --image=busybox --rm -ti --restart=Never --command -- ping -c 3 google.com


64 bytes from 142.250.181.46: seq=1 ttl=113 time=5.371 ms
64 bytes from 142.250.181.46: seq=2 ttl=113 time=5.089 ms

--- google.com ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 5.089/6.048/7.685 ms
pod "busybox" deleted

kubectl run busybox --image=ubuntu --rm -ti --restart=Never --command -- bash -c "apt-get update && apt-get install -y iputils-ping && ping -c 3 google.com"


If you don't see a command prompt, try pressing enter.
Get:2 http://archive.ubuntu.com/ubuntu focal InRelease [51 B]
Err:2 http://archive.ubuntu.com/ubuntu focal InRelease
  Clearsigned file isn't valid, got 'NOSPLIT' (does the network require authentication?)
Get:3 http://archive.ubuntu.com/ubuntu focal-updates InRelease [51 B]
Err:3 http://archive.ubuntu.com/ubuntu focal-updates InRelease
  Clearsigned file isn't valid, got 'NOSPLIT' (does the network require authentication?)
Get:4 http://archive.ubuntu.com/ubuntu focal-backports InRelease [51 B]
Err:4 http://archive.ubuntu.com/ubuntu focal-backports InRelease
  Clearsigned file isn't valid, got 'NOSPLIT' (does the network require authentication?)
Reading package lists... Done
N: See apt-secure(8) manpage for repository creation and user configuration details.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
E: The repository 'http://security.ubuntu.com/ubuntu focal-security InRelease' is not signed.
E: Failed to fetch http://security.ubuntu.com/ubuntu/dists/focal-security/InRelease Clearsigned file isn't valid, got 'NOSPLIT' (does the network require authentication?)
N: See apt-secure(8) manpage for repository creation and user configuration details.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
E: The repository 'http://archive.ubuntu.com/ubuntu focal InRelease' is not signed.
E: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/focal/InRelease Clearsigned file isn't valid, got 'NOSPLIT' (does the network require authentication?)
E: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/focal-updates/InRelease Clearsigned file isn't valid, got 'NOSPLIT' (does the network require authentication?)
E: The repository 'http://archive.ubuntu.com/ubuntu focal-updates InRelease' is not signed.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
E: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/focal-backports/InRelease Clearsigned file isn't valid, got 'NOSPLIT' (does the network require authentication?)
E: The repository 'http://archive.ubuntu.com/ubuntu focal-backports InRelease' is not signed.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
pod "busybox" deleted
pod default/busybox terminated (Error)

@bzvestey
Author

I have done some testing of this issue on my side today, and I think I have found another important part of it. The internal domain name also needs to be in the search line of the resolv.conf file on the host machine.

So, from my understanding so far, these two things are important to my issue (a quick way to verify both is sketched after the example resolv.conf below):

  1. The network's internal domain name needs to have a DNS record, like bzvestey.com.
  2. The same domain needs to show up in the search line of the resolv.conf file on the host machine.

Example resolv.conf file:

search bzvestey.com
nameserver 1.1.1.1
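
A quick way to verify that both conditions hold is to compare the host's resolv.conf with what the minikube node and CoreDNS actually end up using. This is only a minimal sketch with standard commands; the exact output depends on your network:

# On the host: check which search domain and nameserver are configured
grep -E '^(search|nameserver)' /etc/resolv.conf

# Inside the minikube node: see what DNS configuration the node picked up
minikube ssh -- cat /etc/resolv.conf

# CoreDNS forwards to the node's resolv.conf by default, so its config is worth checking too
kubectl -n kube-system get configmap coredns -o yaml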

@sharifelgamal sharifelgamal added area/dns DNS issues kind/bug Categorizes issue or PR as related to a bug. priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. and removed kind/support Categorizes issue or PR as a support question. priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. labels Jul 14, 2021
@medyagh
Member

medyagh commented Aug 11, 2021

I have done some testing of this issue on my side today, and I think I have found another important part of it. The internal domain name also needs to be in the search line of the resolv.conf file on the host machine.

So, from my understanding so far, these two things are important to my issue:

  1. The network's internal domain name needs to have a DNS record, like bzvestey.com.
  2. The same domain needs to show up in the search line of the resolv.conf file on the host machine.

Example resolv.conf file:

search bzvestey.com
nameserver 1.1.1.1

@bzvestey that sounds reasonable! I would accept a PR that would improve this!

@bzvestey
Author

bzvestey commented Oct 9, 2021

@medyagh I have started looking into this issue more and have hit a bit of a roadblock. From my digging into the code, the entrypoint file linked below is the one responsible for setting up the resolv.conf file, but I don't know where to see the information that this file echoes out. Please correct me if I am wrong, but it seems that I have to build the minikube ISO to test this?

The line below returns my external IP address:

docker_host_ip="$( (head -n1 <(getent ahostsv4 'host.docker.internal') | cut -d' ' -f1) || true)"
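
For anyone following along, that pipeline can be tested piece by piece; a rough breakdown, assuming host.docker.internal resolves in your environment (e.g. inside a container on Docker Desktop):

getent ahostsv4 'host.docker.internal'                            # all IPv4 records for the name
getent ahostsv4 'host.docker.internal' | head -n1                 # first record only
getent ahostsv4 'host.docker.internal' | head -n1 | cut -d' ' -f1 # just the IP field, which becomes docker_host_ip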

If you have any input on what I can do to test this, that would be awesome.

Note: for those just looking for a workaround to this issue, you can use File Sync to add a custom resolv.conf.
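
A minimal sketch of that workaround: files placed under ~/.minikube/files/ are copied into the node at the matching path when the cluster is created, so a custom resolv.conf can be provided like this (the nameserver value is just a placeholder; pick one appropriate for your network):

mkdir -p ~/.minikube/files/etc
cat > ~/.minikube/files/etc/resolv.conf <<'EOF'
nameserver 1.1.1.1
EOF
minikube delete
minikube start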

@spowelljr
Member

Hi @bzvestey, if you're modifying the entrypoint file you'd be building the kicbase image. To test the change locally you can run make local-kicbase and then make to recompile the minikube binary. Then just start the recompiled minikube binary and test it from there.
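
Spelled out, the suggested test loop looks roughly like this (out/minikube is the default build output path and may differ on your setup):

# Rebuild the kicbase image with the modified entrypoint, then the minikube binary
make local-kicbase
make

# Start a cluster with the freshly built binary and re-run the DNS check from the issue
./out/minikube delete
./out/minikube start
kubectl run busybox --image=busybox --rm -ti --restart=Never --command -- ping -c 3 google.com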

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 15, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Mar 17, 2022
@spowelljr spowelljr added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. labels Apr 13, 2022
@spowelljr spowelljr added priority/backlog Higher priority than priority/awaiting-more-evidence. and removed priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. labels Aug 3, 2022