Minikube uses internal network domain name DNS information for setting name server IP. #11644
Comments
@bzvestey do you have this problem only on Arch Linux?
For the information above I was specifically using Manjaro (downstream of Arch), in case that helps. I have tested three other configurations today, and these are the results:
Minikube logs
*
* ==> Audit <==
* |--------------|------|----------|---------|---------|-------------------------------|-------------------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|--------------|------|----------|---------|---------|-------------------------------|-------------------------------|
| update-check | | minikube | bvestey | v1.20.0 | Mon, 24 May 2021 11:24:38 PDT | Mon, 24 May 2021 11:24:38 PDT |
| update-check | | minikube | bvestey | v1.20.0 | Thu, 10 Jun 2021 15:24:32 PDT | Thu, 10 Jun 2021 15:24:32 PDT |
| start | | minikube | bvestey | v1.21.0 | Mon, 14 Jun 2021 15:21:14 PDT | Mon, 14 Jun 2021 15:22:46 PDT |
| ssh | | minikube | bvestey | v1.21.0 | Mon, 14 Jun 2021 15:22:50 PDT | Mon, 14 Jun 2021 15:22:58 PDT |
| help | | minikube | bvestey | v1.21.0 | Mon, 14 Jun 2021 15:23:50 PDT | Mon, 14 Jun 2021 15:23:50 PDT |
| help | logs | minikube | bvestey | v1.21.0 | Mon, 14 Jun 2021 15:24:01 PDT | Mon, 14 Jun 2021 15:24:01 PDT |
|--------------|------|----------|---------|---------|-------------------------------|-------------------------------|
stderr:
-- /stdout --
** /stderr **
I0614 15:22:01.068060 7997 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0614 15:22:01.267891 7997 main.go:128] libmachine: SSH cmd err, output: :
I0614 15:22:02.145924 7997 ubuntu.go:71] root file system type: overlay
[Service]
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
I0614 15:22:02.338923 7997 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
[Service]
I have the same issue: Ubuntu 20.04.2 LTS

kubectl run busybox --image=busybox --rm -ti --restart=Never --command -- ping -c 3 google.com

64 bytes from 142.250.181.46: seq=1 ttl=113 time=5.371 ms
64 bytes from 142.250.181.46: seq=2 ttl=113 time=5.089 ms
--- google.com ping statistics ---

kubectl run busybox --image=ubuntu --rm -ti --restart=Never --command -- bash -c "apt-get update && apt-get install -y iputils-ping && ping -c 3 google.com"

If you don't see a command prompt, try pressing enter.
Get:2 http://archive.ubuntu.com/ubuntu focal InRelease [51 B]
Err:2 http://archive.ubuntu.com/ubuntu focal InRelease
  Clearsigned file isn't valid, got 'NOSPLIT' (does the network require authentication?)
Get:3 http://archive.ubuntu.com/ubuntu focal-updates InRelease [51 B]
Err:3 http://archive.ubuntu.com/ubuntu focal-updates InRelease
  Clearsigned file isn't valid, got 'NOSPLIT' (does the network require authentication?)
Get:4 http://archive.ubuntu.com/ubuntu focal-backports InRelease [51 B]
Err:4 http://archive.ubuntu.com/ubuntu focal-backports InRelease
  Clearsigned file isn't valid, got 'NOSPLIT' (does the network require authentication?)
Reading package lists... Done
N: See apt-secure(8) manpage for repository creation and user configuration details.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
E: The repository 'http://security.ubuntu.com/ubuntu focal-security InRelease' is not signed.
E: Failed to fetch http://security.ubuntu.com/ubuntu/dists/focal-security/InRelease Clearsigned file isn't valid, got 'NOSPLIT' (does the network require authentication?)
N: See apt-secure(8) manpage for repository creation and user configuration details.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
E: The repository 'http://archive.ubuntu.com/ubuntu focal InRelease' is not signed.
E: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/focal/InRelease Clearsigned file isn't valid, got 'NOSPLIT' (does the network require authentication?)
E: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/focal-updates/InRelease Clearsigned file isn't valid, got 'NOSPLIT' (does the network require authentication?)
E: The repository 'http://archive.ubuntu.com/ubuntu focal-updates InRelease' is not signed.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
E: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/focal-backports/InRelease Clearsigned file isn't valid, got 'NOSPLIT' (does the network require authentication?)
E: The repository 'http://archive.ubuntu.com/ubuntu focal-backports InRelease' is not signed.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
pod "busybox" deleted
pod default/busybox terminated (Error)
I have done some testing of this issue on my side today, and I think I have found another important part of this. The internal domain name also needs to be in the resolv.conf file. So from my understanding so far, these two things are important to my issue:
Example resolv.conf file:
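A hypothetical resolv.conf along these lines, where home.example.com and 192.168.1.1 are placeholders (not values from the original report) for the network's real internal domain and router:

nameserver 192.168.1.1
search home.example.com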
@bzvestey that sounds reasonable! I would accept a PR that would improve this!
@medyagh I have started looking into this issue more and have hit a bit of a roadblock. From my digging into the code, the entrypoint file linked below is the one responsible for setting up the resolv.conf file, but I don't know where to see the information that this file echoes out. Please correct me if I am wrong, but it seems that I have to build the minikube ISO to test this? The line below returns my external IP address:
minikube/deploy/kicbase/entrypoint Line 313 in 9bccfd0
If you have any input on what I can do to test this, that would be awesome. Note: for those just looking for a workaround to this issue, you can use File Sync to add a custom resolv.conf.
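For anyone trying that route, a rough sketch of the workaround, assuming the default ~/.minikube directory and the docker driver; 1.1.1.1 is only a placeholder resolver, and whether the synced file survives the node's entrypoint rewriting resolv.conf should be verified with the last command:

mkdir -p ~/.minikube/files/etc
# placeholder resolver -- use whatever DNS server your network should actually use
printf 'nameserver 1.1.1.1\n' > ~/.minikube/files/etc/resolv.conf
minikube delete
minikube start
# confirm the node picked up the synced file
minikube ssh -- cat /etc/resolv.conf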
Hi @bzvestey, if you're modifying the
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
When the computer is on a network whose internal domain name is set to a domain that resolves to an external IP, minikube will try to use that external IP address for DNS resolution. Note that the network this was tested on does not have any special rules to make that domain name resolve differently internally. The busybox image running in minikube's Kubernetes seems fine, and other containers run fine in the computer's local Docker. For reference, I am using a Unifi USG-Pro gateway as my router.
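One way to see the mismatch on an affected host (these commands are illustrative, not from the original report, and assume the host picked up a search domain from the router via DHCP):

# internal/search domain the host received from the router
dnsdomainname
# what that domain resolves to from the host (an external IP in this setup)
getent ahostsv4 "$(dnsdomainname)" | head -n1
# nameserver the minikube node ended up with
minikube ssh -- cat /etc/resolv.conf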
Steps to reproduce the issue:
minikube start
kubectl run busybox --image=busybox --rm -ti --restart=Never --command -- ping -c 3 google.com
--- google.com ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 3.500/3.743/4.006 ms
pod "busybox" deleted
kubectl run busybox --image=ubuntu --rm -ti --restart=Never --command -- bash -c "apt-get update && apt-get install -y iputils-ping && ping -c 3 google.com"
minikube ssh
cat /etc/resolv.conf
Full output of minikube logs command:
Running on machine: bzvestey-worktop
Binary: Built with gc go1.16.4 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0612 15:11:25.031598 68916 out.go:291] Setting OutFile to fd 1 ...
I0612 15:11:25.031795 68916 out.go:343] isatty.IsTerminal(1) = true
I0612 15:11:25.031798 68916 out.go:304] Setting ErrFile to fd 2...
I0612 15:11:25.031802 68916 out.go:343] isatty.IsTerminal(2) = true
I0612 15:11:25.031907 68916 root.go:316] Updating PATH: /home/bzvestey/.minikube/bin
I0612 15:11:25.032141 68916 out.go:298] Setting JSON to false
I0612 15:11:25.049872 68916 start.go:108] hostinfo: {"hostname":"bzvestey-worktop","uptime":72467,"bootTime":1623463418,"procs":579,"os":"linux","platform":"arch","platformFamily":"arch","platformVersion":"21.0.6","kernelVersion":"5.10.41-1-MANJARO","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"host","hostId":"befe4676-f5e8-4a80-b53f-d1cc4840b3fd"}
I0612 15:11:25.049957 68916 start.go:118] virtualization: kvm host
I0612 15:11:25.060056 68916 out.go:170] 😄 minikube v1.20.0 on Arch 21.0.6
I0612 15:11:25.060327 68916 driver.go:322] Setting default libvirt URI to qemu:///system
I0612 15:11:25.060358 68916 global.go:103] Querying for installed drivers using PATH=/home/bzvestey/.minikube/bin:/home/bzvestey/.local/bin:/home/bzvestey/.local/bin:/usr/local/bin:/usr/bin:/var/lib/snapd/snap/bin:/usr/local/sbin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl:/home/bzvestey/dev/go/bin:/home/bzvestey/bin:/home/bzvestey/dev/go/bin:/home/bzvestey/bin
I0612 15:11:25.060382 68916 global.go:111] ssh default: false priority: 4, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:}
I0612 15:11:25.060483 68916 global.go:111] virtualbox default: true priority: 6, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:unable to find VBoxManage in $PATH Reason: Fix:Install VirtualBox Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/virtualbox/}
I0612 15:11:25.060523 68916 global.go:111] vmware default: true priority: 7, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "docker-machine-driver-vmware": executable file not found in $PATH Reason: Fix:Install docker-machine-driver-vmware Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/vmware/}
I0612 15:11:25.096616 68916 docker.go:119] docker version: linux-20.10.6
I0612 15:11:25.096699 68916 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I0612 15:11:25.180352 68916 info.go:261] docker info: {ID:3MBP:4OW5:PVTW:COLX:IJJN:3WSA:SV74:3SNL:7SVG:E72B:5PO2:I36N Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:88 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff false] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:40 SystemTime:2021-06-12 15:11:25.126214101 -0700 PDT LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.10.41-1-MANJARO OperatingSystem:Manjaro Linux OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:16644579328 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:bzvestey-worktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:36cc874494a56a253cd181a1a685b44b58a2e34a.m Expected:36cc874494a56a253cd181a1a685b44b58a2e34a.m} RuncCommit:{ID:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7 Expected:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Experimental:true Name:buildx Path:/usr/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-tp-docker]] Warnings:}}
I0612 15:11:25.180428 68916 docker.go:225] overlay module found
I0612 15:11:25.180434 68916 global.go:111] docker default: true priority: 9, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:}
I0612 15:11:25.180483 68916 global.go:111] kvm2 default: true priority: 8, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "virsh": executable file not found in $PATH Reason: Fix:Install libvirt Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/}
I0612 15:11:25.187293 68916 global.go:111] none default: false priority: 4, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:running the 'none' driver as a regular user requires sudo permissions Reason: Fix: Doc:}
I0612 15:11:25.187342 68916 global.go:111] podman default: true priority: 7, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "podman": executable file not found in $PATH Reason: Fix:Install Podman Doc:https://minikube.sigs.k8s.io/docs/drivers/podman/}
I0612 15:11:25.187356 68916 driver.go:258] not recommending "ssh" due to default: false
I0612 15:11:25.187366 68916 driver.go:292] Picked: docker
I0612 15:11:25.187371 68916 driver.go:293] Alternatives: [ssh]
I0612 15:11:25.187374 68916 driver.go:294] Rejects: [virtualbox vmware kvm2 none podman]
I0612 15:11:25.196078 68916 out.go:170] ✨ Automatically selected the docker driver
I0612 15:11:25.196113 68916 start.go:276] selected driver: docker
I0612 15:11:25.196123 68916 start.go:718] validating driver "docker" against
I0612 15:11:25.196144 68916 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:}
I0612 15:11:25.196245 68916 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I0612 15:11:25.278852 68916 info.go:261] docker info: {ID:3MBP:4OW5:PVTW:COLX:IJJN:3WSA:SV74:3SNL:7SVG:E72B:5PO2:I36N Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:88 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff false] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:40 SystemTime:2021-06-12 15:11:25.222011444 -0700 PDT LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.10.41-1-MANJARO OperatingSystem:Manjaro Linux OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:16644579328 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:bzvestey-worktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:36cc874494a56a253cd181a1a685b44b58a2e34a.m Expected:36cc874494a56a253cd181a1a685b44b58a2e34a.m} RuncCommit:{ID:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7 Expected:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Experimental:true Name:buildx Path:/usr/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-tp-docker]] Warnings:}}
I0612 15:11:25.278941 68916 start_flags.go:259] no existing cluster config was found, will generate one from the flags
I0612 15:11:25.279811 68916 start_flags.go:314] Using suggested 3900MB memory alloc based on sys=15873MB, container=15873MB
I0612 15:11:25.279934 68916 start_flags.go:715] Wait components to verify : map[apiserver:true system_pods:true]
I0612 15:11:25.279943 68916 cni.go:93] Creating CNI manager for ""
I0612 15:11:25.279952 68916 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0612 15:11:25.279964 68916 start_flags.go:273] config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e Memory:3900 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
I0612 15:11:25.288574 68916 out.go:170] 👍 Starting control plane node minikube in cluster minikube
I0612 15:11:25.288654 68916 cache.go:111] Beginning downloading kic base image for docker with docker
W0612 15:11:25.288670 68916 out.go:424] no arguments passed for "🚜 Pulling base image ...\n" - returning raw string
W0612 15:11:25.288712 68916 out.go:424] no arguments passed for "🚜 Pulling base image ...\n" - returning raw string
I0612 15:11:25.297286 68916 out.go:170] 🚜 Pulling base image ...
I0612 15:11:25.297362 68916 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime docker
I0612 15:11:25.297482 68916 preload.go:106] Found local preload: /home/bzvestey/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-docker-overlay2-amd64.tar.lz4
I0612 15:11:25.297500 68916 cache.go:54] Caching tarball of preloaded images
I0612 15:11:25.297568 68916 preload.go:132] Found /home/bzvestey/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0612 15:11:25.297558 68916 image.go:116] Checking for gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local cache directory
I0612 15:11:25.297589 68916 cache.go:57] Finished verifying existence of preloaded tar for v1.20.2 on docker
I0612 15:11:25.297614 68916 image.go:119] Found gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local cache directory, skipping pull
I0612 15:11:25.297642 68916 cache.go:131] gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e exists in cache, skipping pull
I0612 15:11:25.297781 68916 image.go:130] Checking for gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local docker daemon
I0612 15:11:25.298731 68916 profile.go:148] Saving config to /home/bzvestey/.minikube/profiles/minikube/config.json ...
I0612 15:11:25.298790 68916 lock.go:36] WriteFile acquiring /home/bzvestey/.minikube/profiles/minikube/config.json: {Name:mkda262e918d18c3c99523599978ad8dd65663d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0612 15:11:25.380892 68916 image.go:134] Found gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local docker daemon, skipping pull
I0612 15:11:25.380902 68916 cache.go:155] gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e exists in daemon, skipping pull
I0612 15:11:25.380911 68916 cache.go:194] Successfully downloaded all kic artifacts
I0612 15:11:25.380932 68916 start.go:313] acquiring machines lock for minikube: {Name:mk8ddead9fb15180016283278991bd9deb8e0cbc Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0612 15:11:25.380997 68916 start.go:317] acquired machines lock for "minikube" in 51.943µs
I0612 15:11:25.381017 68916 start.go:89] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e Memory:3900 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}
I0612 15:11:25.381086 68916 start.go:126] createHost starting for "" (driver="docker")
I0612 15:11:25.389962 68916 out.go:197] 🔥 Creating docker container (CPUs=2, Memory=3900MB) ...
I0612 15:11:25.390272 68916 start.go:160] libmachine.API.Create for "minikube" (driver="docker")
I0612 15:11:25.390293 68916 client.go:168] LocalClient.Create starting
I0612 15:11:25.390362 68916 main.go:128] libmachine: Reading certificate data from /home/bzvestey/.minikube/certs/ca.pem
I0612 15:11:25.390389 68916 main.go:128] libmachine: Decoding PEM data...
I0612 15:11:25.390412 68916 main.go:128] libmachine: Parsing certificate...
I0612 15:11:25.390548 68916 main.go:128] libmachine: Reading certificate data from /home/bzvestey/.minikube/certs/cert.pem
I0612 15:11:25.390567 68916 main.go:128] libmachine: Decoding PEM data...
I0612 15:11:25.390578 68916 main.go:128] libmachine: Parsing certificate...
I0612 15:11:25.390914 68916 cli_runner.go:115] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0612 15:11:25.420254 68916 cli_runner.go:162] docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0612 15:11:25.420295 68916 network_create.go:249] running [docker network inspect minikube] to gather additional debugging logs...
I0612 15:11:25.420307 68916 cli_runner.go:115] Run: docker network inspect minikube
W0612 15:11:25.453240 68916 cli_runner.go:162] docker network inspect minikube returned with exit code 1
I0612 15:11:25.453254 68916 network_create.go:252] error running [docker network inspect minikube]: docker network inspect minikube: exit status 1
stdout:
[]
stderr:
Error: No such network: minikube
I0612 15:11:25.453261 68916 network_create.go:254] output of [docker network inspect minikube]: -- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: minikube
** /stderr **
I0612 15:11:25.453295 68916 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0612 15:11:25.482692 68916 network.go:263] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0007e2010] misses:0}
I0612 15:11:25.482732 68916 network.go:210] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0612 15:11:25.482750 68916 network_create.go:100] attempt to create docker network minikube 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0612 15:11:25.482793 68916 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true minikube
I0612 15:11:25.566415 68916 network_create.go:84] docker network minikube 192.168.49.0/24 created
I0612 15:11:25.566434 68916 kic.go:106] calculated static IP "192.168.49.2" for the "minikube" container
I0612 15:11:25.566482 68916 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
I0612 15:11:25.596804 68916 cli_runner.go:115] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I0612 15:11:25.637179 68916 oci.go:102] Successfully created a docker volume minikube
I0612 15:11:25.637227 68916 cli_runner.go:115] Run: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e -d /var/lib
I0612 15:11:26.712018 68916 cli_runner.go:168] Completed: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e -d /var/lib: (1.074704411s)
I0612 15:11:26.712059 68916 oci.go:106] Successfully prepared a docker volume minikube
W0612 15:11:26.712127 68916 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
W0612 15:11:26.712146 68916 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
I0612 15:11:26.712231 68916 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime docker
I0612 15:11:26.712245 68916 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
I0612 15:11:26.712316 68916 preload.go:106] Found local preload: /home/bzvestey/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-docker-overlay2-amd64.tar.lz4
I0612 15:11:26.712362 68916 kic.go:179] Starting extracting preloaded images to volume ...
I0612 15:11:26.712542 68916 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/bzvestey/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e -I lz4 -xf /preloaded.tar -C /extractDir
I0612 15:11:26.840896 68916 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e
I0612 15:11:27.588324 68916 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Running}}
I0612 15:11:27.622016 68916 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0612 15:11:27.664558 68916 cli_runner.go:115] Run: docker exec minikube stat /var/lib/dpkg/alternatives/iptables
I0612 15:11:27.734680 68916 oci.go:278] the created container "minikube" has a running status.
I0612 15:11:27.734730 68916 kic.go:210] Creating ssh key for kic: /home/bzvestey/.minikube/machines/minikube/id_rsa...
I0612 15:11:27.878454 68916 kic_runner.go:188] docker (temp): /home/bzvestey/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0612 15:11:27.961817 68916 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0612 15:11:28.000333 68916 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0612 15:11:28.000344 68916 kic_runner.go:115] Args: [docker exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys]
I0612 15:11:30.459373 68916 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/bzvestey/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e -I lz4 -xf /preloaded.tar -C /extractDir: (3.746740435s)
I0612 15:11:30.459401 68916 kic.go:188] duration metric: took 3.747038 seconds to extract preloaded images to volume
I0612 15:11:30.459551 68916 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0612 15:11:30.504263 68916 machine.go:88] provisioning docker machine ...
I0612 15:11:30.504283 68916 ubuntu.go:169] provisioning hostname "minikube"
I0612 15:11:30.504324 68916 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0612 15:11:30.533600 68916 main.go:128] libmachine: Using SSH client type: native
I0612 15:11:30.533773 68916 main.go:128] libmachine: &{{{ 0 [] [] []} docker [0x5615759a27e0] 0x5615759a27a0 [] 0s} 127.0.0.1 49167 }
I0612 15:11:30.533782 68916 main.go:128] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0612 15:11:30.713248 68916 main.go:128] libmachine: SSH cmd err, output: : minikube
I0612 15:11:30.713380 68916 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0612 15:11:30.772346 68916 main.go:128] libmachine: Using SSH client type: native
I0612 15:11:30.772524 68916 main.go:128] libmachine: &{{{ 0 [] [] []} docker [0x5615759a27e0] 0x5615759a27a0 [] 0s} 127.0.0.1 49167 }
I0612 15:11:30.772542 68916 main.go:128] libmachine: About to run SSH command:
I0612 15:11:30.934710 68916 main.go:128] libmachine: SSH cmd err, output: :
I0612 15:11:30.934744 68916 ubuntu.go:175] set auth options {CertDir:/home/bzvestey/.minikube CaCertPath:/home/bzvestey/.minikube/certs/ca.pem CaPrivateKeyPath:/home/bzvestey/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/bzvestey/.minikube/machines/server.pem ServerKeyPath:/home/bzvestey/.minikube/machines/server-key.pem ClientKeyPath:/home/bzvestey/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/bzvestey/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/bzvestey/.minikube}
I0612 15:11:30.934774 68916 ubuntu.go:177] setting up certificates
I0612 15:11:30.934788 68916 provision.go:83] configureAuth start
I0612 15:11:30.934883 68916 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0612 15:11:30.988309 68916 provision.go:137] copyHostCerts
I0612 15:11:30.988349 68916 exec_runner.go:145] found /home/bzvestey/.minikube/cert.pem, removing ...
I0612 15:11:30.988355 68916 exec_runner.go:190] rm: /home/bzvestey/.minikube/cert.pem
I0612 15:11:30.988405 68916 exec_runner.go:152] cp: /home/bzvestey/.minikube/certs/cert.pem --> /home/bzvestey/.minikube/cert.pem (1127 bytes)
I0612 15:11:30.988488 68916 exec_runner.go:145] found /home/bzvestey/.minikube/key.pem, removing ...
I0612 15:11:30.988493 68916 exec_runner.go:190] rm: /home/bzvestey/.minikube/key.pem
I0612 15:11:30.988521 68916 exec_runner.go:152] cp: /home/bzvestey/.minikube/certs/key.pem --> /home/bzvestey/.minikube/key.pem (1679 bytes)
I0612 15:11:30.988570 68916 exec_runner.go:145] found /home/bzvestey/.minikube/ca.pem, removing ...
I0612 15:11:30.988575 68916 exec_runner.go:190] rm: /home/bzvestey/.minikube/ca.pem
I0612 15:11:30.988601 68916 exec_runner.go:152] cp: /home/bzvestey/.minikube/certs/ca.pem --> /home/bzvestey/.minikube/ca.pem (1082 bytes)
I0612 15:11:30.988640 68916 provision.go:111] generating server cert: /home/bzvestey/.minikube/machines/server.pem ca-key=/home/bzvestey/.minikube/certs/ca.pem private-key=/home/bzvestey/.minikube/certs/ca-key.pem org=bzvestey.minikube san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube minikube]
I0612 15:11:31.263214 68916 provision.go:165] copyRemoteCerts
I0612 15:11:31.263278 68916 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0612 15:11:31.263305 68916 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0612 15:11:31.291999 68916 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49167 SSHKeyPath:/home/bzvestey/.minikube/machines/minikube/id_rsa Username:docker}
I0612 15:11:31.397043 68916 ssh_runner.go:316] scp /home/bzvestey/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0612 15:11:31.448519 68916 ssh_runner.go:316] scp /home/bzvestey/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
I0612 15:11:31.503746 68916 ssh_runner.go:316] scp /home/bzvestey/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0612 15:11:31.555844 68916 provision.go:86] duration metric: configureAuth took 621.036171ms
I0612 15:11:31.555875 68916 ubuntu.go:193] setting minikube options for container-runtime
I0612 15:11:31.556329 68916 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0612 15:11:31.612130 68916 main.go:128] libmachine: Using SSH client type: native
I0612 15:11:31.612325 68916 main.go:128] libmachine: &{{{ 0 [] [] []} docker [0x5615759a27e0] 0x5615759a27a0 [] 0s} 127.0.0.1 49167 }
I0612 15:11:31.612337 68916 main.go:128] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0612 15:11:31.785767 68916 main.go:128] libmachine: SSH cmd err, output: : overlay
I0612 15:11:31.785801 68916 ubuntu.go:71] root file system type: overlay
I0612 15:11:31.786293 68916 provision.go:296] Updating docker unit: /lib/systemd/system/docker.service ...
I0612 15:11:31.786404 68916 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0612 15:11:31.839282 68916 main.go:128] libmachine: Using SSH client type: native
I0612 15:11:31.839423 68916 main.go:128] libmachine: &{{{ 0 [] [] []} docker [0x5615759a27e0] 0x5615759a27a0 [] 0s} 127.0.0.1 49167 }
I0612 15:11:31.839494 68916 main.go:128] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0612 15:11:32.029605 68916 main.go:128] libmachine: SSH cmd err, output: : [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0612 15:11:32.029733 68916 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0612 15:11:32.084496 68916 main.go:128] libmachine: Using SSH client type: native
I0612 15:11:32.084634 68916 main.go:128] libmachine: &{{{ 0 [] [] []} docker [0x5615759a27e0] 0x5615759a27a0 [] 0s} 127.0.0.1 49167 }
I0612 15:11:32.084648 68916 main.go:128] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0612 15:11:33.272377 68916 main.go:128] libmachine: SSH cmd err, output: : --- /lib/systemd/system/docker.service 2021-04-09 22:45:28.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2021-06-12 22:11:32.021504825 +0000
@@ -1,30 +1,32 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
+BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
Having non-zero Limit*s causes performance problems due to accounting overhead
in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I0612 15:11:33.272409 68916 machine.go:91] provisioned docker machine in 2.768133143s
I0612 15:11:33.272425 68916 client.go:171] LocalClient.Create took 7.882125864s
I0612 15:11:33.272488 68916 start.go:168] duration metric: libmachine.API.Create for "minikube" took 7.882185568s
I0612 15:11:33.272505 68916 start.go:267] post-start starting for "minikube" (driver="docker")
I0612 15:11:33.272515 68916 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0612 15:11:33.272651 68916 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0612 15:11:33.272742 68916 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0612 15:11:33.316879 68916 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49167 SSHKeyPath:/home/bzvestey/.minikube/machines/minikube/id_rsa Username:docker}
I0612 15:11:33.425880 68916 ssh_runner.go:149] Run: cat /etc/os-release
I0612 15:11:33.434987 68916 main.go:128] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0612 15:11:33.435025 68916 main.go:128] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0612 15:11:33.435051 68916 main.go:128] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0612 15:11:33.435062 68916 info.go:137] Remote host: Ubuntu 20.04.2 LTS
I0612 15:11:33.435089 68916 filesync.go:118] Scanning /home/bzvestey/.minikube/addons for local assets ...
I0612 15:11:33.435192 68916 filesync.go:118] Scanning /home/bzvestey/.minikube/files for local assets ...
I0612 15:11:33.435243 68916 start.go:270] post-start completed in 162.728095ms
I0612 15:11:33.435983 68916 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0612 15:11:33.489187 68916 profile.go:148] Saving config to /home/bzvestey/.minikube/profiles/minikube/config.json ...
I0612 15:11:33.489397 68916 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0612 15:11:33.489424 68916 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0612 15:11:33.519786 68916 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49167 SSHKeyPath:/home/bzvestey/.minikube/machines/minikube/id_rsa Username:docker}
I0612 15:11:33.614737 68916 start.go:129] duration metric: createHost completed in 8.233634677s
I0612 15:11:33.614772 68916 start.go:80] releasing machines lock for "minikube", held for 8.233761929s
I0612 15:11:33.614991 68916 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0612 15:11:33.684251 68916 ssh_runner.go:149] Run: systemctl --version
I0612 15:11:33.684304 68916 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0612 15:11:33.684311 68916 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
I0612 15:11:33.684355 68916 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0612 15:11:33.746475 68916 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49167 SSHKeyPath:/home/bzvestey/.minikube/machines/minikube/id_rsa Username:docker}
I0612 15:11:33.748329 68916 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49167 SSHKeyPath:/home/bzvestey/.minikube/machines/minikube/id_rsa Username:docker}
I0612 15:11:33.937826 68916 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
I0612 15:11:33.967308 68916 ssh_runner.go:149] Run: sudo systemctl cat docker.service
I0612 15:11:33.994246 68916 cruntime.go:225] skipping containerd shutdown because we are bound to it
I0612 15:11:33.994336 68916 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
I0612 15:11:34.018387 68916 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0612 15:11:34.050586 68916 ssh_runner.go:149] Run: sudo systemctl unmask docker.service
I0612 15:11:34.194137 68916 ssh_runner.go:149] Run: sudo systemctl enable docker.socket
I0612 15:11:34.309795 68916 ssh_runner.go:149] Run: sudo systemctl cat docker.service
I0612 15:11:34.322642 68916 ssh_runner.go:149] Run: sudo systemctl daemon-reload
I0612 15:11:34.424015 68916 ssh_runner.go:149] Run: sudo systemctl start docker
I0612 15:11:34.435650 68916 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
I0612 15:11:34.496346 68916 out.go:197] 🐳 Preparing Kubernetes v1.20.2 on Docker 20.10.6 ...
I0612 15:11:34.496431 68916 cli_runner.go:115] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0612 15:11:34.528404 68916 ssh_runner.go:149] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I0612 15:11:34.531930 68916 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$ ' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0612 15:11:34.543783 68916 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime docker
I0612 15:11:34.543801 68916 preload.go:106] Found local preload: /home/bzvestey/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-docker-overlay2-amd64.tar.lz4
I0612 15:11:34.543829 68916 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
I0612 15:11:34.593751 68916 docker.go:528] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/kube-proxy:v1.20.2
k8s.gcr.io/kube-controller-manager:v1.20.2
k8s.gcr.io/kube-apiserver:v1.20.2
k8s.gcr.io/kube-scheduler:v1.20.2
kubernetesui/dashboard:v2.1.0
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/pause:3.2
-- /stdout --
I0612 15:11:34.593764 68916 docker.go:465] Images already preloaded, skipping extraction
I0612 15:11:34.593813 68916 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
I0612 15:11:34.656260 68916 docker.go:528] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/kube-proxy:v1.20.2
k8s.gcr.io/kube-controller-manager:v1.20.2
k8s.gcr.io/kube-apiserver:v1.20.2
k8s.gcr.io/kube-scheduler:v1.20.2
kubernetesui/dashboard:v2.1.0
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/pause:3.2
-- /stdout --
I0612 15:11:34.656284 68916 cache_images.go:74] Images are preloaded, skipping loading
I0612 15:11:34.656356 68916 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
I0612 15:11:34.747588 68916 cni.go:93] Creating CNI manager for ""
I0612 15:11:34.747599 68916 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0612 15:11:34.747605 68916 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0612 15:11:34.747615 68916 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.20.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0612 15:11:34.747722 68916 kubeadm.go:157] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
  - ttl: 24h0m0s
    usages:
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.2
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%!"(MISSING)
  nodefs.inodesFree: "0%!"(MISSING)
  imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
I0612 15:11:34.747795 68916 kubeadm.go:901] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0612 15:11:34.747843 68916 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.2
I0612 15:11:34.756138 68916 binaries.go:44] Found k8s binaries, skipping transfer
I0612 15:11:34.756183 68916 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0612 15:11:34.763875 68916 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (334 bytes)
I0612 15:11:34.780067 68916 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0612 15:11:34.798181 68916 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1840 bytes)
I0612 15:11:34.819804 68916 ssh_runner.go:149] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0612 15:11:34.823895 68916 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0612 15:11:34.846012 68916 certs.go:52] Setting up /home/bzvestey/.minikube/profiles/minikube for IP: 192.168.49.2
I0612 15:11:34.846101 68916 certs.go:171] skipping minikubeCA CA generation: /home/bzvestey/.minikube/ca.key
I0612 15:11:34.846125 68916 certs.go:171] skipping proxyClientCA CA generation: /home/bzvestey/.minikube/proxy-client-ca.key
I0612 15:11:34.846186 68916 certs.go:286] generating minikube-user signed cert: /home/bzvestey/.minikube/profiles/minikube/client.key
I0612 15:11:34.846193 68916 crypto.go:69] Generating cert /home/bzvestey/.minikube/profiles/minikube/client.crt with IP's: []
I0612 15:11:34.998583 68916 crypto.go:157] Writing cert to /home/bzvestey/.minikube/profiles/minikube/client.crt ...
I0612 15:11:34.998596 68916 lock.go:36] WriteFile acquiring /home/bzvestey/.minikube/profiles/minikube/client.crt: {Name:mk09ad8dc7b454626ff8a93652ee10868c89d096 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0612 15:11:34.998766 68916 crypto.go:165] Writing key to /home/bzvestey/.minikube/profiles/minikube/client.key ...
I0612 15:11:34.998772 68916 lock.go:36] WriteFile acquiring /home/bzvestey/.minikube/profiles/minikube/client.key: {Name:mkb5449e63ecac2631fb2a0437febd315c2aaa4e Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0612 15:11:34.998849 68916 certs.go:286] generating minikube signed cert: /home/bzvestey/.minikube/profiles/minikube/apiserver.key.dd3b5fb2
I0612 15:11:34.998852 68916 crypto.go:69] Generating cert /home/bzvestey/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0612 15:11:35.187822 68916 crypto.go:157] Writing cert to /home/bzvestey/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 ...
I0612 15:11:35.187832 68916 lock.go:36] WriteFile acquiring /home/bzvestey/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2: {Name:mkd649bed18953a07110a5071f46f1480a29cedb Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0612 15:11:35.187993 68916 crypto.go:165] Writing key to /home/bzvestey/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 ...
I0612 15:11:35.187998 68916 lock.go:36] WriteFile acquiring /home/bzvestey/.minikube/profiles/minikube/apiserver.key.dd3b5fb2: {Name:mk5dd328d5476bf757a3370982c65b99c56751a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0612 15:11:35.188122 68916 certs.go:297] copying /home/bzvestey/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 -> /home/bzvestey/.minikube/profiles/minikube/apiserver.crt
I0612 15:11:35.188177 68916 certs.go:301] copying /home/bzvestey/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 -> /home/bzvestey/.minikube/profiles/minikube/apiserver.key
I0612 15:11:35.188210 68916 certs.go:286] generating aggregator signed cert: /home/bzvestey/.minikube/profiles/minikube/proxy-client.key
I0612 15:11:35.188213 68916 crypto.go:69] Generating cert /home/bzvestey/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0612 15:11:35.422805 68916 crypto.go:157] Writing cert to /home/bzvestey/.minikube/profiles/minikube/proxy-client.crt ...
I0612 15:11:35.422814 68916 lock.go:36] WriteFile acquiring /home/bzvestey/.minikube/profiles/minikube/proxy-client.crt: {Name:mkb512ef5b4b006e2f19bdc48cef45c20da3f0fc Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0612 15:11:35.422990 68916 crypto.go:165] Writing key to /home/bzvestey/.minikube/profiles/minikube/proxy-client.key ...
I0612 15:11:35.423008 68916 lock.go:36] WriteFile acquiring /home/bzvestey/.minikube/profiles/minikube/proxy-client.key: {Name:mk1b49955018fa31fdc904dff058bce856d17143 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0612 15:11:35.423163 68916 certs.go:361] found cert: /home/bzvestey/.minikube/certs/home/bzvestey/.minikube/certs/ca-key.pem (1675 bytes)
I0612 15:11:35.423187 68916 certs.go:361] found cert: /home/bzvestey/.minikube/certs/home/bzvestey/.minikube/certs/ca.pem (1082 bytes)
I0612 15:11:35.423204 68916 certs.go:361] found cert: /home/bzvestey/.minikube/certs/home/bzvestey/.minikube/certs/cert.pem (1127 bytes)
I0612 15:11:35.423220 68916 certs.go:361] found cert: /home/bzvestey/.minikube/certs/home/bzvestey/.minikube/certs/key.pem (1679 bytes)
I0612 15:11:35.424246 68916 ssh_runner.go:316] scp /home/bzvestey/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0612 15:11:35.444245 68916 ssh_runner.go:316] scp /home/bzvestey/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0612 15:11:35.465714 68916 ssh_runner.go:316] scp /home/bzvestey/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0612 15:11:35.487008 68916 ssh_runner.go:316] scp /home/bzvestey/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0612 15:11:35.508099 68916 ssh_runner.go:316] scp /home/bzvestey/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0612 15:11:35.531259 68916 ssh_runner.go:316] scp /home/bzvestey/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0612 15:11:35.551114 68916 ssh_runner.go:316] scp /home/bzvestey/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0612 15:11:35.570102 68916 ssh_runner.go:316] scp /home/bzvestey/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0612 15:11:35.588711 68916 ssh_runner.go:316] scp /home/bzvestey/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0612 15:11:35.606447 68916 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0612 15:11:35.619455 68916 ssh_runner.go:149] Run: openssl version
I0612 15:11:35.624287 68916 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0612 15:11:35.632066 68916 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0612 15:11:35.635132 68916 certs.go:402] hashing: -rw-r--r-- 1 root root 1111 Jun 3 00:33 /usr/share/ca-certificates/minikubeCA.pem
I0612 15:11:35.635167 68916 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0612 15:11:35.639783 68916 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0612 15:11:35.647106 68916 kubeadm.go:381] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e Memory:3900 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
I0612 15:11:35.647190 68916 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*(kube-system) --format={{.ID}}
I0612 15:11:35.682274 68916 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0612 15:11:35.689349 68916 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0612 15:11:35.696599 68916 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
I0612 15:11:35.696631 68916 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0612 15:11:35.703914 68916 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0612 15:11:35.703940 68916 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
W0612 15:12:02.455916 68916 out.go:424] no arguments passed for " ▪ Generating certificates and keys ..." - returning raw string
W0612 15:12:02.455970 68916 out.go:424] no arguments passed for " ▪ Generating certificates and keys ..." - returning raw string
I0612 15:12:02.468475 68916 out.go:197] ▪ Generating certificates and keys ...
W0612 15:12:02.473631 68916 out.go:424] no arguments passed for " ▪ Booting up control plane ..." - returning raw string
W0612 15:12:02.473675 68916 out.go:424] no arguments passed for " ▪ Booting up control plane ..." - returning raw string
I0612 15:12:02.483008 68916 out.go:197] ▪ Booting up control plane ...
W0612 15:12:02.487599 68916 out.go:424] no arguments passed for " ▪ Configuring RBAC rules ..." - returning raw string
W0612 15:12:02.487645 68916 out.go:424] no arguments passed for " ▪ Configuring RBAC rules ..." - returning raw string
I0612 15:12:02.496465 68916 out.go:197] ▪ Configuring RBAC rules ...
I0612 15:12:02.504504 68916 cni.go:93] Creating CNI manager for ""
I0612 15:12:02.504532 68916 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0612 15:12:02.504585 68916 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0612 15:12:02.504739 68916 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0612 15:12:02.504785 68916 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl label nodes minikube.k8s.io/version=v1.20.0 minikube.k8s.io/commit=c61663e942ec43b20e8e70839dcca52e44cd85ae-dirty minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2021_06_12T15_12_02_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0612 15:12:04.335214 68916 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig: (1.83044449s)
I0612 15:12:04.335257 68916 kubeadm.go:977] duration metric: took 1.830678638s to wait for elevateKubeSystemPrivileges.
I0612 15:12:04.335261 68916 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.2/kubectl label nodes minikube.k8s.io/version=v1.20.0 minikube.k8s.io/commit=c61663e942ec43b20e8e70839dcca52e44cd85ae-dirty minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2021_06_12T15_12_02_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig: (1.830456266s)
I0612 15:12:04.335306 68916 ssh_runner.go:189] Completed: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj": (1.830706569s)
I0612 15:12:04.335318 68916 ops.go:34] apiserver oom_adj: -16
I0612 15:12:04.335325 68916 kubeadm.go:383] StartCluster complete in 28.688226429s
I0612 15:12:04.335341 68916 settings.go:142] acquiring lock: {Name:mk265c9bb5ded81493ce88fec9fb7405f670feba Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0612 15:12:04.335465 68916 settings.go:150] Updating kubeconfig: /home/bzvestey/.kube/config
I0612 15:12:04.336859 68916 lock.go:36] WriteFile acquiring /home/bzvestey/.kube/config: {Name:mk26dfde4f0ec489c8c85de45feb5ce9112d14e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0612 15:12:04.860362 68916 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "minikube" rescaled to 1
I0612 15:12:04.860430 68916 start.go:201] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}
W0612 15:12:04.860475 68916 out.go:424] no arguments passed for "🔎 Verifying Kubernetes components...\n" - returning raw string
W0612 15:12:04.860501 68916 out.go:424] no arguments passed for "🔎 Verifying Kubernetes components...\n" - returning raw string
I0612 15:12:04.869175 68916 out.go:170] 🔎 Verifying Kubernetes components...
I0612 15:12:04.860575 68916 addons.go:328] enableAddons start: toEnable=map[], additional=[]
I0612 15:12:04.869369 68916 addons.go:55] Setting storage-provisioner=true in profile "minikube"
I0612 15:12:04.869380 68916 addons.go:55] Setting default-storageclass=true in profile "minikube"
I0612 15:12:04.869408 68916 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
I0612 15:12:04.869415 68916 addons.go:131] Setting addon storage-provisioner=true in "minikube"
I0612 15:12:04.869415 68916 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
W0612 15:12:04.869433 68916 addons.go:140] addon storage-provisioner should already be in state true
I0612 15:12:04.869466 68916 host.go:66] Checking if "minikube" exists ...
I0612 15:12:04.870547 68916 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0612 15:12:04.871124 68916 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0612 15:12:04.904216 68916 api_server.go:50] waiting for apiserver process to appear ...
I0612 15:12:04.904258 68916 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
I0612 15:12:04.931993 68916 api_server.go:70] duration metric: took 71.522311ms to wait for apiserver process to appear ...
I0612 15:12:04.932007 68916 api_server.go:86] waiting for apiserver healthz status ...
I0612 15:12:04.932014 68916 api_server.go:223] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0612 15:12:04.948137 68916 out.go:170] ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0612 15:12:04.948743 68916 addons.go:261] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0612 15:12:04.948751 68916 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0612 15:12:04.948822 68916 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0612 15:12:04.948925 68916 api_server.go:249] https://192.168.49.2:8443/healthz returned 200:
ok
I0612 15:12:04.949708 68916 addons.go:131] Setting addon default-storageclass=true in "minikube"
W0612 15:12:04.949714 68916 addons.go:140] addon default-storageclass should already be in state true
I0612 15:12:04.949724 68916 host.go:66] Checking if "minikube" exists ...
I0612 15:12:04.949728 68916 api_server.go:139] control plane version: v1.20.2
I0612 15:12:04.949737 68916 api_server.go:129] duration metric: took 17.726309ms to wait for apiserver health ...
I0612 15:12:04.949742 68916 system_pods.go:43] waiting for kube-system pods to appear ...
I0612 15:12:04.950085 68916 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0612 15:12:04.957923 68916 system_pods.go:59] 0 kube-system pods found
I0612 15:12:04.957936 68916 retry.go:31] will retry after 263.082536ms: only 0 pod(s) have shown up
I0612 15:12:04.990664 68916 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49167 SSHKeyPath:/home/bzvestey/.minikube/machines/minikube/id_rsa Username:docker}
I0612 15:12:04.994286 68916 addons.go:261] installing /etc/kubernetes/addons/storageclass.yaml
I0612 15:12:04.994301 68916 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0612 15:12:04.994365 68916 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0612 15:12:05.028146 68916 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49167 SSHKeyPath:/home/bzvestey/.minikube/machines/minikube/id_rsa Username:docker}
I0612 15:12:05.118867 68916 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0612 15:12:05.150373 68916 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0612 15:12:05.227483 68916 system_pods.go:59] 0 kube-system pods found
I0612 15:12:05.227515 68916 retry.go:31] will retry after 381.329545ms: only 0 pod(s) have shown up
I0612 15:12:05.619307 68916 system_pods.go:59] 0 kube-system pods found
I0612 15:12:05.619343 68916 retry.go:31] will retry after 422.765636ms: only 0 pod(s) have shown up
I0612 15:12:06.048611 68916 system_pods.go:59] 0 kube-system pods found
I0612 15:12:06.048637 68916 retry.go:31] will retry after 473.074753ms: only 0 pod(s) have shown up
I0612 15:12:06.397680 68916 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.278746897s)
I0612 15:12:06.397796 68916 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.247390297s)
I0612 15:12:06.406622 68916 out.go:170] 🌟 Enabled addons: storage-provisioner, default-storageclass
I0612 15:12:06.406670 68916 addons.go:330] enableAddons completed in 1.546127869s
I0612 15:12:06.528110 68916 system_pods.go:59] 1 kube-system pods found
I0612 15:12:06.528193 68916 system_pods.go:61] "storage-provisioner" [5d2c4078-b894-4a90-af10-19b76724b357] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0612 15:12:06.528217 68916 retry.go:31] will retry after 587.352751ms: only 1 pod(s) have shown up
I0612 15:12:07.123029 68916 system_pods.go:59] 1 kube-system pods found
I0612 15:12:07.123063 68916 system_pods.go:61] "storage-provisioner" [5d2c4078-b894-4a90-af10-19b76724b357] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0612 15:12:07.123079 68916 retry.go:31] will retry after 834.206799ms: only 1 pod(s) have shown up
I0612 15:12:07.963903 68916 system_pods.go:59] 1 kube-system pods found
I0612 15:12:07.963940 68916 system_pods.go:61] "storage-provisioner" [5d2c4078-b894-4a90-af10-19b76724b357] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0612 15:12:07.963967 68916 retry.go:31] will retry after 746.553905ms: only 1 pod(s) have shown up
I0612 15:12:08.718191 68916 system_pods.go:59] 1 kube-system pods found
I0612 15:12:08.718226 68916 system_pods.go:61] "storage-provisioner" [5d2c4078-b894-4a90-af10-19b76724b357] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0612 15:12:08.718244 68916 retry.go:31] will retry after 987.362415ms: only 1 pod(s) have shown up
I0612 15:12:09.713129 68916 system_pods.go:59] 1 kube-system pods found
I0612 15:12:09.713163 68916 system_pods.go:61] "storage-provisioner" [5d2c4078-b894-4a90-af10-19b76724b357] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0612 15:12:09.713187 68916 retry.go:31] will retry after 1.189835008s: only 1 pod(s) have shown up
I0612 15:12:10.912032 68916 system_pods.go:59] 5 kube-system pods found
I0612 15:12:10.912060 68916 system_pods.go:61] "etcd-minikube" [c569ef90-8a53-4b5a-b420-1dca7f948c07] Pending
I0612 15:12:10.912078 68916 system_pods.go:61] "kube-apiserver-minikube" [86c613f9-b736-4754-b5d4-62bf399d4fdc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0612 15:12:10.912091 68916 system_pods.go:61] "kube-controller-manager-minikube" [760628be-1be1-471c-b837-2176d72ca5f4] Pending
I0612 15:12:10.912104 68916 system_pods.go:61] "kube-scheduler-minikube" [efd4e47f-d618-4dfc-90f3-2c5bcc711af1] Pending
I0612 15:12:10.912116 68916 system_pods.go:61] "storage-provisioner" [5d2c4078-b894-4a90-af10-19b76724b357] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0612 15:12:10.912128 68916 system_pods.go:74] duration metric: took 5.962377984s to wait for pod list to return data ...
I0612 15:12:10.912143 68916 kubeadm.go:538] duration metric: took 6.051672951s to wait for : map[apiserver:true system_pods:true] ...
I0612 15:12:10.912167 68916 node_conditions.go:102] verifying NodePressure condition ...
I0612 15:12:10.920945 68916 node_conditions.go:122] node storage ephemeral capacity is 490690488Ki
I0612 15:12:10.920982 68916 node_conditions.go:123] node cpu capacity is 8
I0612 15:12:10.921005 68916 node_conditions.go:105] duration metric: took 8.829438ms to run NodePressure ...
I0612 15:12:10.921026 68916 start.go:206] waiting for startup goroutines ...
I0612 15:12:10.997874 68916 start.go:460] kubectl: 1.21.0, cluster: 1.20.2 (minor skew: 1)
I0612 15:12:11.006523 68916 out.go:170] 🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
==> Docker <==
-- Logs begin at Sat 2021-06-12 22:11:28 UTC, end at Sat 2021-06-12 22:28:08 UTC. --
Jun 12 22:11:28 minikube dockerd[219]: time="2021-06-12T22:11:28.445662724Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
Jun 12 22:11:28 minikube dockerd[219]: time="2021-06-12T22:11:28.447616645Z" level=info msg="parsed scheme: "unix"" module=grpc
Jun 12 22:11:28 minikube dockerd[219]: time="2021-06-12T22:11:28.447645873Z" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
Jun 12 22:11:28 minikube dockerd[219]: time="2021-06-12T22:11:28.447667053Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Jun 12 22:11:28 minikube dockerd[219]: time="2021-06-12T22:11:28.447678303Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
Jun 12 22:11:29 minikube dockerd[219]: time="2021-06-12T22:11:29.176250841Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Jun 12 22:11:29 minikube dockerd[219]: time="2021-06-12T22:11:29.253784511Z" level=warning msg="Your kernel does not support CPU realtime scheduler"
Jun 12 22:11:29 minikube dockerd[219]: time="2021-06-12T22:11:29.253810165Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Jun 12 22:11:29 minikube dockerd[219]: time="2021-06-12T22:11:29.253816258Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Jun 12 22:11:29 minikube dockerd[219]: time="2021-06-12T22:11:29.253975162Z" level=info msg="Loading containers: start."
Jun 12 22:11:29 minikube dockerd[219]: time="2021-06-12T22:11:29.368268545Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jun 12 22:11:29 minikube dockerd[219]: time="2021-06-12T22:11:29.427954821Z" level=info msg="Loading containers: done."
Jun 12 22:11:29 minikube dockerd[219]: time="2021-06-12T22:11:29.800237914Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jun 12 22:11:29 minikube dockerd[219]: time="2021-06-12T22:11:29.800483019Z" level=info msg="Docker daemon" commit=8728dd2 graphdriver(s)=overlay2 version=20.10.6
Jun 12 22:11:29 minikube dockerd[219]: time="2021-06-12T22:11:29.800574214Z" level=info msg="Daemon has completed initialization"
Jun 12 22:11:29 minikube systemd[1]: Started Docker Application Container Engine.
Jun 12 22:11:29 minikube dockerd[219]: time="2021-06-12T22:11:29.877087196Z" level=info msg="API listen on /run/docker.sock"
Jun 12 22:11:32 minikube systemd[1]: docker.service: Current command vanished from the unit file, execution of the command list won't be resumed.
Jun 12 22:11:32 minikube systemd[1]: Stopping Docker Application Container Engine...
Jun 12 22:11:32 minikube dockerd[219]: time="2021-06-12T22:11:32.695204119Z" level=info msg="Processing signal 'terminated'"
Jun 12 22:11:32 minikube dockerd[219]: time="2021-06-12T22:11:32.696410803Z" level=info msg="stopping event stream following graceful shutdown" error="" module=libcontainerd namespace=moby
Jun 12 22:11:32 minikube dockerd[219]: time="2021-06-12T22:11:32.697163767Z" level=info msg="Daemon shutdown complete"
Jun 12 22:11:32 minikube systemd[1]: docker.service: Succeeded.
Jun 12 22:11:32 minikube systemd[1]: Stopped Docker Application Container Engine.
Jun 12 22:11:32 minikube systemd[1]: Starting Docker Application Container Engine...
Jun 12 22:11:32 minikube dockerd[463]: time="2021-06-12T22:11:32.745871808Z" level=info msg="Starting up"
Jun 12 22:11:32 minikube dockerd[463]: time="2021-06-12T22:11:32.747417024Z" level=info msg="parsed scheme: "unix"" module=grpc
Jun 12 22:11:32 minikube dockerd[463]: time="2021-06-12T22:11:32.747436178Z" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
Jun 12 22:11:32 minikube dockerd[463]: time="2021-06-12T22:11:32.747455669Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Jun 12 22:11:32 minikube dockerd[463]: time="2021-06-12T22:11:32.747466761Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
Jun 12 22:11:32 minikube dockerd[463]: time="2021-06-12T22:11:32.748270511Z" level=info msg="parsed scheme: "unix"" module=grpc
Jun 12 22:11:32 minikube dockerd[463]: time="2021-06-12T22:11:32.748289830Z" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
Jun 12 22:11:32 minikube dockerd[463]: time="2021-06-12T22:11:32.748304555Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Jun 12 22:11:32 minikube dockerd[463]: time="2021-06-12T22:11:32.748313458Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
Jun 12 22:11:32 minikube dockerd[463]: time="2021-06-12T22:11:32.775311655Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Jun 12 22:11:32 minikube dockerd[463]: time="2021-06-12T22:11:32.797835329Z" level=warning msg="Your kernel does not support CPU realtime scheduler"
Jun 12 22:11:32 minikube dockerd[463]: time="2021-06-12T22:11:32.797892750Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Jun 12 22:11:32 minikube dockerd[463]: time="2021-06-12T22:11:32.797912740Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Jun 12 22:11:32 minikube dockerd[463]: time="2021-06-12T22:11:32.798303911Z" level=info msg="Loading containers: start."
Jun 12 22:11:33 minikube dockerd[463]: time="2021-06-12T22:11:33.049770659Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jun 12 22:11:33 minikube dockerd[463]: time="2021-06-12T22:11:33.166946074Z" level=info msg="Loading containers: done."
Jun 12 22:11:33 minikube dockerd[463]: time="2021-06-12T22:11:33.221511340Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jun 12 22:11:33 minikube dockerd[463]: time="2021-06-12T22:11:33.222184740Z" level=info msg="Docker daemon" commit=8728dd2 graphdriver(s)=overlay2 version=20.10.6
Jun 12 22:11:33 minikube dockerd[463]: time="2021-06-12T22:11:33.222319959Z" level=info msg="Daemon has completed initialization"
Jun 12 22:11:33 minikube systemd[1]: Started Docker Application Container Engine.
Jun 12 22:11:33 minikube dockerd[463]: time="2021-06-12T22:11:33.282613170Z" level=info msg="API listen on [::]:2376"
Jun 12 22:11:33 minikube dockerd[463]: time="2021-06-12T22:11:33.291277053Z" level=info msg="API listen on /var/run/docker.sock"
Jun 12 22:12:23 minikube dockerd[463]: time="2021-06-12T22:12:23.565037143Z" level=info msg="ignoring event" container=2fe52dc4fb65cfe1000206d03d21f32baf21d44fde30a0c132d09941396e5f4e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 12 22:12:23 minikube dockerd[463]: time="2021-06-12T22:12:23.848888850Z" level=info msg="ignoring event" container=e1f6af81dca776cae82b15508a5ffd1b32c9a45cf0c99bb7726778008cbe94de module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 12 22:12:37 minikube dockerd[463]: time="2021-06-12T22:12:37.335033542Z" level=info msg="ignoring event" container=b975f30646965227d0b4fd6dd04198e224148d7ced8a39ba31d9876265772c57 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 12 22:12:38 minikube dockerd[463]: time="2021-06-12T22:12:38.016586645Z" level=info msg="ignoring event" container=cb6ffd7fa039beba4cd195332819f5922b36230a1afa952151a23461607b2f6c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 12 22:14:15 minikube dockerd[463]: time="2021-06-12T22:14:15.016229176Z" level=error msg="stream copy error: reading from a closed fifo"
Jun 12 22:14:15 minikube dockerd[463]: time="2021-06-12T22:14:15.077327539Z" level=error msg="aa98a91cca6a76c03a007c729ae9b153eb2fb77734983063e0bf6e94e505de08 cleanup: failed to delete container from containerd: no such container"
Jun 12 22:14:16 minikube dockerd[463]: time="2021-06-12T22:14:16.179957276Z" level=info msg="ignoring event" container=f5ec9f8c5e0ce8bef3c60c7bc9b7fe46f401abbd102b7a709fbe8c1238d16a67 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 12 22:14:35 minikube dockerd[463]: time="2021-06-12T22:14:35.185766136Z" level=info msg="ignoring event" container=290ff0f288f7abbc2619938eee4a39b450da1fea246aad744eb416d60e22f58d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 12 22:14:35 minikube dockerd[463]: time="2021-06-12T22:14:35.677392385Z" level=info msg="ignoring event" container=a3208906e675de65a3b2114554d454517bf1cc88a2b1245a4d4c973816512628 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 12 22:16:40 minikube dockerd[463]: time="2021-06-12T22:16:40.800288239Z" level=info msg="ignoring event" container=42a2fdde8c6686d35ba40bebf98961d2460de1edd3c0775945a2d2b473995d85 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 12 22:16:41 minikube dockerd[463]: time="2021-06-12T22:16:41.628082058Z" level=info msg="ignoring event" container=c765953fb20df9511007d0b062b29a07793bcbd69ff82f033ceb58fdddc942a5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 12 22:23:52 minikube dockerd[463]: time="2021-06-12T22:23:52.548294386Z" level=info msg="ignoring event" container=fd5a9796d33a9ae99b585b01a545112d5dc1e856a705b56e8815e484c40c23d5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 12 22:23:53 minikube dockerd[463]: time="2021-06-12T22:23:53.221215915Z" level=info msg="ignoring event" container=5097e60d2f5a55cf133bc9f1fb752817b6f8ff8299cdb0763d7650289f06ba5d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
fc4f50e60692f 6e38f40d628db 15 minutes ago Running storage-provisioner 0 aa751c70d8b3d
5b73b7e1af54c bfe3a36ebd252 15 minutes ago Running coredns 0 67c0a247db719
1c6500f10134c 43154ddb57a83 15 minutes ago Running kube-proxy 0 6c9a08edeed0b
6b43427b7868b a27166429d98e 16 minutes ago Running kube-controller-manager 0 d437733366933
21f311d01e193 0369cf4303ffd 16 minutes ago Running etcd 0 8484322d0a298
354e894575b1e ed2c44fbdd78b 16 minutes ago Running kube-scheduler 0 e1e3658d6219b
7979a48a596d3 a8c2fdb8bf76e 16 minutes ago Running kube-apiserver 0 881d74ca3f80d
==> coredns [5b73b7e1af54] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
CoreDNS-1.7.0
linux/amd64, go1.14.4, f59c03d
==> describe nodes <==
Name: minikube
Roles: control-plane,master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=minikube
kubernetes.io/os=linux
minikube.k8s.io/commit=c61663e942ec43b20e8e70839dcca52e44cd85ae-dirty
minikube.k8s.io/name=minikube
minikube.k8s.io/updated_at=2021_06_12T15_12_02_0700
minikube.k8s.io/version=v1.20.0
node-role.kubernetes.io/control-plane=
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sat, 12 Jun 2021 22:11:57 +0000
Taints: &lt;none&gt;
Unschedulable: false
Lease:
HolderIdentity: minikube
AcquireTime: &lt;unset&gt;
RenewTime: Sat, 12 Jun 2021 22:28:04 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
MemoryPressure False Sat, 12 Jun 2021 22:27:17 +0000 Sat, 12 Jun 2021 22:11:53 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sat, 12 Jun 2021 22:27:17 +0000 Sat, 12 Jun 2021 22:11:53 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sat, 12 Jun 2021 22:27:17 +0000 Sat, 12 Jun 2021 22:11:53 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Sat, 12 Jun 2021 22:27:17 +0000 Sat, 12 Jun 2021 22:12:17 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.49.2
Hostname: minikube
Capacity:
cpu: 8
ephemeral-storage: 490690488Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 16254472Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 490690488Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 16254472Ki
pods: 110
System Info:
Machine ID: 822f5ed6656e44929f6c2cc5d6881453
System UUID: 0bb18266-804a-4154-8908-2db3b81dd84f
Boot ID: 0e9f1284-59c9-49dd-9ab8-7a92e6790ed7
Kernel Version: 5.10.41-1-MANJARO
OS Image: Ubuntu 20.04.2 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.6
Kubelet Version: v1.20.2
Kube-Proxy Version: v1.20.2
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (7 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
kube-system coredns-74ff55c5b-5xnmn 100m (1%!)(MISSING) 0 (0%!)(MISSING) 70Mi (0%!)(MISSING) 170Mi (1%!)(MISSING) 15m
kube-system etcd-minikube 100m (1%!)(MISSING) 0 (0%!)(MISSING) 100Mi (0%!)(MISSING) 0 (0%!)(MISSING) 15m
kube-system kube-apiserver-minikube 250m (3%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 15m
kube-system kube-controller-manager-minikube 200m (2%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 15m
kube-system kube-proxy-79d2v 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 15m
kube-system kube-scheduler-minikube 100m (1%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 15m
kube-system storage-provisioner 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 16m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
cpu 750m (9%!)(MISSING) 0 (0%!)(MISSING)
memory 170Mi (1%!)(MISSING) 170Mi (1%!)(MISSING)
ephemeral-storage 100Mi (0%!)(MISSING) 0 (0%!)(MISSING)
hugepages-1Gi 0 (0%!)(MISSING) 0 (0%!)(MISSING)
hugepages-2Mi 0 (0%!)(MISSING) 0 (0%!)(MISSING)
Events:
Type Reason Age From Message
Normal NodeHasSufficientMemory 16m (x6 over 16m) kubelet Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 16m (x6 over 16m) kubelet Node minikube status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 16m (x5 over 16m) kubelet Node minikube status is now: NodeHasSufficientPID
Normal Starting 15m kubelet Starting kubelet.
Normal NodeHasSufficientMemory 15m kubelet Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 15m kubelet Node minikube status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 15m kubelet Node minikube status is now: NodeHasSufficientPID
Normal NodeNotReady 15m kubelet Node minikube status is now: NodeNotReady
Normal NodeAllocatableEnforced 15m kubelet Updated Node Allocatable limit across pods
Normal NodeReady 15m kubelet Node minikube status is now: NodeReady
Normal Starting 15m kube-proxy Starting kube-proxy.
==> dmesg <==
[Jun12 02:04] kauditd_printk_skb: 8 callbacks suppressed
[Jun12 02:07] kauditd_printk_skb: 82 callbacks suppressed
[Jun12 02:08] kauditd_printk_skb: 301 callbacks suppressed
[ +10.105821] kauditd_printk_skb: 203 callbacks suppressed
[ +0.136004] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
[ +17.306307] kauditd_printk_skb: 9 callbacks suppressed
[ +16.284370] kauditd_printk_skb: 58 callbacks suppressed
[Jun12 02:09] kauditd_printk_skb: 9 callbacks suppressed
[Jun12 02:20] smpboot: Scheduler frequency invariance went wobbly, disabling!
[Jun12 02:21] done.
[ +0.259323] Bluetooth: hci0: unexpected event for opcode 0xfc2f
[Jun12 02:28] kauditd_printk_skb: 1 callbacks suppressed
[ +6.667590] process 'usr/local/bin/dgraph' started with executable stack
[Jun12 02:47] kauditd_printk_skb: 39 callbacks suppressed
[Jun12 03:15] kauditd_printk_skb: 83 callbacks suppressed
[ +5.103913] kauditd_printk_skb: 507 callbacks suppressed
[Jun12 03:16] kauditd_printk_skb: 8 callbacks suppressed
[ +16.059199] kauditd_printk_skb: 58 callbacks suppressed
[ +10.895986] kauditd_printk_skb: 39 callbacks suppressed
[Jun12 03:18] kauditd_printk_skb: 20 callbacks suppressed
[ +8.065799] kauditd_printk_skb: 20 callbacks suppressed
[Jun12 03:19] kauditd_printk_skb: 50 callbacks suppressed
[Jun12 03:23] kauditd_printk_skb: 29 callbacks suppressed
[Jun12 03:24] kauditd_printk_skb: 83 callbacks suppressed
[ +5.003114] kauditd_printk_skb: 496 callbacks suppressed
[ +10.235945] kauditd_printk_skb: 7 callbacks suppressed
[ +15.867294] kauditd_printk_skb: 8 callbacks suppressed
[Jun12 03:25] kauditd_printk_skb: 58 callbacks suppressed
[ +5.979394] kauditd_printk_skb: 13 callbacks suppressed
[Jun12 03:40] kauditd_printk_skb: 16 callbacks suppressed
==> etcd [21f311d01e19] <==
2021-06-12 22:18:56.956394 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-12 22:19:06.956567 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-12 22:19:16.956441 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-12 22:19:26.956397 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-12 22:19:36.956604 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-12 22:19:46.956361 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-12 22:19:56.956513 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-12 22:20:06.956339 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-12 22:20:16.956789 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-12 22:20:26.956399 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-12 22:20:36.956264 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-12 22:20:46.956509 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-12 22:20:56.956587 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-12 22:21:06.956494 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-12 22:21:16.956524 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-12 22:21:26.956288 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-12 22:21:36.956414 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-12 22:21:46.956244 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-12 22:21:54.387424 I | mvcc: store.index: compact 704
2021-06-12 22:21:54.389747 I | mvcc: finished scheduled compaction at 704 (took 1.851615ms)
2021-06-12 22:21:56.956516 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-12 22:22:06.956313 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-12 22:22:16.961264 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-12 22:22:26.956429 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-12 22:22:36.956302 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-12 22:22:46.956414 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-12 22:22:56.956297 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-12 22:23:06.956389 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-12 22:23:16.956327 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-12 22:23:26.956411 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-12 22:23:36.956409 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-12 22:23:46.956588 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-12 22:23:56.956345 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-12 22:24:06.956507 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-12 22:24:16.956511 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-12 22:24:26.956281 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-12 22:24:36.956215 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-12 22:24:46.956746 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-12 22:24:56.966994 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-12 22:25:06.956383 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-12 22:25:16.956297 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-12 22:25:26.956492 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-12 22:25:36.956329 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-12 22:25:46.956340 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-12 22:25:56.956189 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-12 22:26:06.956238 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-12 22:26:16.956376 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-12 22:26:26.956298 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-12 22:26:36.956325 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-12 22:26:46.956398 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-12 22:26:54.404435 I | mvcc: store.index: compact 914
2021-06-12 22:26:54.406236 I | mvcc: finished scheduled compaction at 914 (took 1.220768ms)
2021-06-12 22:26:56.956434 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-12 22:27:06.956368 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-12 22:27:16.956476 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-12 22:27:26.956919 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-12 22:27:36.956389 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-12 22:27:46.956557 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-12 22:27:56.956580 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-12 22:28:06.956537 I | etcdserver/api/etcdhttp: /health OK (status code 200)
==> kernel <==
22:28:08 up 20:24, 0 users, load average: 0.75, 0.93, 0.93
Linux minikube 5.10.41-1-MANJARO #1 SMP PREEMPT Fri May 28 19:10:32 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.2 LTS"
==> kube-apiserver [7979a48a596d] <==
I0612 22:15:42.711963 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0612 22:15:42.711992 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0612 22:16:12.745091 1 client.go:360] parsed scheme: "passthrough"
I0612 22:16:12.745174 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0612 22:16:12.745202 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0612 22:16:51.897368 1 client.go:360] parsed scheme: "passthrough"
I0612 22:16:51.897445 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0612 22:16:51.897466 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0612 22:17:31.586855 1 client.go:360] parsed scheme: "passthrough"
I0612 22:17:31.586949 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0612 22:17:31.586972 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0612 22:18:14.346876 1 client.go:360] parsed scheme: "passthrough"
I0612 22:18:14.346974 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0612 22:18:14.346998 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0612 22:18:45.829365 1 client.go:360] parsed scheme: "passthrough"
I0612 22:18:45.829420 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0612 22:18:45.829434 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0612 22:19:21.679055 1 client.go:360] parsed scheme: "passthrough"
I0612 22:19:21.679157 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0612 22:19:21.679190 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0612 22:20:01.714762 1 client.go:360] parsed scheme: "passthrough"
I0612 22:20:01.714846 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0612 22:20:01.714866 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0612 22:20:39.739793 1 client.go:360] parsed scheme: "passthrough"
I0612 22:20:39.739888 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0612 22:20:39.739913 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0612 22:21:18.045961 1 client.go:360] parsed scheme: "passthrough"
I0612 22:21:18.046039 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0612 22:21:18.046060 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0612 22:21:55.956875 1 client.go:360] parsed scheme: "passthrough"
I0612 22:21:55.956972 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0612 22:21:55.956998 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W0612 22:22:05.451943 1 watcher.go:220] watch chan error: etcdserver: mvcc: required revision has been compacted
I0612 22:22:34.065865 1 client.go:360] parsed scheme: "passthrough"
I0612 22:22:34.065952 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0612 22:22:34.065977 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0612 22:23:06.109977 1 client.go:360] parsed scheme: "passthrough"
I0612 22:23:06.110055 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0612 22:23:06.110075 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0612 22:23:45.216027 1 client.go:360] parsed scheme: "passthrough"
I0612 22:23:45.216102 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0612 22:23:45.216122 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0612 22:24:17.312469 1 client.go:360] parsed scheme: "passthrough"
I0612 22:24:17.312555 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0612 22:24:17.312584 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0612 22:24:55.448135 1 client.go:360] parsed scheme: "passthrough"
I0612 22:24:55.448236 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0612 22:24:55.448291 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0612 22:25:31.249123 1 client.go:360] parsed scheme: "passthrough"
I0612 22:25:31.249203 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0612 22:25:31.249225 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0612 22:26:09.494124 1 client.go:360] parsed scheme: "passthrough"
I0612 22:26:09.494212 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0612 22:26:09.494239 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0612 22:26:53.033176 1 client.go:360] parsed scheme: "passthrough"
I0612 22:26:53.033270 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0612 22:26:53.033302 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0612 22:27:27.217751 1 client.go:360] parsed scheme: "passthrough"
I0612 22:27:27.217834 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0612 22:27:27.217856 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
==> kube-controller-manager [6b43427b7868] <==
I0612 22:12:17.092088 1 controllermanager.go:554] Started "clusterrole-aggregation"
I0612 22:12:17.092207 1 clusterroleaggregation_controller.go:149] Starting ClusterRoleAggregator
I0612 22:12:17.092240 1 shared_informer.go:240] Waiting for caches to sync for ClusterRoleAggregator
I0612 22:12:17.339694 1 controllermanager.go:554] Started "root-ca-cert-publisher"
I0612 22:12:17.340118 1 publisher.go:98] Starting root CA certificate configmap publisher
I0612 22:12:17.340169 1 shared_informer.go:240] Waiting for caches to sync for crt configmap
I0612 22:12:17.361044 1 shared_informer.go:247] Caches are synced for job
W0612 22:12:17.364614 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I0612 22:12:17.391078 1 shared_informer.go:247] Caches are synced for ReplicationController
I0612 22:12:17.391225 1 shared_informer.go:247] Caches are synced for endpoint_slice
I0612 22:12:17.414878 1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator
I0612 22:12:17.414976 1 shared_informer.go:247] Caches are synced for deployment
I0612 22:12:17.415029 1 shared_informer.go:247] Caches are synced for endpoint
I0612 22:12:17.415055 1 shared_informer.go:247] Caches are synced for service account
I0612 22:12:17.418546 1 shared_informer.go:247] Caches are synced for ReplicaSet
I0612 22:12:17.426651 1 shared_informer.go:247] Caches are synced for PV protection
I0612 22:12:17.432496 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving
I0612 22:12:17.434277 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client
I0612 22:12:17.436003 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client
I0612 22:12:17.437906 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown
I0612 22:12:17.440081 1 shared_informer.go:247] Caches are synced for GC
I0612 22:12:17.440429 1 shared_informer.go:247] Caches are synced for crt configmap
I0612 22:12:17.440794 1 shared_informer.go:247] Caches are synced for bootstrap_signer
I0612 22:12:17.441999 1 shared_informer.go:247] Caches are synced for certificate-csrapproving
I0612 22:12:17.442480 1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring
I0612 22:12:17.444777 1 shared_informer.go:247] Caches are synced for TTL
I0612 22:12:17.452051 1 shared_informer.go:247] Caches are synced for namespace
I0612 22:12:17.452096 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 1"
I0612 22:12:17.452736 1 shared_informer.go:247] Caches are synced for node
I0612 22:12:17.452772 1 range_allocator.go:172] Starting range CIDR allocator
I0612 22:12:17.452783 1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
I0612 22:12:17.452794 1 shared_informer.go:247] Caches are synced for cidrallocator
I0612 22:12:17.615783 1 shared_informer.go:247] Caches are synced for disruption
I0612 22:12:17.615845 1 disruption.go:339] Sending events to api server.
I0612 22:12:17.622667 1 shared_informer.go:247] Caches are synced for stateful set
I0612 22:12:17.637183 1 shared_informer.go:247] Caches are synced for PVC protection
I0612 22:12:17.714967 1 shared_informer.go:247] Caches are synced for daemon sets
I0612 22:12:17.715197 1 shared_informer.go:247] Caches are synced for persistent volume
I0612 22:12:17.716701 1 shared_informer.go:247] Caches are synced for attach detach
E0612 22:12:17.719274 1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
I0612 22:12:17.726260 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-5xnmn"
I0612 22:12:17.726969 1 shared_informer.go:247] Caches are synced for taint
I0612 22:12:17.727262 1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone:
W0612 22:12:17.727475 1 node_lifecycle_controller.go:1044] Missing timestamp for Node minikube. Assuming now as a timestamp.
I0612 22:12:17.727588 1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I0612 22:12:17.727728 1 event.go:291] "Event occurred" object="minikube" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node minikube event: Registered Node minikube in Controller"
I0612 22:12:17.727788 1 taint_manager.go:187] Starting NoExecuteTaintManager
I0612 22:12:17.728309 1 request.go:655] Throttling request took 1.100971524s, request: GET:https://192.168.49.2:8443/apis/certificates.k8s.io/v1beta1?timeout=32s
I0612 22:12:17.729463 1 shared_informer.go:247] Caches are synced for expand
I0612 22:12:17.748738 1 range_allocator.go:373] Set node minikube PodCIDR to [10.244.0.0/24]
I0612 22:12:17.924898 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0612 22:12:17.935425 1 shared_informer.go:247] Caches are synced for resource quota
I0612 22:12:17.939572 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-79d2v"
I0612 22:12:17.942255 1 shared_informer.go:247] Caches are synced for HPA
I0612 22:12:18.127328 1 shared_informer.go:247] Caches are synced for garbage collector
I0612 22:12:18.143095 1 shared_informer.go:247] Caches are synced for garbage collector
I0612 22:12:18.143155 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0612 22:12:18.478349 1 shared_informer.go:240] Waiting for caches to sync for resource quota
I0612 22:12:18.478445 1 shared_informer.go:247] Caches are synced for resource quota
I0612 22:12:22.727974 1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
==> kube-proxy [1c6500f10134] <==
I0612 22:12:18.964964 1 node.go:172] Successfully retrieved node IP: 192.168.49.2
I0612 22:12:18.965021 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.49.2), assume IPv4 operation
W0612 22:12:19.002803 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I0612 22:12:19.002892 1 server_others.go:185] Using iptables Proxier.
I0612 22:12:19.003551 1 server.go:650] Version: v1.20.2
I0612 22:12:19.004119 1 conntrack.go:52] Setting nf_conntrack_max to 262144
I0612 22:12:19.004234 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0612 22:12:19.004781 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0612 22:12:19.005017 1 config.go:315] Starting service config controller
I0612 22:12:19.005114 1 shared_informer.go:240] Waiting for caches to sync for service config
I0612 22:12:19.005026 1 config.go:224] Starting endpoint slice config controller
I0612 22:12:19.005208 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0612 22:12:19.105619 1 shared_informer.go:247] Caches are synced for endpoint slice config
I0612 22:12:19.105694 1 shared_informer.go:247] Caches are synced for service config
==> kube-scheduler [354e894575b1] <==
I0612 22:11:52.664853 1 serving.go:331] Generated self-signed cert in-memory
W0612 22:11:57.625028 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0612 22:11:57.625081 1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0612 22:11:57.625126 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
W0612 22:11:57.625145 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0612 22:11:57.914789 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0612 22:11:57.915180 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0612 22:11:57.924930 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I0612 22:11:57.925721 1 tlsconfig.go:240] Starting DynamicServingCertificateController
E0612 22:11:58.022740 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0612 22:11:58.023023 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0612 22:11:58.023317 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0612 22:11:58.023634 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0612 22:11:58.023991 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0612 22:11:58.024406 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0612 22:11:58.024753 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0612 22:11:58.025105 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0612 22:11:58.025100 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0612 22:11:58.025479 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0612 22:11:58.025799 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0612 22:11:58.026203 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0612 22:11:59.026368 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0612 22:11:59.064388 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0612 22:11:59.084504 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0612 22:11:59.147739 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0612 22:11:59.201220 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0612 22:11:59.250529 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0612 22:11:59.328995 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0612 22:11:59.381636 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0612 22:11:59.493949 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0612 22:11:59.508788 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0612 22:11:59.587564 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I0612 22:12:02.515698 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
-- Logs begin at Sat 2021-06-12 22:11:28 UTC, end at Sat 2021-06-12 22:28:08 UTC. --
Jun 12 22:12:38 minikube kubelet[2565]: I0612 22:12:38.105842 2565 operation_generator.go:797] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0ed6977-a3ad-4e1e-a82b-ec88fa45e1c8-default-token-cvtkw" (OuterVolumeSpecName: "default-token-cvtkw") pod "b0ed6977-a3ad-4e1e-a82b-ec88fa45e1c8" (UID: "b0ed6977-a3ad-4e1e-a82b-ec88fa45e1c8"). InnerVolumeSpecName "default-token-cvtkw". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jun 12 22:12:38 minikube kubelet[2565]: I0612 22:12:38.203599 2565 reconciler.go:319] Volume detached for volume "default-token-cvtkw" (UniqueName: "kubernetes.io/secret/b0ed6977-a3ad-4e1e-a82b-ec88fa45e1c8-default-token-cvtkw") on node "minikube" DevicePath ""
Jun 12 22:12:38 minikube kubelet[2565]: W0612 22:12:38.985969 2565 pod_container_deletor.go:79] Container "cb6ffd7fa039beba4cd195332819f5922b36230a1afa952151a23461607b2f6c" not found in pod's containers
Jun 12 22:12:40 minikube kubelet[2565]: W0612 22:12:40.072901 2565 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/b0ed6977-a3ad-4e1e-a82b-ec88fa45e1c8/volumes" does not exist
Jun 12 22:13:10 minikube kubelet[2565]: I0612 22:13:10.033294 2565 scope.go:95] [topologymanager] RemoveContainer - Container ID: b975f30646965227d0b4fd6dd04198e224148d7ced8a39ba31d9876265772c57
Jun 12 22:13:10 minikube kubelet[2565]: I0612 22:13:10.076075 2565 scope.go:95] [topologymanager] RemoveContainer - Container ID: 2fe52dc4fb65cfe1000206d03d21f32baf21d44fde30a0c132d09941396e5f4e
Jun 12 22:14:12 minikube kubelet[2565]: I0612 22:14:12.712977 2565 topology_manager.go:187] [topologymanager] Topology Admit Handler
Jun 12 22:14:12 minikube kubelet[2565]: I0612 22:14:12.823349 2565 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-cvtkw" (UniqueName: "kubernetes.io/secret/816dc660-9a88-415c-9d84-b85bb61feb42-default-token-cvtkw") pod "busybox" (UID: "816dc660-9a88-415c-9d84-b85bb61feb42")
Jun 12 22:14:13 minikube kubelet[2565]: W0612 22:14:13.665765 2565 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/busybox through plugin: invalid network status for
Jun 12 22:14:14 minikube kubelet[2565]: W0612 22:14:14.011475 2565 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/busybox through plugin: invalid network status for
Jun 12 22:14:15 minikube kubelet[2565]: W0612 22:14:15.022965 2565 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/busybox through plugin: invalid network status for
Jun 12 22:14:15 minikube kubelet[2565]: E0612 22:14:15.080410 2565 remote_runtime.go:251] StartContainer "aa98a91cca6a76c03a007c729ae9b153eb2fb77734983063e0bf6e94e505de08" from runtime service failed: rpc error: code = Unknown desc = failed to start container "aa98a91cca6a76c03a007c729ae9b153eb2fb77734983063e0bf6e94e505de08": Error response from daemon: OCI runtime create failed: container_linux.go:367: starting container process caused: exec: "curl": executable file not found in $PATH: unknown
Jun 12 22:14:15 minikube kubelet[2565]: E0612 22:14:15.080611 2565 kuberuntime_manager.go:829] container &Container{Name:busybox,Image:busybox,Command:[curl google.com],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cvtkw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:true,StdinOnce:true,TTY:true,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod busybox_default(816dc660-9a88-415c-9d84-b85bb61feb42): RunContainerError: failed to start container "aa98a91cca6a76c03a007c729ae9b153eb2fb77734983063e0bf6e94e505de08": Error response from daemon: OCI runtime create failed: container_linux.go:367: starting container process caused: exec: "curl": executable file not found in $PATH: unknown
Jun 12 22:14:15 minikube kubelet[2565]: E0612 22:14:15.080699 2565 pod_workers.go:191] Error syncing pod 816dc660-9a88-415c-9d84-b85bb61feb42 ("busybox_default(816dc660-9a88-415c-9d84-b85bb61feb42)"), skipping: failed to "StartContainer" for "busybox" with RunContainerError: "failed to start container "aa98a91cca6a76c03a007c729ae9b153eb2fb77734983063e0bf6e94e505de08": Error response from daemon: OCI runtime create failed: container_linux.go:367: starting container process caused: exec: "curl": executable file not found in $PATH: unknown"
Jun 12 22:14:16 minikube kubelet[2565]: I0612 22:14:16.234508 2565 reconciler.go:196] operationExecutor.UnmountVolume started for volume "default-token-cvtkw" (UniqueName: "kubernetes.io/secret/816dc660-9a88-415c-9d84-b85bb61feb42-default-token-cvtkw") pod "816dc660-9a88-415c-9d84-b85bb61feb42" (UID: "816dc660-9a88-415c-9d84-b85bb61feb42")
Jun 12 22:14:16 minikube kubelet[2565]: I0612 22:14:16.239971 2565 operation_generator.go:797] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/816dc660-9a88-415c-9d84-b85bb61feb42-default-token-cvtkw" (OuterVolumeSpecName: "default-token-cvtkw") pod "816dc660-9a88-415c-9d84-b85bb61feb42" (UID: "816dc660-9a88-415c-9d84-b85bb61feb42"). InnerVolumeSpecName "default-token-cvtkw". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jun 12 22:14:16 minikube kubelet[2565]: I0612 22:14:16.334976 2565 reconciler.go:319] Volume detached for volume "default-token-cvtkw" (UniqueName: "kubernetes.io/secret/816dc660-9a88-415c-9d84-b85bb61feb42-default-token-cvtkw") on node "minikube" DevicePath ""
Jun 12 22:14:17 minikube kubelet[2565]: W0612 22:14:17.126433 2565 pod_container_deletor.go:79] Container "f5ec9f8c5e0ce8bef3c60c7bc9b7fe46f401abbd102b7a709fbe8c1238d16a67" not found in pod's containers
Jun 12 22:14:18 minikube kubelet[2565]: W0612 22:14:18.072658 2565 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/816dc660-9a88-415c-9d84-b85bb61feb42/volumes" does not exist
Jun 12 22:14:32 minikube kubelet[2565]: I0612 22:14:32.626728 2565 topology_manager.go:187] [topologymanager] Topology Admit Handler
Jun 12 22:14:32 minikube kubelet[2565]: I0612 22:14:32.790001 2565 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-cvtkw" (UniqueName: "kubernetes.io/secret/e674eaeb-9bcb-42c6-8a42-4ae763c0b31d-default-token-cvtkw") pod "busybox" (UID: "e674eaeb-9bcb-42c6-8a42-4ae763c0b31d")
Jun 12 22:14:33 minikube kubelet[2565]: W0612 22:14:33.569943 2565 pod_container_deletor.go:79] Container "a3208906e675de65a3b2114554d454517bf1cc88a2b1245a4d4c973816512628" not found in pod's containers
Jun 12 22:14:33 minikube kubelet[2565]: W0612 22:14:33.570464 2565 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/busybox through plugin: invalid network status for
Jun 12 22:14:34 minikube kubelet[2565]: W0612 22:14:34.586008 2565 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/busybox through plugin: invalid network status for
Jun 12 22:14:35 minikube kubelet[2565]: W0612 22:14:35.604881 2565 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/busybox through plugin: invalid network status for
Jun 12 22:14:35 minikube kubelet[2565]: I0612 22:14:35.611826 2565 scope.go:95] [topologymanager] RemoveContainer - Container ID: 290ff0f288f7abbc2619938eee4a39b450da1fea246aad744eb416d60e22f58d
Jun 12 22:14:35 minikube kubelet[2565]: I0612 22:14:35.699218 2565 reconciler.go:196] operationExecutor.UnmountVolume started for volume "default-token-cvtkw" (UniqueName: "kubernetes.io/secret/e674eaeb-9bcb-42c6-8a42-4ae763c0b31d-default-token-cvtkw") pod "e674eaeb-9bcb-42c6-8a42-4ae763c0b31d" (UID: "e674eaeb-9bcb-42c6-8a42-4ae763c0b31d")
Jun 12 22:14:35 minikube kubelet[2565]: I0612 22:14:35.702734 2565 operation_generator.go:797] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e674eaeb-9bcb-42c6-8a42-4ae763c0b31d-default-token-cvtkw" (OuterVolumeSpecName: "default-token-cvtkw") pod "e674eaeb-9bcb-42c6-8a42-4ae763c0b31d" (UID: "e674eaeb-9bcb-42c6-8a42-4ae763c0b31d"). InnerVolumeSpecName "default-token-cvtkw". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jun 12 22:14:35 minikube kubelet[2565]: I0612 22:14:35.799601 2565 reconciler.go:319] Volume detached for volume "default-token-cvtkw" (UniqueName: "kubernetes.io/secret/e674eaeb-9bcb-42c6-8a42-4ae763c0b31d-default-token-cvtkw") on node "minikube" DevicePath ""
Jun 12 22:14:36 minikube kubelet[2565]: W0612 22:14:36.071640 2565 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/e674eaeb-9bcb-42c6-8a42-4ae763c0b31d/volumes" does not exist
Jun 12 22:14:36 minikube kubelet[2565]: W0612 22:14:36.635628 2565 pod_container_deletor.go:79] Container "a3208906e675de65a3b2114554d454517bf1cc88a2b1245a4d4c973816512628" not found in pod's containers
Jun 12 22:15:10 minikube kubelet[2565]: I0612 22:15:10.223398 2565 scope.go:95] [topologymanager] RemoveContainer - Container ID: 290ff0f288f7abbc2619938eee4a39b450da1fea246aad744eb416d60e22f58d
Jun 12 22:15:10 minikube kubelet[2565]: I0612 22:15:10.261528 2565 scope.go:95] [topologymanager] RemoveContainer - Container ID: aa98a91cca6a76c03a007c729ae9b153eb2fb77734983063e0bf6e94e505de08
Jun 12 22:15:11 minikube kubelet[2565]: W0612 22:15:11.046742 2565 pod_container_deletor.go:79] Container "aa98a91cca6a76c03a007c729ae9b153eb2fb77734983063e0bf6e94e505de08" not found in pod's containers
Jun 12 22:16:31 minikube kubelet[2565]: I0612 22:16:31.290739 2565 topology_manager.go:187] [topologymanager] Topology Admit Handler
Jun 12 22:16:31 minikube kubelet[2565]: I0612 22:16:31.407712 2565 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-cvtkw" (UniqueName: "kubernetes.io/secret/526cc68b-311f-4bf7-98ef-008d1bdafa36-default-token-cvtkw") pod "busybox" (UID: "526cc68b-311f-4bf7-98ef-008d1bdafa36")
Jun 12 22:16:32 minikube kubelet[2565]: W0612 22:16:32.185562 2565 pod_container_deletor.go:79] Container "c765953fb20df9511007d0b062b29a07793bcbd69ff82f033ceb58fdddc942a5" not found in pod's containers
Jun 12 22:16:32 minikube kubelet[2565]: W0612 22:16:32.185862 2565 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/busybox through plugin: invalid network status for
Jun 12 22:16:33 minikube kubelet[2565]: W0612 22:16:33.199599 2565 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/busybox through plugin: invalid network status for
Jun 12 22:16:40 minikube kubelet[2565]: W0612 22:16:40.346614 2565 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/busybox through plugin: invalid network status for
Jun 12 22:16:41 minikube kubelet[2565]: W0612 22:16:41.559581 2565 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/busybox through plugin: invalid network status for
Jun 12 22:16:41 minikube kubelet[2565]: I0612 22:16:41.742509 2565 reconciler.go:196] operationExecutor.UnmountVolume started for volume "default-token-cvtkw" (UniqueName: "kubernetes.io/secret/526cc68b-311f-4bf7-98ef-008d1bdafa36-default-token-cvtkw") pod "526cc68b-311f-4bf7-98ef-008d1bdafa36" (UID: "526cc68b-311f-4bf7-98ef-008d1bdafa36")
Jun 12 22:16:41 minikube kubelet[2565]: I0612 22:16:41.748189 2565 operation_generator.go:797] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/526cc68b-311f-4bf7-98ef-008d1bdafa36-default-token-cvtkw" (OuterVolumeSpecName: "default-token-cvtkw") pod "526cc68b-311f-4bf7-98ef-008d1bdafa36" (UID: "526cc68b-311f-4bf7-98ef-008d1bdafa36"). InnerVolumeSpecName "default-token-cvtkw". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jun 12 22:16:41 minikube kubelet[2565]: I0612 22:16:41.842947 2565 reconciler.go:319] Volume detached for volume "default-token-cvtkw" (UniqueName: "kubernetes.io/secret/526cc68b-311f-4bf7-98ef-008d1bdafa36-default-token-cvtkw") on node "minikube" DevicePath ""
Jun 12 22:16:42 minikube kubelet[2565]: W0612 22:16:42.072494 2565 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/526cc68b-311f-4bf7-98ef-008d1bdafa36/volumes" does not exist
Jun 12 22:16:42 minikube kubelet[2565]: W0612 22:16:42.591220 2565 pod_container_deletor.go:79] Container "c765953fb20df9511007d0b062b29a07793bcbd69ff82f033ceb58fdddc942a5" not found in pod's containers
Jun 12 22:17:10 minikube kubelet[2565]: I0612 22:17:10.400708 2565 scope.go:95] [topologymanager] RemoveContainer - Container ID: 42a2fdde8c6686d35ba40bebf98961d2460de1edd3c0775945a2d2b473995d85
Jun 12 22:23:50 minikube kubelet[2565]: I0612 22:23:50.141574 2565 topology_manager.go:187] [topologymanager] Topology Admit Handler
Jun 12 22:23:50 minikube kubelet[2565]: I0612 22:23:50.291756 2565 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-cvtkw" (UniqueName: "kubernetes.io/secret/46735029-d9b3-4a13-af68-4168b570b317-default-token-cvtkw") pod "busybox" (UID: "46735029-d9b3-4a13-af68-4168b570b317")
Jun 12 22:23:51 minikube kubelet[2565]: W0612 22:23:51.104887 2565 pod_container_deletor.go:79] Container "5097e60d2f5a55cf133bc9f1fb752817b6f8ff8299cdb0763d7650289f06ba5d" not found in pod's containers
Jun 12 22:23:51 minikube kubelet[2565]: W0612 22:23:51.105503 2565 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/busybox through plugin: invalid network status for
Jun 12 22:23:52 minikube kubelet[2565]: W0612 22:23:52.128694 2565 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/busybox through plugin: invalid network status for
Jun 12 22:23:53 minikube kubelet[2565]: W0612 22:23:53.145307 2565 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/busybox through plugin: invalid network status for
Jun 12 22:23:53 minikube kubelet[2565]: I0612 22:23:53.151878 2565 scope.go:95] [topologymanager] RemoveContainer - Container ID: fd5a9796d33a9ae99b585b01a545112d5dc1e856a705b56e8815e484c40c23d5
Jun 12 22:23:53 minikube kubelet[2565]: I0612 22:23:53.300953 2565 reconciler.go:196] operationExecutor.UnmountVolume started for volume "default-token-cvtkw" (UniqueName: "kubernetes.io/secret/46735029-d9b3-4a13-af68-4168b570b317-default-token-cvtkw") pod "46735029-d9b3-4a13-af68-4168b570b317" (UID: "46735029-d9b3-4a13-af68-4168b570b317")
Jun 12 22:23:53 minikube kubelet[2565]: I0612 22:23:53.303798 2565 operation_generator.go:797] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46735029-d9b3-4a13-af68-4168b570b317-default-token-cvtkw" (OuterVolumeSpecName: "default-token-cvtkw") pod "46735029-d9b3-4a13-af68-4168b570b317" (UID: "46735029-d9b3-4a13-af68-4168b570b317"). InnerVolumeSpecName "default-token-cvtkw". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jun 12 22:23:53 minikube kubelet[2565]: I0612 22:23:53.401330 2565 reconciler.go:319] Volume detached for volume "default-token-cvtkw" (UniqueName: "kubernetes.io/secret/46735029-d9b3-4a13-af68-4168b570b317-default-token-cvtkw") on node "minikube" DevicePath ""
Jun 12 22:23:54 minikube kubelet[2565]: W0612 22:23:54.071557 2565 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/46735029-d9b3-4a13-af68-4168b570b317/volumes" does not exist
Jun 12 22:23:54 minikube kubelet[2565]: W0612 22:23:54.172782 2565 pod_container_deletor.go:79] Container "5097e60d2f5a55cf133bc9f1fb752817b6f8ff8299cdb0763d7650289f06ba5d" not found in pod's containers
Jun 12 22:24:10 minikube kubelet[2565]: I0612 22:24:10.569026 2565 scope.go:95] [topologymanager] RemoveContainer - Container ID: fd5a9796d33a9ae99b585b01a545112d5dc1e856a705b56e8815e484c40c23d5
==> storage-provisioner [fc4f50e60692] <==
I0612 22:12:27.560836 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0612 22:12:27.573720 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0612 22:12:27.573760 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0612 22:12:27.583017 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0612 22:12:27.583071 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8d394ff4-de43-4213-9917-1d4e872e89fc", APIVersion:"v1", ResourceVersion:"467", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_9487ce1c-c4e7-40cb-bc3d-8a773325a3f6 became leader
I0612 22:12:27.583185 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_minikube_9487ce1c-c4e7-40cb-bc3d-8a773325a3f6!
I0612 22:12:27.683451 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_minikube_9487ce1c-c4e7-40cb-bc3d-8a773325a3f6!
Note: Output of commands other than minikube start is placed below the command.
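As an aside on the kubelet errors above (around 22:14:15): the busybox test pod fails with exec: "curl": executable file not found in $PATH because the stock busybox image does not include curl. A minimal sketch of an in-cluster DNS check that avoids curl, assuming a throwaway pod name of dns-test (not taken from the original report):

kubectl run dns-test --image=busybox --restart=Never --rm -it -- nslookup google.com
kubectl run dns-test --image=busybox --restart=Never --rm -it -- wget -qO- http://google.com

Both nslookup and wget are shipped in the busybox image, so these exercise the same in-cluster resolution path (CoreDNS and the name server written into the pod's /etc/resolv.conf) without requiring curl.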