$TMPDIR fills with log files - change logging to create less inodes (add retention) #9156

Closed
limberger opened this issue Sep 2, 2020 · 18 comments · Fixed by #10425
Labels: kind/cleanup (categorizes issue or PR as related to cleaning up code, process, or technical debt) · priority/important-soon (must be staffed and worked on either currently, or very soon, ideally in time for the next release)

Comments


limberger commented Sep 2, 2020

Steps to reproduce the issue:

  1. minikube start
  2. After some time (less than 24 hours), $TMPDIR contains tens of thousands of minikube log files:
/tmp> ls /tmp/minikube* | wc -l
-sh: /bin/ls: Argument list too long
/tmp> ls | wc -l
40622

A few minutes later, the count has grown further:

/tmp> find . -name 'minikube.evvdi021001054.c1316520.log*'  | wc -l
41049
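
To confirm that the problem is inode exhaustion rather than disk space, the filesystem backing $TMPDIR can be checked directly (a quick sketch, assuming $TMPDIR is /tmp as on this machine):

```
# Free inodes vs. free space on the filesystem holding $TMPDIR:
df -i /tmp   # IFree drops toward 0 as the small log files accumulate
df -h /tmp   # disk space itself can still look healthy

# Count the files without expanding a huge glob (avoids "Argument list too long"):
find /tmp -maxdepth 1 -name 'minikube.*' | wc -l
```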

Sample log file:

 cat minikube.evvdi021001054.user.log.INFO.20200902-081307.16605
Log file created at: 2020/09/02 08:13:07
Running on machine: evvdi021001054
Binary: Built with gc go1.14.6 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0902 08:13:07.807249   16605 translate.go:106] Setting Language to pt-BR ...
I0902 08:13:07.807616   16605 translate.go:88] Failed to load translation file for pt: Asset translations/pt.json not found
I0902 08:13:07.807902   16605 out.go:190] Setting OutFile to fd 1 ...
I0902 08:13:07.807926   16605 out.go:237] isatty.IsTerminal(1) = false
I0902 08:13:07.807932   16605 out.go:203] Setting ErrFile to fd 2...
I0902 08:13:07.807940   16605 out.go:237] isatty.IsTerminal(2) = false
I0902 08:13:07.808120   16605 root.go:272] Updating PATH: /home/user/.minikube/bin
I0902 08:13:07.808443   16605 cli_runner.go:109] Run: docker ps -a --filter label=name.minikube.sigs.k8s.io --format {{.Names}}
I0902 08:13:07.890385   16605 main.go:115] libmachine: COMMAND: /usr/bin/VBoxManage showvminfo minikube --machinereadable
I0902 08:13:08.044881   16605 main.go:115] libmachine: STDOUT:
{
name="minikube"
groups="/"
ostype="Linux 2.6 / 3.x / 4.x (64-bit)"
UUID="f354216b-61a8-4f88-adaf-5aef64c09e3e"
CfgFile="/home/user/.minikube/machines/minikube/minikube/minikube.vbox"
SnapFldr="/home/user/.minikube/machines/minikube/minikube/Snapshots"
LogFldr="/home/user/.minikube/machines/minikube/minikube/Logs"
hardwareuuid="f354216b-61a8-4f88-adaf-5aef64c09e3e"
memory=4000
pagefusion="off"
vram=8
cpuexecutioncap=100
hpet="on"
cpu-profile="host"
chipset="piix3"
firmware="BIOS"
cpus=2
pae="on"
longmode="on"
triplefaultreset="off"
apic="on"
x2apic="off"
nested-hw-virt="off"
cpuid-portability-level=0
bootmenu="disabled"
boot1="dvd"
boot2="dvd"
boot3="disk"
boot4="none"
acpi="on"
ioapic="on"
biosapic="apic"
biossystemtimeoffset=0
rtcuseutc="on"
hwvirtex="on"
nestedpaging="on"
largepages="on"
vtxvpid="on"
vtxux="on"
paravirtprovider="default"
effparavirtprovider="kvm"
VMState="running"
VMStateChangeTime="2020-09-02T11:02:49.675000000"
graphicscontroller="vboxvga"
monitorcount=1
accelerate3d="off"
accelerate2dvideo="off"
teleporterenabled="off"
teleporterport=0
teleporteraddress=""
teleporterpassword=""
tracing-enabled="off"
tracing-allow-vm-access="off"
tracing-config=""
autostart-enabled="off"
autostart-delay=0
defaultfrontend=""
storagecontrollername0="SATA"
storagecontrollertype0="IntelAhci"
storagecontrollerinstance0="0"
storagecontrollermaxportcount0="30"
storagecontrollerportcount0="30"
storagecontrollerbootable0="on"
"SATA-0-0"="/home/user/.minikube/machines/minikube/boot2docker.iso"
"SATA-ImageUUID-0-0"="6c9f1868-0c49-41c9-8d27-82904d243aa0"
"SATA-tempeject"="off"
"SATA-IsEjected"="off"
"SATA-1-0"="/home/user/.minikube/machines/minikube/disk.vmdk"
"SATA-ImageUUID-1-0"="aa799b3d-819b-4f73-82c3-a0fa1f079290"
"SATA-2-0"="none"
"SATA-3-0"="none"
"SATA-4-0"="none"
"SATA-5-0"="none"
"SATA-6-0"="none"
"SATA-7-0"="none"
"SATA-8-0"="none"
"SATA-9-0"="none"
"SATA-10-0"="none"
"SATA-11-0"="none"
"SATA-12-0"="none"
"SATA-13-0"="none"
"SATA-14-0"="none"
"SATA-15-0"="none"
"SATA-16-0"="none"
"SATA-17-0"="none"
"SATA-18-0"="none"
"SATA-19-0"="none"
"SATA-20-0"="none"
"SATA-21-0"="none"
"SATA-22-0"="none"
"SATA-23-0"="none"
"SATA-24-0"="none"
"SATA-25-0"="none"
"SATA-26-0"="none"
"SATA-27-0"="none"
"SATA-28-0"="none"
"SATA-29-0"="none"
natnet1="nat"
macaddress1="080027D180B9"
cableconnected1="on"
nic1="nat"
nictype1="virtio"
nicspeed1="0"
mtu="0"
sockSnd="64"
sockRcv="64"
tcpWndSnd="64"
tcpWndRcv="64"
Forwarding(0)="ssh,tcp,127.0.0.1,34901,,22"
hostonlyadapter2="vboxnet0"
macaddress2="080027EEA9CB"
cableconnected2="on"
nic2="hostonly"
nictype2="virtio"
nicspeed2="0"
nic3="none"
nic4="none"
nic5="none"
nic6="none"
nic7="none"
nic8="none"
hidpointing="ps2mouse"
hidkeyboard="ps2kbd"
uart1="off"
uart2="off"
uart3="off"
uart4="off"
lpt1="off"
lpt2="off"
audio="pulse"
audio_out="off"
audio_in="off"
clipboard="disabled"
draganddrop="disabled"
SessionName="headless"
VideoMode="720,400,0"@0,0 1
vrde="off"
usb="off"
ehci="off"
xhci="off"
SharedFolderNameMachineMapping1="hosthome"
SharedFolderPathMachineMapping1="/home"
VRDEActiveConnection="off"
VRDEClients==0
videocap="off"
videocapaudio="off"
capturescreens=""
capturefilename="/home/user/.minikube/machines/minikube/minikube/minikube.webm"
captureres="1024x768"
capturevideorate=512
capturevideofps=25
captureopts=""
GuestMemoryBalloon=0
GuestOSType="Linux26_64"
GuestAdditionsRunLevel=2
GuestAdditionsVersion="5.2.42 r137960"
GuestAdditionsFacility_VirtualBox Base Driver=50,1599044600198
GuestAdditionsFacility_VirtualBox System Service=50,1599044600959
GuestAdditionsFacility_Seamless Mode=0,1599044600182
GuestAdditionsFacility_Graphics Mode=0,1599044600182
}
I0902 08:13:08.044940   16605 main.go:115] libmachine: STDERR:
{
}
I0902 08:13:08.054330   16605 out.go:105] {"invalid":[],"valid":[{"Name":"minikube","Status":"Running","Config":{"Name":"minikube","KeepContext":false,"EmbedCerts":false,"MinikubeISO":"https://storage.googleapis.com/minikube/iso/minikube-v1.12.2.iso","KicBaseImage":"gcr.io/k8s-minikube/kicbase:v0.0.11@sha256:6fee59db7d67ed8ae6835e4bcb02f32056dc95f11cb369c51e352b62dd198aa0","Memory":4000,"CPUs":2,"DiskSize":20000,"VMDriver":"","Driver":"virtualbox","HyperkitVpnKitSock":"","HyperkitVSockPorts":[],"DockerEnv":[],"InsecureRegistry":null,"RegistryMirror":null,"HostOnlyCIDR":"192.168.99.1/24","HypervVirtualSwitch":"","HypervUseExternalSwitch":false,"HypervExternalAdapter":"","KVMNetwork":"default","KVMQemuURI":"qemu:///system","KVMGPU":false,"KVMHidden":false,"DockerOpt":null,"DisableDriverMounts":false,"NFSShare":[],"NFSSharesRoot":"/nfsshares","UUID":"","NoVTXCheck":false,"DNSProxy":false,"HostDNSResolver":true,"HostOnlyNicType":"virtio","NatNicType":"virtio","KubernetesConfig":{"KubernetesVersion":"v1.18.3","ClusterName":"minikube","APIServerName":"minikubeCA","APIServerNames":null,"APIServerIPs":null,"DNSDomain":"cluster.local","ContainerRuntime":"docker","CRISocket":"","NetworkPlugin":"","FeatureGates":"","ServiceCIDR":"10.96.0.0/12","ImageRepository":"","LoadBalancerStartIP":"","LoadBalancerEndIP":"","ExtraOptions":null,"ShouldLoadCachedImages":true,"EnableDefaultCNI":false,"CNI":"","NodeIP":"","NodePort":8443,"NodeName":""},"Nodes":[{"Name":"","IP":"192.168.99.109","Port":8443,"KubernetesVersion":"v1.18.3","ControlPlane":true,"Worker":true}],"Addons":{"default-storageclass":true,"storage-provisioner":true},"VerifyComponents":{"apiserver":true,"system_pods":true},"StartHostTimeout":360000000000}}]}
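
Each minikube invocation (including background status checks from shells or IDEs) appears to write a fresh set of per-severity glog files like the sample above, and nothing ever deletes them. Until minikube adds retention to its logging, one possible interim cleanup is to prune old files by age; this is only a sketch, assuming the default /tmp location and the minikube.<hostname>.* naming shown above:

```
# Delete minikube glog files in $TMPDIR last modified more than 24 hours ago.
# Run while no minikube command is active, so the log file of a live process
# is not removed out from under it. -type f skips glog's convenience symlinks.
find "${TMPDIR:-/tmp}" -maxdepth 1 -type f -name 'minikube.*.log.*' -mmin +1440 -delete
```

The same rule could be run from cron or expressed as a systemd-tmpfiles age policy; the proper fix tracked by this issue is for minikube to cap or expire its own log files.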

Full output of minikube start command used, if not already included:

/tmp> minikube start
* minikube v1.12.3 on "Sled" "15-SP1"
* Automatically selected the docker driver

! 'docker' driver reported a issue that could affect the performance.
* Suggestion: enable overlayfs kernel module on your Linux

* Starting control plane node minikube in cluster minikube
* Pulling base image ...
E0902 08:01:46.435741    2868 cache.go:175] Error downloading kic artifacts:  failed to download kic base image or any fallback image
* Creating docker container (CPUs=2, Memory=4000MB) ...
! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for minikube container: docker run --rm --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.11@sha256:6fee59db7d67ed8ae6835e4bcb02f32056dc95f11cb369c51e352b62dd198aa0 -d /var/lib: exit status 125
stdout:

stderr:
Unable to find image 'gcr.io/k8s-minikube/kicbase:v0.0.11@sha256:6fee59db7d67ed8ae6835e4bcb02f32056dc95f11cb369c51e352b62dd198aa0' locally
docker: Error response from daemon: Get https://gcr.io/v2/: read tcp 172.16.34.2:59014->172.217.192.82:443: read: connection reset by peer.
See 'docker run --help'.

* docker "minikube" container is missing, will recreate.
* Creating docker container (CPUs=2, Memory=4000MB) ...
* Failed to start docker container. "minikube start" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for minikube container: docker run --rm --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.11@sha256:6fee59db7d67ed8ae6835e4bcb02f32056dc95f11cb369c51e352b62dd198aa0 -d /var/lib: exit status 125
stdout:

stderr:
Unable to find image 'gcr.io/k8s-minikube/kicbase:v0.0.11@sha256:6fee59db7d67ed8ae6835e4bcb02f32056dc95f11cb369c51e352b62dd198aa0' locally
docker: Error response from daemon: Get https://gcr.io/v2/: read tcp 172.16.34.2:59016->172.217.192.82:443: read: connection reset by peer.
See 'docker run --help'.

! Startup with docker driver failed, trying with alternate driver virtualbox: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for minikube container: docker run --rm --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.11@sha256:6fee59db7d67ed8ae6835e4bcb02f32056dc95f11cb369c51e352b62dd198aa0 -d /var/lib: exit status 125
stdout:

stderr:
Unable to find image 'gcr.io/k8s-minikube/kicbase:v0.0.11@sha256:6fee59db7d67ed8ae6835e4bcb02f32056dc95f11cb369c51e352b62dd198aa0' locally
docker: Error response from daemon: Get https://gcr.io/v2/: read tcp 172.16.34.2:59016->172.217.192.82:443: read: connection reset by peer.
See 'docker run --help'.

* Deleting "minikube" in docker ...
* Removing /home/user/.minikube/machines/minikube ...
* Removed all traces of the "minikube" cluster.
* Starting control plane node minikube in cluster minikube
E0902 08:02:32.411126    2868 cache.go:175] Error downloading kic artifacts:  failed to download kic base image or any fallback image
* Creating virtualbox VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
! This VM is having trouble accessing https://k8s.gcr.io
* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
* Preparing Kubernetes v1.18.3 on Docker 19.03.12 ...
* Verifying Kubernetes components...
* Enabled addons: default-storageclass, storage-provisioner
* Done! kubectl is now configured to use "minikube"


! /usr/bin/kubectl is version 1.16.2, which may be incompatible with Kubernetes 1.18.3.
* You can also use 'minikube kubectl -- get pods' to invoke a matching version

Optional: Full output of minikube logs command:

```
/tmp> minikube logs
* ==> Docker <==
* -- Logs begin at Wed 2020-09-02 11:03:20 UTC, end at Wed 2020-09-02 11:10:52 UTC. --
* Sep 02 11:04:20 minikube dockerd[4062]: time="2020-09-02T11:04:20.481303522Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
* Sep 02 11:04:20 minikube dockerd[4062]: time="2020-09-02T11:04:20.481321053Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
* Sep 02 11:04:20 minikube dockerd[4062]: time="2020-09-02T11:04:20.481335261Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
* Sep 02 11:04:20 minikube dockerd[4062]: time="2020-09-02T11:04:20.481367589Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
* Sep 02 11:04:20 minikube dockerd[4062]: time="2020-09-02T11:04:20.481519332Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
* Sep 02 11:04:20 minikube dockerd[4062]: time="2020-09-02T11:04:20.481702400Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
* Sep 02 11:04:20 minikube dockerd[4062]: time="2020-09-02T11:04:20.483497112Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
* Sep 02 11:04:20 minikube dockerd[4062]: time="2020-09-02T11:04:20.483717628Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
* Sep 02 11:04:20 minikube dockerd[4062]: time="2020-09-02T11:04:20.483842014Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
* Sep 02 11:04:20 minikube dockerd[4062]: time="2020-09-02T11:04:20.484565309Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
* Sep 02 11:04:20 minikube dockerd[4062]: time="2020-09-02T11:04:20.484715743Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
* Sep 02 11:04:20 minikube dockerd[4062]: time="2020-09-02T11:04:20.484734864Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
* Sep 02 11:04:20 minikube dockerd[4062]: time="2020-09-02T11:04:20.484778095Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
* Sep 02 11:04:20 minikube dockerd[4062]: time="2020-09-02T11:04:20.484798714Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
* Sep 02 11:04:20 minikube dockerd[4062]: time="2020-09-02T11:04:20.484812351Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
* Sep 02 11:04:20 minikube dockerd[4062]: time="2020-09-02T11:04:20.484857097Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
* Sep 02 11:04:20 minikube dockerd[4062]: time="2020-09-02T11:04:20.484909313Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
* Sep 02 11:04:20 minikube dockerd[4062]: time="2020-09-02T11:04:20.485026794Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
* Sep 02 11:04:20 minikube dockerd[4062]: time="2020-09-02T11:04:20.485291497Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
* Sep 02 11:04:20 minikube dockerd[4062]: time="2020-09-02T11:04:20.485350248Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
* Sep 02 11:04:20 minikube dockerd[4062]: time="2020-09-02T11:04:20.485392622Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
* Sep 02 11:04:20 minikube dockerd[4062]: time="2020-09-02T11:04:20.485970396Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock"
* Sep 02 11:04:20 minikube dockerd[4062]: time="2020-09-02T11:04:20.486185151Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock"
* Sep 02 11:04:20 minikube dockerd[4062]: time="2020-09-02T11:04:20.486204676Z" level=info msg="containerd successfully booted in 0.015291s"
* Sep 02 11:04:20 minikube dockerd[4062]: time="2020-09-02T11:04:20.510947840Z" level=info msg="parsed scheme: \"unix\"" module=grpc
* Sep 02 11:04:20 minikube dockerd[4062]: time="2020-09-02T11:04:20.511115335Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
* Sep 02 11:04:20 minikube dockerd[4062]: time="2020-09-02T11:04:20.511164471Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 }] }" module=grpc
* Sep 02 11:04:20 minikube dockerd[4062]: time="2020-09-02T11:04:20.511184889Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
* Sep 02 11:04:20 minikube dockerd[4062]: time="2020-09-02T11:04:20.514386979Z" level=info msg="parsed scheme: \"unix\"" module=grpc
* Sep 02 11:04:20 minikube dockerd[4062]: time="2020-09-02T11:04:20.514652342Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
* Sep 02 11:04:20 minikube dockerd[4062]: time="2020-09-02T11:04:20.514821047Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 }] }" module=grpc
* Sep 02 11:04:20 minikube dockerd[4062]: time="2020-09-02T11:04:20.514956865Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
* Sep 02 11:04:20 minikube dockerd[4062]: time="2020-09-02T11:04:20.842591416Z" level=warning msg="Your kernel does not support cgroup blkio weight"
* Sep 02 11:04:20 minikube dockerd[4062]: time="2020-09-02T11:04:20.842986073Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
* Sep 02 11:04:20 minikube dockerd[4062]: time="2020-09-02T11:04:20.843108879Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
* Sep 02 11:04:20 minikube dockerd[4062]: time="2020-09-02T11:04:20.843204210Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
* Sep 02 11:04:20 minikube dockerd[4062]: time="2020-09-02T11:04:20.843388429Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
* Sep 02 11:04:20 minikube dockerd[4062]: time="2020-09-02T11:04:20.843513003Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
* Sep 02 11:04:20 minikube dockerd[4062]: time="2020-09-02T11:04:20.846081130Z" level=info msg="Loading containers: start."
* Sep 02 11:04:21 minikube dockerd[4062]: time="2020-09-02T11:04:21.182081858Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
* Sep 02 11:04:21 minikube dockerd[4062]: time="2020-09-02T11:04:21.310756038Z" level=info msg="Loading containers: done."
* Sep 02 11:04:21 minikube dockerd[4062]: time="2020-09-02T11:04:21.359596525Z" level=info msg="Docker daemon" commit=48a66213fe graphdriver(s)=overlay2 version=19.03.12
* Sep 02 11:04:21 minikube dockerd[4062]: time="2020-09-02T11:04:21.360123299Z" level=info msg="Daemon has completed initialization"
* Sep 02 11:04:21 minikube dockerd[4062]: time="2020-09-02T11:04:21.427390283Z" level=info msg="API listen on /var/run/docker.sock"
* Sep 02 11:04:21 minikube systemd[1]: Started Docker Application Container Engine.
* Sep 02 11:04:21 minikube dockerd[4062]: time="2020-09-02T11:04:21.433676017Z" level=info msg="API listen on [::]:2376"
* Sep 02 11:04:41 minikube dockerd[4062]: time="2020-09-02T11:04:41.745016238Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/e8047b055a2eccf9407ff79731e4b87d0ea8d50ac36a38b4b3f92ccd1aabb95a/shim.sock" debug=false pid=4948
* Sep 02 11:04:41 minikube dockerd[4062]: time="2020-09-02T11:04:41.808047453Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d2f870d98edcfaee77e55f852ed4f06ec076da157013683afde4f61230a77392/shim.sock" debug=false pid=4974
* Sep 02 11:04:42 minikube dockerd[4062]: time="2020-09-02T11:04:42.281259638Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ba064bb942b9a4d170541605147c6844e54a36d2e692d43b6d6f8cb44860e6d6/shim.sock" debug=false pid=5083
* Sep 02 11:04:42 minikube dockerd[4062]: time="2020-09-02T11:04:42.563655378Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/5dc3e869adfae9872aeda80c13214475fe6f83936f35e53f84e528e50da9fe90/shim.sock" debug=false pid=5131
* Sep 02 11:04:42 minikube dockerd[4062]: time="2020-09-02T11:04:42.670382214Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ccbf0d624030890bebe39f9be9db70bb17cdfbf0adb6709d6b6b7d69ea5ccc51/shim.sock" debug=false pid=5154
* Sep 02 11:04:42 minikube dockerd[4062]: time="2020-09-02T11:04:42.745856619Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d2cf966a8b06072387909c355d4d09476d705583a5517d56fe2261a48252003f/shim.sock" debug=false pid=5174
* Sep 02 11:04:42 minikube dockerd[4062]: time="2020-09-02T11:04:42.960951603Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b8242fe570b1a79d6f8e311cc045c5f98e452b409c1aeb2794c1a4ea732e200d/shim.sock" debug=false pid=5237
* Sep 02 11:04:43 minikube dockerd[4062]: time="2020-09-02T11:04:43.677068999Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/88075e13fb845a91e49b2ac30a2296944eb9d027108852fa8561389d81cd4c17/shim.sock" debug=false pid=5341
* Sep 02 11:05:12 minikube dockerd[4062]: time="2020-09-02T11:05:12.269370105Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/127b449001d4aa2b152f94839d9a14dc77edb7f741a07a4da41750c27e916f99/shim.sock" debug=false pid=6045
* Sep 02 11:05:14 minikube dockerd[4062]: time="2020-09-02T11:05:14.753658967Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/92249ee70f87cdae76715a84dcdc5254b9c141d50f934c7ab06c009c7b3db08f/shim.sock" debug=false pid=6090
* Sep 02 11:05:24 minikube dockerd[4062]: time="2020-09-02T11:05:24.498428885Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a437225656ca6ba0762769f431c756be96c0322e7a3efe89a9108d771c1d6d13/shim.sock" debug=false pid=6286
* Sep 02 11:05:24 minikube dockerd[4062]: time="2020-09-02T11:05:24.516664773Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/24e8bfce834c689b256d2f816c52044548f871819304847a70bb690ab53fd5a4/shim.sock" debug=false pid=6287
* Sep 02 11:05:25 minikube dockerd[4062]: time="2020-09-02T11:05:25.276692077Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8ab3858b6cdb91b479630d398718273926124ae6213324bb4c80555d24d18b5b/shim.sock" debug=false pid=6378
* Sep 02 11:05:25 minikube dockerd[4062]: time="2020-09-02T11:05:25.860708539Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/25d3c5743e02395b54f701874bf92f83c4ac27dac58de5043bb448149b24a737/shim.sock" debug=false pid=6429
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
* 25d3c5743e023 67da37a9a360e 5 minutes ago Running coredns 0 a437225656ca6
* 8ab3858b6cdb9 9c3ca9f065bb1 5 minutes ago Running storage-provisioner 0 24e8bfce834c6
* 92249ee70f87c 3439b7546f29b 5 minutes ago Running kube-proxy 0 127b449001d4a
* 88075e13fb845 303ce5db0e90d 6 minutes ago Running etcd 0 5dc3e869adfae
* b8242fe570b1a 76216c34ed0c7 6 minutes ago Running kube-scheduler 0 ba064bb942b9a
* d2cf966a8b060 da26705ccb4b5 6 minutes ago Running kube-controller-manager 0 d2f870d98edcf
* ccbf0d6240308 7e28efa976bd1 6 minutes ago Running kube-apiserver 0 e8047b055a2ec
*
* ==> coredns [25d3c5743e02] <==
* .:53
* [INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
* CoreDNS-1.6.7
* linux/amd64, go1.13.6, da7f65b
*
* ==> describe nodes <==
* Name: minikube
* Roles: master
* Labels: beta.kubernetes.io/arch=amd64
* beta.kubernetes.io/os=linux
* kubernetes.io/arch=amd64
* kubernetes.io/hostname=minikube
* kubernetes.io/os=linux
* minikube.k8s.io/commit=2243b4b97c131e3244c5f014faedca0d846599f5-dirty
* minikube.k8s.io/name=minikube
* minikube.k8s.io/updated_at=2020_09_02T08_05_02_0700
* minikube.k8s.io/version=v1.12.3
* node-role.kubernetes.io/master=
* Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
* node.alpha.kubernetes.io/ttl: 0
* volumes.kubernetes.io/controller-managed-attach-detach: true
* CreationTimestamp: Wed, 02 Sep 2020 11:04:58 +0000
* Taints:
* Unschedulable: false
* Lease:
* HolderIdentity: minikube
* AcquireTime:
* RenewTime: Wed, 02 Sep 2020 11:10:50 +0000
* Conditions:
* Type Status LastHeartbeatTime LastTransitionTime Reason Message
* ---- ------ ----------------- ------------------ ------ -------
* MemoryPressure False Wed, 02 Sep 2020 11:07:43 +0000 Wed, 02 Sep 2020 11:04:58 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
* DiskPressure False Wed, 02 Sep 2020 11:07:43 +0000 Wed, 02 Sep 2020 11:04:58 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
* PIDPressure False Wed, 02 Sep 2020 11:07:43 +0000 Wed, 02 Sep 2020 11:04:58 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
* Ready True Wed, 02 Sep 2020 11:07:43 +0000 Wed, 02 Sep 2020 11:05:20 +0000 KubeletReady kubelet is posting ready status
* Addresses:
* InternalIP: 192.168.99.109
* Hostname: minikube
* Capacity:
* cpu: 2
* ephemeral-storage: 17784752Ki
* hugepages-2Mi: 0
* memory: 3934852Ki
* pods: 110
* Allocatable:
* cpu: 2
* ephemeral-storage: 17784752Ki
* hugepages-2Mi: 0
* memory: 3934852Ki
* pods: 110
* System Info:
* Machine ID: 8fb9096840ae4db78deebfb856c9e49d
* System UUID: f354216b-61a8-4f88-adaf-5aef64c09e3e
* Boot ID: dbc9b935-bc26-46bd-9ba7-e3ed79a60554
* Kernel Version: 4.19.114
* OS Image: Buildroot 2019.02.11
* Operating System: linux
* Architecture: amd64
* Container Runtime Version: docker://19.3.12
* Kubelet Version: v1.18.3
* Kube-Proxy Version: v1.18.3
* Non-terminated Pods: (7 in total)
* Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
* --------- ---- ------------ ---------- --------------- ------------- ---
* kube-system coredns-66bff467f8-p2qvt 100m (5%) 0 (0%) 70Mi (1%) 170Mi (4%) 5m46s
* kube-system etcd-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m43s
* kube-system kube-apiserver-minikube 250m (12%) 0 (0%) 0 (0%) 0 (0%) 5m43s
* kube-system kube-controller-manager-minikube 200m (10%) 0 (0%) 0 (0%) 0 (0%) 5m43s
* kube-system kube-proxy-g7f9s 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m45s
* kube-system kube-scheduler-minikube 100m (5%) 0 (0%) 0 (0%) 0 (0%) 5m43s
* kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m49s
* Allocated resources:
* (Total limits may be over 100 percent, i.e., overcommitted.)
* Resource Requests Limits
* -------- -------- ------
* cpu 650m (32%) 0 (0%)
* memory 70Mi (1%) 170Mi (4%)
* ephemeral-storage 0 (0%) 0 (0%)
* hugepages-2Mi 0 (0%) 0 (0%)
* Events:
* Type Reason Age From Message
* ---- ------ ---- ---- -------
* Normal NodeHasSufficientMemory 6m12s (x7 over 6m12s) kubelet, minikube Node minikube status is now: NodeHasSufficientMemory
* Normal NodeHasNoDiskPressure 6m12s (x7 over 6m12s) kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure
* Normal NodeHasSufficientPID 6m12s (x6 over 6m12s) kubelet, minikube Node minikube status is now: NodeHasSufficientPID
* Normal Starting 5m45s kubelet, minikube Starting kubelet.
* Normal NodeHasSufficientMemory 5m44s kubelet, minikube Node minikube status is now: NodeHasSufficientMemory
* Normal NodeHasNoDiskPressure 5m44s kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure
* Normal NodeHasSufficientPID 5m44s kubelet, minikube Node minikube status is now: NodeHasSufficientPID
* Normal NodeAllocatableEnforced 5m43s kubelet, minikube Updated Node Allocatable limit across pods
* Normal Starting 5m37s kube-proxy, minikube Starting kube-proxy.
* Normal NodeReady 5m33s kubelet, minikube Node minikube status is now: NodeReady
*
* ==> dmesg <==
* [ +4.954583] hpet1: lost 316 rtc interrupts
* [Sep 2 11:06] hpet1: lost 318 rtc interrupts
* [ +5.024408] hpet1: lost 319 rtc interrupts
* [ +4.984644] hpet1: lost 317 rtc interrupts
* [ +4.998812] hpet1: lost 318 rtc interrupts
* [ +5.008332] hpet1: lost 319 rtc interrupts
* [ +4.993191] hpet1: lost 317 rtc interrupts
* [ +5.021803] hpet1: lost 321 rtc interrupts
* [ +4.981258] hpet1: lost 317 rtc interrupts
* [ +5.034212] hpet1: lost 321 rtc interrupts
* [ +5.001166] hpet1: lost 318 rtc interrupts
* [ +4.988043] hpet1: lost 318 rtc interrupts
* [ +5.002682] hpet1: lost 318 rtc interrupts
* [Sep 2 11:07] hpet1: lost 319 rtc interrupts
* [ +4.985106] hpet1: lost 318 rtc interrupts
* [ +5.001318] hpet1: lost 319 rtc interrupts
* [ +4.998940] hpet1: lost 318 rtc interrupts
* [ +5.019361] hpet1: lost 319 rtc interrupts
* [ +4.993140] hpet1: lost 318 rtc interrupts
* [ +5.012824] hpet1: lost 319 rtc interrupts
* [ +4.997312] hpet1: lost 318 rtc interrupts
* [ +5.001332] hpet1: lost 318 rtc interrupts
* [ +5.001639] hpet1: lost 318 rtc interrupts
* [ +5.008753] hpet1: lost 319 rtc interrupts
* [ +5.002465] hpet1: lost 318 rtc interrupts
* [Sep 2 11:08] hpet1: lost 320 rtc interrupts
* [ +4.967552] hpet1: lost 316 rtc interrupts
* [ +5.000928] hpet1: lost 319 rtc interrupts
* [ +5.003784] hpet1: lost 319 rtc interrupts
* [ +4.995409] hpet1: lost 317 rtc interrupts
* [ +5.000417] hpet1: lost 318 rtc interrupts
* [ +4.999847] hpet1: lost 318 rtc interrupts
* [ +4.999549] hpet1: lost 319 rtc interrupts
* [ +4.999126] hpet1: lost 318 rtc interrupts
* [ +5.000243] hpet1: lost 319 rtc interrupts
* [ +5.010066] hpet1: lost 319 rtc interrupts
* [ +5.000997] hpet1: lost 318 rtc interrupts
* [Sep 2 11:09] hpet1: lost 318 rtc interrupts
* [ +4.998530] hpet1: lost 318 rtc interrupts
* [ +5.004056] hpet1: lost 318 rtc interrupts
* [ +4.995696] hpet1: lost 318 rtc interrupts
* [ +5.000786] hpet1: lost 318 rtc interrupts
* [ +5.003414] hpet1: lost 318 rtc interrupts
* [ +4.993801] hpet1: lost 317 rtc interrupts
* [ +5.007970] hpet1: lost 319 rtc interrupts
* [ +5.007909] hpet1: lost 318 rtc interrupts
* [ +5.001095] hpet1: lost 319 rtc interrupts
* [ +4.994792] hpet1: lost 317 rtc interrupts
* [ +5.005532] hpet1: lost 319 rtc interrupts
* [Sep 2 11:10] hpet1: lost 318 rtc interrupts
* [ +4.991024] hpet1: lost 318 rtc interrupts
* [ +5.004763] hpet1: lost 318 rtc interrupts
* [ +4.990445] hpet1: lost 318 rtc interrupts
* [ +5.002775] hpet1: lost 318 rtc interrupts
* [ +4.996689] hpet1: lost 318 rtc interrupts
* [ +4.998669] hpet1: lost 317 rtc interrupts
* [ +5.005289] hpet1: lost 319 rtc interrupts
* [ +5.003642] hpet1: lost 318 rtc interrupts
* [ +4.996049] hpet1: lost 319 rtc interrupts
* [ +5.006671] hpet1: lost 318 rtc interrupts
*
* ==> etcd [88075e13fb84] <==
* [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
* 2020-09-02 11:04:46.993293 I | etcdmain: etcd Version: 3.4.3
* 2020-09-02 11:04:46.993536 I | etcdmain: Git SHA: 3cf2f69b5
* 2020-09-02 11:04:46.993647 I | etcdmain: Go Version: go1.12.12
* 2020-09-02 11:04:46.993791 I | etcdmain: Go OS/Arch: linux/amd64
* 2020-09-02 11:04:46.993969 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2
* [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
* 2020-09-02 11:04:46.994433 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
* 2020-09-02 11:04:46.995833 I | embed: name = minikube
* 2020-09-02 11:04:46.995990 I | embed: data dir = /var/lib/minikube/etcd
* 2020-09-02 11:04:47.016552 I | embed: member dir = /var/lib/minikube/etcd/member
* 2020-09-02 11:04:47.016607 I | embed: heartbeat = 100ms
* 2020-09-02 11:04:47.016615 I | embed: election = 1000ms
* 2020-09-02 11:04:47.016622 I | embed: snapshot count = 10000
* 2020-09-02 11:04:47.016661 I | embed: advertise client URLs = https://192.168.99.109:2379
* 2020-09-02 11:04:47.080729 I | etcdserver: starting member 409eb56b6c648857 in cluster 788b527ace944ac8
* raft2020/09/02 11:04:47 INFO: 409eb56b6c648857 switched to configuration voters=()
* raft2020/09/02 11:04:47 INFO: 409eb56b6c648857 became follower at term 0
* raft2020/09/02 11:04:47 INFO: newRaft 409eb56b6c648857 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
* raft2020/09/02 11:04:47 INFO: 409eb56b6c648857 became follower at term 1
* raft2020/09/02 11:04:47 INFO: 409eb56b6c648857 switched to configuration voters=(4656358537732327511)
* 2020-09-02 11:04:47.099575 W | auth: simple token is not cryptographically signed
* 2020-09-02 11:04:47.107490 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
* 2020-09-02 11:04:47.112903 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
* 2020-09-02 11:04:47.113469 I | embed: listening for metrics on http://127.0.0.1:2381
* 2020-09-02 11:04:47.114080 I | etcdserver: 409eb56b6c648857 as single-node; fast-forwarding 9 ticks (election ticks 10)
* 2020-09-02 11:04:47.115044 I | embed: listening for peers on 192.168.99.109:2380
* raft2020/09/02 11:04:47 INFO: 409eb56b6c648857 switched to configuration voters=(4656358537732327511)
* 2020-09-02 11:04:47.116000 I | etcdserver/membership: added member 409eb56b6c648857 [https://192.168.99.109:2380] to cluster 788b527ace944ac8
* raft2020/09/02 11:04:47 INFO: 409eb56b6c648857 is starting a new election at term 1
* raft2020/09/02 11:04:47 INFO: 409eb56b6c648857 became candidate at term 2
* raft2020/09/02 11:04:47 INFO: 409eb56b6c648857 received MsgVoteResp from 409eb56b6c648857 at term 2
* raft2020/09/02 11:04:47 INFO: 409eb56b6c648857 became leader at term 2
* raft2020/09/02 11:04:47 INFO: raft.node: 409eb56b6c648857 elected leader 409eb56b6c648857 at term 2
* 2020-09-02 11:04:47.990522 I | etcdserver: published {Name:minikube ClientURLs:[https://192.168.99.109:2379]} to cluster 788b527ace944ac8
* 2020-09-02 11:04:47.992028 I | embed: ready to serve client requests
* 2020-09-02 11:04:47.993388 I | etcdserver: setting up the initial cluster version to 3.4
* 2020-09-02 11:04:47.995462 I | embed: ready to serve client requests
* 2020-09-02 11:04:47.999119 N | etcdserver/membership: set the initial cluster version to 3.4
* 2020-09-02 11:04:47.999476 I | embed: serving client requests on 127.0.0.1:2379
* 2020-09-02 11:04:47.999871 I | embed: serving client requests on 192.168.99.109:2379
* 2020-09-02 11:04:48.011934 I | etcdserver/api: enabled capabilities for version 3.4
* 2020-09-02 11:05:08.123057 W | etcdserver: request "header: txn: success:> failure:<>>" with result "size:16" took too long (244.65921ms) to execute
* 2020-09-02 11:05:08.153677 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-66bff467f8-p2qvt\" " with result "range_response_count:1 size:3753" took too long (297.313684ms) to execute
* 2020-09-02 11:05:08.154007 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-public/default\" " with result "range_response_count:1 size:181" took too long (135.278232ms) to execute
* 2020-09-02 11:06:35.700892 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:600" took too long (444.205065ms) to execute
* 2020-09-02 11:06:35.701937 W | etcdserver: read-only range request "key:\"/registry/pods\" range_end:\"/registry/podt\" count_only:true " with result "range_response_count:0 size:7" took too long (755.932298ms) to execute
* 2020-09-02 11:06:39.767879 W | wal: sync duration of 2.021808935s, expected less than 1s
* 2020-09-02 11:06:39.784299 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.626356897s) to execute
* 2020-09-02 11:06:39.791392 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:257" took too long (259.301685ms) to execute
*
* ==> kernel <==
* 11:10:55 up 7 min, 0 users, load average: 0.60, 1.64, 0.99
* Linux minikube 4.19.114 #1 SMP Mon Aug 3 12:35:22 PDT 2020 x86_64 GNU/Linux
* PRETTY_NAME="Buildroot 2019.02.11"
*
* ==> kube-apiserver [ccbf0d624030] <==
* I0902 11:04:51.139042 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
* I0902 11:04:51.139486 1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
* I0902 11:04:51.144051 1 client.go:361] parsed scheme: "endpoint"
* I0902 11:04:51.144284 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
* I0902 11:04:51.174117 1 client.go:361] parsed scheme: "endpoint"
* I0902 11:04:51.174318 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
* I0902 11:04:56.218638 1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
* I0902 11:04:56.219391 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
* I0902 11:04:56.219829 1 secure_serving.go:178] Serving securely on [::]:8443
* I0902 11:04:56.220680 1 controller.go:81] Starting OpenAPI AggregationController
* I0902 11:04:56.223016 1 available_controller.go:387] Starting AvailableConditionController
* I0902 11:04:56.223113 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
* I0902 11:04:56.223535 1 crd_finalizer.go:266] Starting CRDFinalizer
* I0902 11:04:56.220683 1 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key
* I0902 11:04:56.232158 1 controller.go:86] Starting OpenAPI controller
* I0902 11:04:56.232301 1 customresource_discovery_controller.go:209] Starting DiscoveryController
* I0902 11:04:56.232319 1 naming_controller.go:291] Starting NamingConditionController
* I0902 11:04:56.232341 1 establishing_controller.go:76] Starting EstablishingController
* I0902 11:04:56.232358 1 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController
* I0902 11:04:56.232390 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
* I0902 11:04:56.220731 1 apiservice_controller.go:94] Starting APIServiceRegistrationController
* I0902 11:04:56.232716 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
* I0902 11:04:56.233995 1 autoregister_controller.go:141] Starting autoregister controller
* I0902 11:04:56.234011 1 cache.go:32] Waiting for caches to sync for autoregister controller
* I0902 11:04:56.220742 1 tlsconfig.go:240] Starting DynamicServingCertificateController
* I0902 11:04:56.241448 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
* I0902 11:04:56.241811 1 shared_informer.go:223] Waiting for caches to sync for cluster_authentication_trust_controller
* E0902 11:04:56.250668 1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.99.109, ResourceVersion: 0, AdditionalErrorMsg:
* I0902 11:04:56.252641 1 log.go:172] http: TLS handshake error from 192.168.99.109:58642: EOF
* I0902 11:04:56.258540 1 log.go:172] http: TLS handshake error from 192.168.99.109:58644: EOF
* I0902 11:04:56.263439 1 log.go:172] http: TLS handshake error from 192.168.99.109:58646: EOF
* I0902 11:04:56.268768 1 log.go:172] http: TLS handshake error from 192.168.99.109:58648: EOF
* I0902 11:04:56.274606 1 log.go:172] http: TLS handshake error from 192.168.99.109:58650: EOF
* I0902 11:04:56.279468 1 log.go:172] http: TLS handshake error from 192.168.99.109:58652: EOF
* I0902 11:04:56.284561 1 log.go:172] http: TLS handshake error from 192.168.99.109:58654: EOF
* I0902 11:04:56.387958 1 crdregistration_controller.go:111] Starting crd-autoregister controller
* I0902 11:04:56.390730 1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister
* I0902 11:04:56.390965 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
* I0902 11:04:56.391023 1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
* I0902 11:04:56.499340 1 shared_informer.go:230] Caches are synced for crd-autoregister
* I0902 11:04:56.527218 1 cache.go:39] Caches are synced for AvailableConditionController controller
* I0902 11:04:56.536787 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
* I0902 11:04:56.556888 1 cache.go:39] Caches are synced for autoregister controller
* I0902 11:04:56.558224 1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller
* I0902 11:04:57.219241 1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
* I0902 11:04:57.222571 1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
* I0902 11:04:57.247755 1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
* I0902 11:04:57.284094 1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
* I0902 11:04:57.284351 1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
* I0902 11:04:59.037231 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
* I0902 11:04:59.317989 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
* W0902 11:04:59.623403 1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.99.109]
* I0902 11:04:59.629449 1 controller.go:606] quota admission added evaluator for: endpoints
* I0902 11:04:59.660623 1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
* I0902 11:05:00.789443 1 controller.go:606] quota admission added evaluator for: serviceaccounts
* I0902 11:05:01.720943 1 controller.go:606] quota admission added evaluator for: deployments.apps
* I0902 11:05:01.901894 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
* I0902 11:05:07.687752 1 controller.go:606] quota admission added evaluator for: replicasets.apps
* I0902 11:05:08.044219 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
* I0902 11:05:09.112425 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
*
* ==> kube-controller-manager [d2cf966a8b06] <==
* I0902 11:05:06.196791 1 daemon_controller.go:257] Starting daemon sets controller
* I0902 11:05:06.197077 1 shared_informer.go:223] Waiting for caches to sync for daemon sets
* I0902 11:05:06.446138 1 controllermanager.go:533] Started "tokencleaner"
* I0902 11:05:06.446255 1 tokencleaner.go:118] Starting token cleaner controller
* I0902 11:05:06.446266 1 shared_informer.go:223] Waiting for caches to sync for token_cleaner
* I0902 11:05:06.446274 1 shared_informer.go:230] Caches are synced for token_cleaner
* I0902 11:05:06.696640 1 controllermanager.go:533] Started "endpointslice"
* I0902 11:05:06.696801 1 endpointslice_controller.go:213] Starting endpoint slice controller
* I0902 11:05:06.696815 1 shared_informer.go:223] Waiting for caches to sync for endpoint_slice
* I0902 11:05:06.935375 1 controllermanager.go:533] Started "podgc"
* I0902 11:05:06.936301 1 gc_controller.go:89] Starting GC controller
* I0902 11:05:06.936420 1 shared_informer.go:223] Waiting for caches to sync for GC
* I0902 11:05:07.189808 1 controllermanager.go:533] Started "serviceaccount"
* I0902 11:05:07.189994 1 serviceaccounts_controller.go:117] Starting service account controller
* I0902 11:05:07.190005 1 shared_informer.go:223] Waiting for caches to sync for service account
* I0902 11:05:07.443288 1 controllermanager.go:533] Started "pvc-protection"
* I0902 11:05:07.443558 1 pvc_protection_controller.go:101] Starting PVC protection controller
* I0902 11:05:07.445630 1 shared_informer.go:223] Waiting for caches to sync for PVC protection
* I0902 11:05:07.452164 1 shared_informer.go:223] Waiting for caches to sync for garbage collector
* I0902 11:05:07.492169 1 shared_informer.go:223] Waiting for caches to sync for resource quota
* I0902 11:05:07.572030 1 shared_informer.go:230] Caches are synced for certificate-csrsigning
* I0902 11:05:07.581084 1 shared_informer.go:230] Caches are synced for certificate-csrapproving
* I0902 11:05:07.584224 1 shared_informer.go:230] Caches are synced for bootstrap_signer
* I0902 11:05:07.601471 1 shared_informer.go:230] Caches are synced for ReplicaSet
* I0902 11:05:07.604745 1 shared_informer.go:230] Caches are synced for ClusterRoleAggregator
* I0902 11:05:07.618015 1 shared_informer.go:230] Caches are synced for HPA
* I0902 11:05:07.646470 1 shared_informer.go:230] Caches are synced for PVC protection
* I0902 11:05:07.646530 1 shared_informer.go:230] Caches are synced for PV protection
* W0902 11:05:07.654993 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
* I0902 11:05:07.666316 1 shared_informer.go:230] Caches are synced for deployment
* I0902 11:05:07.674730 1 shared_informer.go:230] Caches are synced for namespace
* I0902 11:05:07.679445 1 shared_informer.go:230] Caches are synced for ReplicationController
* I0902 11:05:07.686460 1 shared_informer.go:230] Caches are synced for expand
* I0902 11:05:07.693154 1 shared_informer.go:230] Caches are synced for service account
* I0902 11:05:07.693307 1 shared_informer.go:230] Caches are synced for disruption
* I0902 11:05:07.693321 1 disruption.go:339] Sending events to api server.
* I0902 11:05:07.704250 1 shared_informer.go:230] Caches are synced for stateful set
* I0902 11:05:07.704376 1 shared_informer.go:230] Caches are synced for TTL
* I0902 11:05:07.709801 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"85334cbc-573a-47e8-8e65-e337b9be32eb", APIVersion:"apps/v1", ResourceVersion:"239", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 1
* I0902 11:05:07.723758 1 shared_informer.go:230] Caches are synced for persistent volume
* I0902 11:05:07.739097 1 shared_informer.go:230] Caches are synced for GC
* I0902 11:05:07.748408 1 shared_informer.go:230] Caches are synced for attach detach
* I0902 11:05:07.789950 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"269c786f-a425-4dec-a5d0-beea17872c47", APIVersion:"apps/v1", ResourceVersion:"303", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-p2qvt
* I0902 11:05:07.796308 1 shared_informer.go:230] Caches are synced for job
* I0902 11:05:07.883789 1 shared_informer.go:230] Caches are synced for endpoint
* I0902 11:05:07.897621 1 shared_informer.go:230] Caches are synced for endpoint_slice
* I0902 11:05:07.940542 1 shared_informer.go:230] Caches are synced for taint
* I0902 11:05:07.940731 1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone:
* W0902 11:05:07.941036 1 node_lifecycle_controller.go:1048] Missing timestamp for Node minikube. Assuming now as a timestamp.
* I0902 11:05:07.941385 1 node_lifecycle_controller.go:1199] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
* I0902 11:05:07.946458 1 taint_manager.go:187] Starting NoExecuteTaintManager
* I0902 11:05:07.946880 1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"67c792d7-48b2-4da0-8f30-70b6f4cc0297", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller
* I0902 11:05:07.999022 1 shared_informer.go:230] Caches are synced for daemon sets
* I0902 11:05:08.058735 1 shared_informer.go:230] Caches are synced for resource quota
* I0902 11:05:08.059543 1 shared_informer.go:230] Caches are synced for garbage collector
* I0902 11:05:08.059675 1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
* I0902 11:05:08.097527 1 shared_informer.go:230] Caches are synced for resource quota
* I0902 11:05:08.166910 1 shared_informer.go:230] Caches are synced for garbage collector
* I0902 11:05:08.255585 1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"af222e49-711c-47a4-ad03-d2eac8002526", APIVersion:"apps/v1", ResourceVersion:"221", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-g7f9s
* I0902 11:05:22.944995 1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
*
* ==> kube-proxy [92249ee70f87] <==
* W0902 11:05:16.173768 1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
* I0902 11:05:16.203146 1 node.go:136] Successfully retrieved node IP: 192.168.99.109
* I0902 11:05:16.203211 1 server_others.go:186] Using iptables Proxier.
* W0902 11:05:16.203226 1 server_others.go:436] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined
* I0902 11:05:16.203262 1 server_others.go:447] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local
* I0902 11:05:16.203838 1 server.go:583] Version: v1.18.3
* I0902 11:05:16.204437 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
* I0902 11:05:16.204495 1 conntrack.go:52] Setting nf_conntrack_max to 131072
* I0902 11:05:16.210211 1 conntrack.go:83] Setting conntrack hashsize to 32768
* I0902 11:05:16.224898 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
* I0902 11:05:16.225011 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
* I0902 11:05:16.226038 1 config.go:315] Starting service config controller
* I0902 11:05:16.226049 1 shared_informer.go:223] Waiting for caches to sync for service config
* I0902 11:05:16.226099 1 config.go:133] Starting endpoints config controller
* I0902 11:05:16.226111 1 shared_informer.go:223] Waiting for caches to sync for endpoints config
* I0902 11:05:16.328408 1 shared_informer.go:230] Caches are synced for endpoints config
* I0902 11:05:16.328508 1 shared_informer.go:230] Caches are synced for service config
*
* ==> kube-scheduler [b8242fe570b1] <==
* I0902 11:04:43.943260 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
* I0902 11:04:43.943363 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
* I0902 11:04:45.692030 1 serving.go:313] Generated self-signed cert in-memory
* W0902 11:04:56.422795 1 authentication.go:349] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
* W0902 11:04:56.422922 1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
* W0902 11:04:56.424479 1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
* W0902 11:04:56.424492 1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
* I0902 11:04:56.468633 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
* I0902 11:04:56.468759 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
* W0902 11:04:56.475429 1 authorization.go:47] Authorization is disabled
* W0902 11:04:56.478110 1 authentication.go:40] Authentication is disabled
* I0902 11:04:56.478409 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
* I0902 11:04:56.483152 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
* I0902 11:04:56.483524 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* I0902 11:04:56.483718 1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* I0902 11:04:56.483847 1 tlsconfig.go:240] Starting DynamicServingCertificateController
* E0902 11:04:56.495515 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
* E0902 11:04:56.497375 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
* E0902 11:04:56.502521 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
* E0902 11:04:56.505285 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
* E0902 11:04:56.508009 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
* E0902 11:04:56.508160 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
* E0902 11:04:56.508275 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
* E0902 11:04:56.508496 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
* E0902 11:04:56.508608 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
* E0902 11:04:57.312961 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
* E0902 11:04:57.409200 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
* E0902 11:04:57.467675 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
* E0902 11:04:57.759218 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
* E0902 11:04:57.833287 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
* E0902 11:04:57.907662 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
* E0902 11:04:57.930371 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
* E0902 11:04:58.042942 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
* E0902 11:04:58.080187 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
* I0902 11:04:59.384797 1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* E0902 11:05:04.486171 1 factory.go:503] pod: kube-system/storage-provisioner is already present in the active queue
* E0902 11:05:08.339859 1 factory.go:503] pod: kube-system/coredns-66bff467f8-p2qvt is already present in unschedulable queue
*
* ==> kubelet <==
* -- Logs begin at Wed 2020-09-02 11:03:20 UTC, end at Wed 2020-09-02 11:10:57 UTC. --
* Sep 02 11:05:02 minikube kubelet[5644]: I0902 11:05:02.582831 5644 clientconn.go:933] ClientConn switching balancer to "pick_first"
* Sep 02 11:05:02 minikube kubelet[5644]: I0902 11:05:02.582879 5644 kubelet.go:292] Adding pod path: /etc/kubernetes/manifests
* Sep 02 11:05:02 minikube kubelet[5644]: I0902 11:05:02.582922 5644 kubelet.go:317] Watching apiserver
* Sep 02 11:05:08 minikube kubelet[5644]: E0902 11:05:08.926767 5644 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
* Sep 02 11:05:08 minikube kubelet[5644]: For verbose messaging see aws.Config.CredentialsChainVerboseErrors
* Sep 02 11:05:08 minikube kubelet[5644]: I0902 11:05:08.986983 5644 kuberuntime_manager.go:211] Container runtime docker initialized, version: 19.03.12, apiVersion: 1.40.0
* Sep 02 11:05:08 minikube kubelet[5644]: I0902 11:05:08.989684 5644 server.go:1125] Started kubelet
* Sep 02 11:05:08 minikube kubelet[5644]: I0902 11:05:08.991748 5644 server.go:145] Starting to listen on 0.0.0.0:10250
* Sep 02 11:05:09 minikube kubelet[5644]: I0902 11:05:09.025819 5644 server.go:393] Adding debug handlers to kubelet server.
* Sep 02 11:05:09 minikube kubelet[5644]: I0902 11:05:09.034802 5644 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
* Sep 02 11:05:09 minikube kubelet[5644]: I0902 11:05:09.051925 5644 volume_manager.go:265] Starting Kubelet Volume Manager
* Sep 02 11:05:09 minikube kubelet[5644]: I0902 11:05:09.057349 5644 desired_state_of_world_populator.go:139] Desired state populator starts to run
* Sep 02 11:05:09 minikube kubelet[5644]: I0902 11:05:09.158986 5644 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
* Sep 02 11:05:09 minikube kubelet[5644]: I0902 11:05:09.221792 5644 status_manager.go:158] Starting to sync pod status with apiserver
* Sep 02 11:05:09 minikube kubelet[5644]: I0902 11:05:09.221868 5644 kubelet.go:1821] Starting kubelet main sync loop.
* Sep 02 11:05:09 minikube kubelet[5644]: E0902 11:05:09.221981 5644 kubelet.go:1845] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
* Sep 02 11:05:09 minikube kubelet[5644]: E0902 11:05:09.323450 5644 kubelet.go:1845] skipping pod synchronization - container runtime status check may not have completed yet
* Sep 02 11:05:09 minikube kubelet[5644]: E0902 11:05:09.527123 5644 kubelet.go:1845] skipping pod synchronization - container runtime status check may not have completed yet
* Sep 02 11:05:09 minikube kubelet[5644]: I0902 11:05:09.623292 5644 kubelet_node_status.go:70] Attempting to register node minikube
* Sep 02 11:05:09 minikube kubelet[5644]: I0902 11:05:09.682876 5644 kubelet_node_status.go:112] Node minikube was previously registered
* Sep 02 11:05:09 minikube kubelet[5644]: I0902 11:05:09.683567 5644 kubelet_node_status.go:73] Successfully registered node minikube
* Sep 02 11:05:09 minikube kubelet[5644]: E0902 11:05:09.930254 5644 kubelet.go:1845] skipping pod synchronization - container runtime status check may not have completed yet
* Sep 02 11:05:10 minikube kubelet[5644]: I0902 11:05:10.252031 5644 cpu_manager.go:184] [cpumanager] starting with none policy
* Sep 02 11:05:10 minikube kubelet[5644]: I0902 11:05:10.252676 5644 cpu_manager.go:185] [cpumanager] reconciling every 10s
* Sep 02 11:05:10 minikube kubelet[5644]: I0902 11:05:10.252899 5644 state_mem.go:36] [cpumanager] initializing new in-memory state store
* Sep 02 11:05:10 minikube kubelet[5644]: I0902 11:05:10.253642 5644 state_mem.go:88] [cpumanager] updated default cpuset: ""
* Sep 02 11:05:10 minikube kubelet[5644]: I0902 11:05:10.254126 5644 state_mem.go:96] [cpumanager] updated cpuset assignments: "map[]"
* Sep 02 11:05:10 minikube kubelet[5644]: I0902 11:05:10.254514 5644 policy_none.go:43] [cpumanager] none policy: Start
* Sep 02 11:05:10 minikube kubelet[5644]: I0902 11:05:10.257303 5644 plugin_manager.go:114] Starting Kubelet Plugin Manager
* Sep 02 11:05:10 minikube kubelet[5644]: I0902 11:05:10.732870 5644 topology_manager.go:233] [topologymanager] Topology Admit Handler
* Sep 02 11:05:10 minikube kubelet[5644]: I0902 11:05:10.739947 5644 topology_manager.go:233] [topologymanager] Topology Admit Handler
* Sep 02 11:05:10 minikube kubelet[5644]: I0902 11:05:10.761822 5644 topology_manager.go:233] [topologymanager] Topology Admit Handler
* Sep 02 11:05:10 minikube kubelet[5644]: I0902 11:05:10.802532 5644 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/ba963bc1bff8609dc4fc4d359349c120-kubeconfig") pod "kube-controller-manager-minikube" (UID: "ba963bc1bff8609dc4fc4d359349c120")
* Sep 02 11:05:10 minikube kubelet[5644]: I0902 11:05:10.810489 5644 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/ba963bc1bff8609dc4fc4d359349c120-usr-share-ca-certificates") pod "kube-controller-manager-minikube" (UID: "ba963bc1bff8609dc4fc4d359349c120")
* Sep 02 11:05:10 minikube kubelet[5644]: I0902 11:05:10.810791 5644 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/73ddd619ad9b2e4cd2065ac2dbef2883-etcd-certs") pod "etcd-minikube" (UID: "73ddd619ad9b2e4cd2065ac2dbef2883")
* Sep 02 11:05:10 minikube kubelet[5644]: I0902 11:05:10.815995 5644 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/29df4837136a989117e666a87d7f3b34-ca-certs") pod "kube-apiserver-minikube" (UID: "29df4837136a989117e666a87d7f3b34")
* Sep 02 11:05:10 minikube kubelet[5644]: I0902 11:05:10.816146 5644 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/29df4837136a989117e666a87d7f3b34-usr-share-ca-certificates") pod "kube-apiserver-minikube" (UID: "29df4837136a989117e666a87d7f3b34")
* Sep 02 11:05:10 minikube kubelet[5644]: I0902 11:05:10.816218 5644 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/ba963bc1bff8609dc4fc4d359349c120-ca-certs") pod "kube-controller-manager-minikube" (UID: "ba963bc1bff8609dc4fc4d359349c120")
* Sep 02 11:05:10 minikube kubelet[5644]: I0902 11:05:10.816271 5644 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/ba963bc1bff8609dc4fc4d359349c120-k8s-certs") pod "kube-controller-manager-minikube" (UID: "ba963bc1bff8609dc4fc4d359349c120")
* Sep 02 11:05:10 minikube kubelet[5644]: I0902 11:05:10.816300 5644 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/73ddd619ad9b2e4cd2065ac2dbef2883-etcd-data") pod "etcd-minikube" (UID: "73ddd619ad9b2e4cd2065ac2dbef2883")
* Sep 02 11:05:10 minikube kubelet[5644]: I0902 11:05:10.816351 5644 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/29df4837136a989117e666a87d7f3b34-k8s-certs") pod "kube-apiserver-minikube" (UID: "29df4837136a989117e666a87d7f3b34")
* Sep 02 11:05:10 minikube kubelet[5644]: I0902 11:05:10.816414 5644 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName:
```
"kubernetes.io/host-path/ba963bc1bff8609dc4fc4d359349c120-flexvolume-dir") pod "kube-controller-manager-minikube" (UID: "ba963bc1bff8609dc4fc4d359349c120") * Sep 02 11:05:10 minikube kubelet[5644]: I0902 11:05:10.828938 5644 topology_manager.go:233] [topologymanager] Topology Admit Handler * Sep 02 11:05:10 minikube kubelet[5644]: I0902 11:05:10.889981 5644 topology_manager.go:233] [topologymanager] Topology Admit Handler * Sep 02 11:05:10 minikube kubelet[5644]: I0902 11:05:10.940472 5644 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/dcddbd0cc8c89e2cbf4de5d3cca8769f-kubeconfig") pod "kube-scheduler-minikube" (UID: "dcddbd0cc8c89e2cbf4de5d3cca8769f") * Sep 02 11:05:11 minikube kubelet[5644]: I0902 11:05:11.058461 5644 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/819d4c0c-47d9-4793-b818-ceac4ba89c9e-xtables-lock") pod "kube-proxy-g7f9s" (UID: "819d4c0c-47d9-4793-b818-ceac4ba89c9e") * Sep 02 11:05:11 minikube kubelet[5644]: I0902 11:05:11.058714 5644 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-9mvlf" (UniqueName: "kubernetes.io/secret/819d4c0c-47d9-4793-b818-ceac4ba89c9e-kube-proxy-token-9mvlf") pod "kube-proxy-g7f9s" (UID: "819d4c0c-47d9-4793-b818-ceac4ba89c9e") * Sep 02 11:05:11 minikube kubelet[5644]: I0902 11:05:11.058772 5644 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/819d4c0c-47d9-4793-b818-ceac4ba89c9e-kube-proxy") pod "kube-proxy-g7f9s" (UID: "819d4c0c-47d9-4793-b818-ceac4ba89c9e") * Sep 02 11:05:11 minikube kubelet[5644]: I0902 11:05:11.058815 5644 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/819d4c0c-47d9-4793-b818-ceac4ba89c9e-lib-modules") pod "kube-proxy-g7f9s" (UID: "819d4c0c-47d9-4793-b818-ceac4ba89c9e") * Sep 02 11:05:11 minikube kubelet[5644]: I0902 11:05:11.058834 5644 reconciler.go:157] Reconciler: start to sync state * Sep 02 11:05:13 minikube kubelet[5644]: W0902 11:05:13.583103 5644 pod_container_deletor.go:77] Container "127b449001d4aa2b152f94839d9a14dc77edb7f741a07a4da41750c27e916f99" not found in pod's containers * Sep 02 11:05:23 minikube kubelet[5644]: I0902 11:05:23.331447 5644 topology_manager.go:233] [topologymanager] Topology Admit Handler * Sep 02 11:05:23 minikube kubelet[5644]: I0902 11:05:23.345356 5644 topology_manager.go:233] [topologymanager] Topology Admit Handler * Sep 02 11:05:23 minikube kubelet[5644]: I0902 11:05:23.514409 5644 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7d43b57a-9485-44fe-9657-7607a06aba83-config-volume") pod "coredns-66bff467f8-p2qvt" (UID: "7d43b57a-9485-44fe-9657-7607a06aba83") * Sep 02 11:05:23 minikube kubelet[5644]: I0902 11:05:23.514859 5644 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-q4c48" (UniqueName: "kubernetes.io/secret/7d43b57a-9485-44fe-9657-7607a06aba83-coredns-token-q4c48") pod "coredns-66bff467f8-p2qvt" (UID: "7d43b57a-9485-44fe-9657-7607a06aba83") * Sep 02 11:05:23 minikube kubelet[5644]: I0902 11:05:23.514920 5644 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: 
"kubernetes.io/host-path/f087a754-e660-4150-93cd-4fcfafb1d628-tmp") pod "storage-provisioner" (UID: "f087a754-e660-4150-93cd-4fcfafb1d628") * Sep 02 11:05:23 minikube kubelet[5644]: I0902 11:05:23.514975 5644 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-ltv9p" (UniqueName: "kubernetes.io/secret/f087a754-e660-4150-93cd-4fcfafb1d628-storage-provisioner-token-ltv9p") pod "storage-provisioner" (UID: "f087a754-e660-4150-93cd-4fcfafb1d628") * Sep 02 11:05:25 minikube kubelet[5644]: W0902 11:05:25.486826 5644 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-p2qvt through plugin: invalid network status for * Sep 02 11:05:25 minikube kubelet[5644]: W0902 11:05:25.490720 5644 pod_container_deletor.go:77] Container "a437225656ca6ba0762769f431c756be96c0322e7a3efe89a9108d771c1d6d13" not found in pod's containers * Sep 02 11:05:26 minikube kubelet[5644]: W0902 11:05:26.517853 5644 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-p2qvt through plugin: invalid network status for * * ==> storage-provisioner [8ab3858b6cdb] <== * I0902 11:05:26.015829 1 leaderelection.go:242] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath... * I0902 11:05:26.058694 1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath * I0902 11:05:26.075438 1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_minikube_446fcb40-c531-45e6-a708-224f430582dc! * I0902 11:05:26.079160 1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"670dcf67-b288-49bc-913f-d0e51b13791d", APIVersion:"v1", ResourceVersion:"383", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_446fcb40-c531-45e6-a708-224f430582dc became leader * I0902 11:05:26.187587 1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_minikube_446fcb40-c531-45e6-a708-224f430582dc! ```
@RA489

RA489 commented Sep 3, 2020

/triage support

@k8s-ci-robot k8s-ci-robot added the kind/support Categorizes issue or PR as a support question. label Sep 3, 2020
@tstromberg tstromberg changed the title To many log files in /tmp - loop in logging by translation file not found Turn post-mortem (glog) logs off by default Sep 15, 2020
@tstromberg tstromberg added kind/cleanup Categorizes issue or PR as related to cleaning up code, process, or technical debt. priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. and removed kind/support Categorizes issue or PR as a support question. priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. labels Sep 15, 2020
@tstromberg tstromberg modified the milestones: v1.14.0, v1.15.0-candidate Sep 21, 2020
@tstromberg tstromberg changed the title Turn post-mortem (glog) logs off by default $TMPDIR fills with log files - change logging to create less inodes Sep 21, 2020
@tstromberg
Contributor

I'm bumping up the priority on this, as it has apparently caused issues for users running 3rd-party software that calls minikube status in a loop.

I looked into this some, and it seems that this behavior can't be changed with the current logging library in use: https://godoc.org/github.com/golang/glog
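For illustration, a minimal sketch (the program and messages are mine, not minikube code) of why every run creates fresh files: glog opens a new timestamped file per severity under os.TempDir() the first time each severity is logged, which matches the file names reported above.

```go
package main

import (
	"flag"

	"github.com/golang/glog"
)

func main() {
	// glog registers -log_dir, -logtostderr, -stderrthreshold, etc. in init().
	flag.Parse()

	// With no flags set, glog writes under os.TempDir(), creating files named
	//   <binary>.<host>.<user>.log.<SEVERITY>.<yyyymmdd-hhmmss>.<pid>
	// A fresh set appears on every invocation, which is how $TMPDIR fills up.
	glog.Info("hello from a brand-new INFO file")
	glog.Warning("this lands in a brand-new WARNING file (and cascades to INFO)")
	glog.Flush() // glog buffers; flush before exit
}
```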

I attempted to migrate minikube to klog last year, but didn't seal the deal: #5318

My preferred approach would be to set the logging destination to something that naturally cleans up after itself, for instance, using a hash of ARGV: $TMPDIR/minikube_<argv[1]>_<hash>.log
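A minimal sketch of that idea; the helper name, hash choice, and truncation here are my assumptions, not minikube's actual implementation:

```go
package main

import (
	"fmt"
	"hash/fnv"
	"os"
	"path/filepath"
	"strings"
)

// logPath derives a stable destination from the invocation, e.g.
// $TMPDIR/minikube_status_811c9dc5.log, so repeated runs of the same
// command reuse one file instead of minting new ones each time.
func logPath() string {
	sub := "root"
	if len(os.Args) > 1 {
		sub = os.Args[1]
	}
	h := fnv.New32a()
	h.Write([]byte(strings.Join(os.Args[1:], " ")))
	return filepath.Join(os.TempDir(), fmt.Sprintf("minikube_%s_%08x.log", sub, h.Sum32()))
}

func main() {
	fmt.Println(logPath())
}
```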

@JohnCarmichael3000

JohnCarmichael3000 commented Sep 25, 2020

My $TMPDIR was also filling up with minikube log files, which led me to this issue. I was finding around 30 new "minikube" log files per minute in my temp folder:

filename example:
minikube.exe.myDeskTopName.myDeskTopName_myUserName.log.WARNING.20200924-153603

file content example:
Log file created at: 2020/09/24 15:36:03
Running on machine: myDeskTopName
Binary: Built with gc go1.14.6 for windows/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg

At first, naturally, I blamed minikube, since each log file name started with "minikube.exe". That blame was misplaced, however: I subsequently traced the creation of these files to third-party software. It was Google's Cloud Code extension, installed in my Visual Studio Code editor, that was actually creating these logs, and it does not identify itself in them.

Once I removed the extension, these log files immediately stopped being generated. Link:
https://marketplace.visualstudio.com/items?itemName=GoogleCloudTools.cloudcode.

So the problem can also stem from 3rd-party software driving minikube.

@PriyaModali
Contributor

/assign

@medyagh
Member

medyagh commented Sep 30, 2020

thank you @JohnCarmichael3000 for the details, we will be moving to klog, which will provide a better experience with log files

@medyagh
Member

medyagh commented Nov 30, 2020

we need to ensure our customers are using --log_dir=""

@medyagh
Member

medyagh commented Nov 30, 2020

we also need to verify that --log_dir is working

@medyagh medyagh changed the title $TMPDIR fills with log files - change logging to create less inodes $TMPDIR fills with log files - change logging to create less inodes (add renetion) Nov 30, 2020
@medyagh medyagh changed the title $TMPDIR fills with log files - change logging to create less inodes (add renetion) $TMPDIR fills with log files - change logging to create less inodes (add retention) Nov 30, 2020
@medyagh medyagh modified the milestones: v1.16.0, v1.17.0-candidate Nov 30, 2020
@ilya-zuyev
Contributor

todo: investigate whether it's possible to use log append mode with klog to reuse the existing log file
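For reference, a minimal sketch of routing klog to a single file, assuming klog/v2 is used; whether the file is appended to or truncated across runs is exactly what this todo would verify (the path is illustrative):

```go
package main

import (
	"flag"

	"k8s.io/klog/v2"
)

func main() {
	// klog.InitFlags(nil) registers klog's flags (-log_file, -log_dir,
	// -logtostderr, ...) on the default FlagSet.
	klog.InitFlags(nil)
	flag.Set("logtostderr", "false")
	flag.Set("log_file", "/tmp/minikube.log") // illustrative path
	flag.Parse()

	klog.Info("all severities now share one destination")
	klog.Flush()
}
```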

@medyagh
Member

medyagh commented Dec 21, 2020

@limberger do you mind checking the latest version of minikube? We fixed the --log_dir flag, which should help with this issue.

@limberger
Author

I stopped using minikube in the environment where this issue was occurring, so I can't check whether it is fixed! On my physical machine this issue never happened!

@medyagh
Member

medyagh commented Jan 11, 2021

we fixed the --log_dir flag, so if you specify an empty log dir it will not create temp files. However, you are right that by default minikube should not create multiple temp files... this should be fixed

@medyagh
Member

medyagh commented Jan 11, 2021

cc @spowelljr

@medyagh
Member

medyagh commented Feb 2, 2021

/assign @bharathkkb

@k8s-ci-robot
Contributor

@medyagh: GitHub didn't allow me to assign the following users: bharathkkb.

Note that only kubernetes members, repo collaborators and people who have commented on this issue/PR can be assigned. Additionally, issues/PRs can only have 10 assignees at the same time.
For more information please see the contributor guide

In response to this:

/assign @bharathkkb

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@bharathkkb
Contributor

/assign bharathkkb

@medyagh
Member

medyagh commented Feb 8, 2021

@bharathkkb are you still working on this? Any updates?

@bharathkkb
Contributor

bharathkkb commented Feb 8, 2021

Hi @medyagh
I was reproducing this today (v1.17.1), both with --log_dir set and without.

Without --log_dir set:

  • it creates new info/warning logfiles in $TMPDIR
  • calling minikube status spawns new log files in $TMPDIR

IIUC, the ask is to prevent new files from being created for each binary command and instead keep appending to the initially created set of logfiles?

@prezha
Contributor

prezha commented Feb 19, 2021

@bharathkkb I think the --log_dir flag works just as you've described - by design:

-log_dir=""
Log files will be written to this directory instead of the default temporary directory.

if all logs should be appended to a single file, we've implemented the --log_file=<path> flag for that (i think shell expansion with e.g. ~ does not work here, so the full path should be given)

example:

❯ minikube start --log_file=/home/prezha/tmp/minikube.log
😄  minikube v1.17.1 on Opensuse-Tumbleweed
✨  Automatically selected the docker driver
❗  docker is currently using the btrfs storage driver, consider switching to overlay2 for better performance
👍  Starting control plane node minikube in cluster minikube
🔥  Creating docker container (CPUs=2, Memory=16000MB) ...
🐳  Preparing Kubernetes v1.20.2 on Docker 20.10.2 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v4
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
❯ minikube status --log_file=/home/prezha/tmp/minikube.log
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
timeToStop: Nonexistent
❯ minikube status --log_file=/home/prezha/tmp/minikube.log
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
timeToStop: Nonexistent
❯ cat /home/prezha/tmp/minikube.log
Log file created at: 2021/02/19 03:26:32
Running on machine: codex
Binary: Built with gc go1.16 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0219 03:26:32.843834  543369 out.go:229] Setting OutFile to fd 1 ...
...
I0219 03:27:24.110049  543369 start.go:207] waiting for startup goroutines ...
I0219 03:27:24.140253  543369 start.go:455] kubectl: 1.20.4, cluster: 1.20.2 (minor skew: 0)
I0219 03:27:24.143240  543369 out.go:119] 🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
Log file created at: 2021/02/19 03:31:25
Running on machine: codex
Binary: Built with gc go1.16 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
Log file created at: 2021/02/19 03:31:39
Running on machine: codex
Binary: Built with gc go1.16 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg

also, you can use something like:

❯ minikube status --log_file=/home/prezha/tmp/minikube-`date +"%Y-%m-%d"`.log
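Until built-in retention lands, something like the following hedged sketch could prune old glog files from the temp directory; the minikube.* glob and seven-day cutoff are my assumptions, not project policy:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"time"
)

func main() {
	cutoff := time.Now().AddDate(0, 0, -7) // retention window: 7 days (assumption)

	// glog names its files minikube.<host>.<user>.log.<SEV>.<stamp>.<pid>,
	// so a minikube.* glob under the temp dir catches them.
	matches, err := filepath.Glob(filepath.Join(os.TempDir(), "minikube.*"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, path := range matches {
		info, err := os.Stat(path)
		if err != nil || info.IsDir() || !info.ModTime().Before(cutoff) {
			continue
		}
		if err := os.Remove(path); err == nil {
			fmt.Println("pruned", path)
		}
	}
}
```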
