passing --memory=8g --driver virtualbox does not change memory for vm #9254

Closed
woodcockjosh opened this issue Sep 15, 2020 · 3 comments
Labels
kind/support: Categorizes issue or PR as a support question.
triage/needs-information: Indicates an issue needs more information in order to work on it.

Comments

@woodcockjosh (Contributor)

Steps to reproduce the issue:

  1. minikube start -p my-cluster --memory 8g --driver virtualbox
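
A quick way to confirm what the VM actually received (an aside, assuming VBoxManage is on the PATH and the VirtualBox VM is named after the profile, as minikube does by default):

    # Memory VirtualBox actually allocated to the VM
    VBoxManage showvminfo my-cluster | grep -i "memory size"

    # Or check from inside the guest
    minikube ssh -p my-cluster "free -m"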

Full output of failed command:

🔥  Creating virtualbox VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...

Full output of minikube start command used, if not already included:

Spacees-MacBook-Pro:k8s josh$ minikube start -p my-cluster --memory=16gb
😄  [my-cluster] minikube v1.12.3 on Darwin 10.15.6
✨  Using the virtualbox driver based on user configuration
👍  Starting control plane node my-cluster in cluster my-cluster
🔥  Creating virtualbox VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.16.6 on Docker 19.03.12 ...
🔎  Verifying Kubernetes components...
🌟  Enabled addons: default-storageclass, storage-provisioner
🏄  Done! kubectl is now configured to use "my-cluster"
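
Likely relevant here, though this is an assumption rather than something confirmed above: minikube applies --memory only when it first creates the VM, so if the my-cluster profile already exists, a later start reuses the stored 6000MB configuration and the flag has no effect. A sketch of the usual workaround:

    # Recreate the profile so --memory is honored at VM creation time
    minikube delete -p my-cluster
    minikube start -p my-cluster --memory 8g --driver virtualbox

    # Alternative: resize the existing VirtualBox VM while it is stopped
    minikube stop -p my-cluster
    VBoxManage modifyvm my-cluster --memory 8192
    minikube start -p my-cluster --driver virtualbox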

Optional: Full output of minikube logs command:

==> Docker <==
-- Logs begin at Tue 2020-09-15 11:39:53 UTC, end at Tue 2020-09-15 11:44:01 UTC. --
Sep 15 11:40:06 my-cluster dockerd[2393]: time="2020-09-15T11:40:06.748034681Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
Sep 15 11:40:06 my-cluster dockerd[2393]: time="2020-09-15T11:40:06.748100440Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
Sep 15 11:40:06 my-cluster dockerd[2393]: time="2020-09-15T11:40:06.748162792Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
Sep 15 11:40:06 my-cluster dockerd[2393]: time="2020-09-15T11:40:06.748426192Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
Sep 15 11:40:06 my-cluster dockerd[2393]: time="2020-09-15T11:40:06.748448606Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
Sep 15 11:40:06 my-cluster dockerd[2393]: time="2020-09-15T11:40:06.748505247Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
Sep 15 11:40:06 my-cluster dockerd[2393]: time="2020-09-15T11:40:06.748514790Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
Sep 15 11:40:06 my-cluster dockerd[2393]: time="2020-09-15T11:40:06.748521010Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
Sep 15 11:40:06 my-cluster dockerd[2393]: time="2020-09-15T11:40:06.748526935Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
Sep 15 11:40:06 my-cluster dockerd[2393]: time="2020-09-15T11:40:06.748532506Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
Sep 15 11:40:06 my-cluster dockerd[2393]: time="2020-09-15T11:40:06.748538679Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
Sep 15 11:40:06 my-cluster dockerd[2393]: time="2020-09-15T11:40:06.748544393Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
Sep 15 11:40:06 my-cluster dockerd[2393]: time="2020-09-15T11:40:06.748550072Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
Sep 15 11:40:06 my-cluster dockerd[2393]: time="2020-09-15T11:40:06.748557265Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
Sep 15 11:40:06 my-cluster dockerd[2393]: time="2020-09-15T11:40:06.748585192Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
Sep 15 11:40:06 my-cluster dockerd[2393]: time="2020-09-15T11:40:06.748592789Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
Sep 15 11:40:06 my-cluster dockerd[2393]: time="2020-09-15T11:40:06.748600137Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
Sep 15 11:40:06 my-cluster dockerd[2393]: time="2020-09-15T11:40:06.748606741Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
Sep 15 11:40:06 my-cluster dockerd[2393]: time="2020-09-15T11:40:06.748666181Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock"
Sep 15 11:40:06 my-cluster dockerd[2393]: time="2020-09-15T11:40:06.748698960Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock"
Sep 15 11:40:06 my-cluster dockerd[2393]: time="2020-09-15T11:40:06.748705326Z" level=info msg="containerd successfully booted in 0.005297s"
Sep 15 11:40:06 my-cluster dockerd[2393]: time="2020-09-15T11:40:06.755113587Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Sep 15 11:40:06 my-cluster dockerd[2393]: time="2020-09-15T11:40:06.755134278Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Sep 15 11:40:06 my-cluster dockerd[2393]: time="2020-09-15T11:40:06.755146020Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 }] }" module=grpc
Sep 15 11:40:06 my-cluster dockerd[2393]: time="2020-09-15T11:40:06.755152738Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Sep 15 11:40:06 my-cluster dockerd[2393]: time="2020-09-15T11:40:06.755825885Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Sep 15 11:40:06 my-cluster dockerd[2393]: time="2020-09-15T11:40:06.755845685Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Sep 15 11:40:06 my-cluster dockerd[2393]: time="2020-09-15T11:40:06.755857107Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 }] }" module=grpc
Sep 15 11:40:06 my-cluster dockerd[2393]: time="2020-09-15T11:40:06.755864011Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Sep 15 11:40:06 my-cluster dockerd[2393]: time="2020-09-15T11:40:06.773557864Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Sep 15 11:40:06 my-cluster dockerd[2393]: time="2020-09-15T11:40:06.773579429Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Sep 15 11:40:06 my-cluster dockerd[2393]: time="2020-09-15T11:40:06.773585950Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
Sep 15 11:40:06 my-cluster dockerd[2393]: time="2020-09-15T11:40:06.773589936Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
Sep 15 11:40:06 my-cluster dockerd[2393]: time="2020-09-15T11:40:06.773593649Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
Sep 15 11:40:06 my-cluster dockerd[2393]: time="2020-09-15T11:40:06.773597195Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
Sep 15 11:40:06 my-cluster dockerd[2393]: time="2020-09-15T11:40:06.773750888Z" level=info msg="Loading containers: start."
Sep 15 11:40:06 my-cluster dockerd[2393]: time="2020-09-15T11:40:06.820522319Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Sep 15 11:40:06 my-cluster dockerd[2393]: time="2020-09-15T11:40:06.853561145Z" level=info msg="Loading containers: done."
Sep 15 11:40:06 my-cluster dockerd[2393]: time="2020-09-15T11:40:06.866026536Z" level=info msg="Docker daemon" commit=48a66213fe graphdriver(s)=overlay2 version=19.03.12
Sep 15 11:40:06 my-cluster dockerd[2393]: time="2020-09-15T11:40:06.871002234Z" level=info msg="Daemon has completed initialization"
Sep 15 11:40:06 my-cluster dockerd[2393]: time="2020-09-15T11:40:06.882919881Z" level=info msg="API listen on /var/run/docker.sock"
Sep 15 11:40:06 my-cluster dockerd[2393]: time="2020-09-15T11:40:06.882966315Z" level=info msg="API listen on [::]:2376"
Sep 15 11:40:06 my-cluster systemd[1]: Started Docker Application Container Engine.
Sep 15 11:40:36 my-cluster dockerd[2393]: time="2020-09-15T11:40:36.239615549Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/4e50dd45488ef1ab58a916461996fcdd0ff6679daf3b2a22fbf2b6dcbe174733/shim.sock" debug=false pid=3695
Sep 15 11:40:36 my-cluster dockerd[2393]: time="2020-09-15T11:40:36.240552242Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/79463fad5715ef3bc483310c2d0e0fd6f9d16f060e2d64501dd6f9b55ccf0fe2/shim.sock" debug=false pid=3700
Sep 15 11:40:36 my-cluster dockerd[2393]: time="2020-09-15T11:40:36.240731987Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/1fa499eeeeadc85b3dc30dbce4db17e24480168116209cb793fee550ca5d8d9a/shim.sock" debug=false pid=3696
Sep 15 11:40:36 my-cluster dockerd[2393]: time="2020-09-15T11:40:36.243484770Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/27d6b92ef63c36cec8f7fe7e209152c6aa0a6e09170d774fbd7dd2d855264d48/shim.sock" debug=false pid=3719
Sep 15 11:40:36 my-cluster dockerd[2393]: time="2020-09-15T11:40:36.435717294Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c2281b58d19efd5a96efa05ba2642e4842b857c6187715a99aafb921143660a7/shim.sock" debug=false pid=3840
Sep 15 11:40:36 my-cluster dockerd[2393]: time="2020-09-15T11:40:36.518909207Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/4cffca0b8f2730e208adb3fd1f1666644fa650022b2f9417dad47c4561039aa0/shim.sock" debug=false pid=3874
Sep 15 11:40:36 my-cluster dockerd[2393]: time="2020-09-15T11:40:36.520290459Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/74707db23bd95cd881d7f8fcaddc9e35bf3ba2c1f31d94df49eb02f59b5c1779/shim.sock" debug=false pid=3875
Sep 15 11:40:36 my-cluster dockerd[2393]: time="2020-09-15T11:40:36.527961444Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b0131d3d02f76d7226a922cee699b7a225c8d2040800edb5fb39305b34973687/shim.sock" debug=false pid=3893
Sep 15 11:40:50 my-cluster dockerd[2393]: time="2020-09-15T11:40:50.019666044Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ec26dc1ce55043ede7943b37664b5038e10330eb6fdc379015a7d31065e240dc/shim.sock" debug=false pid=4291
Sep 15 11:40:50 my-cluster dockerd[2393]: time="2020-09-15T11:40:50.058769105Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/4d43bf348abfe8b2b77f1ce0f427eafedf9def82e40e90b74dc728a24069765c/shim.sock" debug=false pid=4305
Sep 15 11:40:50 my-cluster dockerd[2393]: time="2020-09-15T11:40:50.087751187Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/df6acc1c47ed6081ae2149ca59536424ebd4d45da0b75fb663fd45e875601525/shim.sock" debug=false pid=4320
Sep 15 11:40:50 my-cluster dockerd[2393]: time="2020-09-15T11:40:50.262022055Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/5d70a2da1b6e45c01636bd55414e8677682bf93863d7284713f30824e581ed84/shim.sock" debug=false pid=4407
Sep 15 11:40:50 my-cluster dockerd[2393]: time="2020-09-15T11:40:50.320701818Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ddc0c0687de9a5d22ab35ade03ee4169de0c2d6b39c928d023079cc56b29f447/shim.sock" debug=false pid=4435
Sep 15 11:40:50 my-cluster dockerd[2393]: time="2020-09-15T11:40:50.429547079Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/728b9615bf80abc69b2ea802561666a003d5ad951c0bc4bfbf3dd85ec7e7dc10/shim.sock" debug=false pid=4501
Sep 15 11:41:20 my-cluster dockerd[2393]: time="2020-09-15T11:41:20.485335233Z" level=info msg="shim reaped" id=5d70a2da1b6e45c01636bd55414e8677682bf93863d7284713f30824e581ed84
Sep 15 11:41:20 my-cluster dockerd[2393]: time="2020-09-15T11:41:20.495507640Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 15 11:41:20 my-cluster dockerd[2393]: time="2020-09-15T11:41:20.759256327Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/70b7e36e263cb24aa3244ab9a2608566a166db56864129716b4b844af580276f/shim.sock" debug=false pid=4744

==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
70b7e36e263cb ac5e2ed5acc50 2 minutes ago Running storage-provisioner 1 ec26dc1ce5504
728b9615bf80a bf261d1579144 3 minutes ago Running coredns 0 df6acc1c47ed6
ddc0c0687de9a 284f0d3c94206 3 minutes ago Running kube-proxy 0 4d43bf348abfe
5d70a2da1b6e4 ac5e2ed5acc50 3 minutes ago Exited storage-provisioner 0 ec26dc1ce5504
b0131d3d02f76 cd48205a40f00 3 minutes ago Running kube-controller-manager 0 1fa499eeeeadc
4cffca0b8f273 b2756210eeabf 3 minutes ago Running etcd 0 27d6b92ef63c3
74707db23bd95 5732fe50f6f52 3 minutes ago Running kube-apiserver 0 4e50dd45488ef
c2281b58d19ef 6bed756ced733 3 minutes ago Running kube-scheduler 0 79463fad5715e

==> coredns [728b9615bf80] <==
.:53
2020-09-15T11:40:55.548Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
2020-09-15T11:40:55.548Z [INFO] CoreDNS-1.6.2
2020-09-15T11:40:55.548Z [INFO] linux/amd64, go1.12.8, 795a3eb
CoreDNS-1.6.2
linux/amd64, go1.12.8, 795a3eb
2020-09-15T11:40:56.951Z [INFO] plugin/ready: Still waiting on: "kubernetes"
2020-09-15T11:41:06.951Z [INFO] plugin/ready: Still waiting on: "kubernetes"
2020-09-15T11:41:16.951Z [INFO] plugin/ready: Still waiting on: "kubernetes"
I0915 11:41:20.549032 1 trace.go:82] Trace[240545644]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2020-09-15 11:40:50.54815617 +0000 UTC m=+0.016585572) (total time: 30.000854001s):
Trace[240545644]: [30.000854001s] [30.000854001s] END
E0915 11:41:20.549050 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
I0915 11:41:20.549219 1 trace.go:82] Trace[187105362]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2020-09-15 11:40:50.547954289 +0000 UTC m=+0.016383681) (total time: 30.001255402s):
Trace[187105362]: [30.001255402s] [30.001255402s] END
E0915 11:41:20.549225 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
I0915 11:41:20.549256 1 trace.go:82] Trace[1143163359]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2020-09-15 11:40:50.547555436 +0000 UTC m=+0.015984828) (total time: 30.001682051s):
Trace[1143163359]: [30.001682051s] [30.001682051s] END
E0915 11:41:20.549268 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout

==> describe nodes <==
Name: my-cluster
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=my-cluster
kubernetes.io/os=linux
minikube.k8s.io/commit=2243b4b97c131e3244c5f014faedca0d846599f5
minikube.k8s.io/name=my-cluster
minikube.k8s.io/updated_at=2020_09_15T06_40_44_0700
minikube.k8s.io/version=v1.12.3
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Tue, 15 Sep 2020 11:40:40 +0000
Taints: <none>
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Tue, 15 Sep 2020 11:43:41 +0000 Tue, 15 Sep 2020 11:40:36 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Tue, 15 Sep 2020 11:43:41 +0000 Tue, 15 Sep 2020 11:40:36 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Tue, 15 Sep 2020 11:43:41 +0000 Tue, 15 Sep 2020 11:40:36 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Tue, 15 Sep 2020 11:43:41 +0000 Tue, 15 Sep 2020 11:40:36 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.99.103
Hostname: my-cluster
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 5952056Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 5952056Ki
pods: 110
System Info:
Machine ID: eef140315bc143428eba1dbe8798272d
System UUID: b8547f89-1ec9-de49-af03-1b8eb1bb39db
Boot ID: 3399abfb-2b97-41cb-b73b-338dc2d3b380
Kernel Version: 4.19.114
OS Image: Buildroot 2019.02.11
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://19.3.12
Kubelet Version: v1.16.6
Kube-Proxy Version: v1.16.6
Non-terminated Pods: (7 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system coredns-5644d7b6d9-882h8 100m (5%) 0 (0%) 70Mi (1%) 170Mi (2%) 3m13s
kube-system etcd-my-cluster 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m22s
kube-system kube-apiserver-my-cluster 250m (12%) 0 (0%) 0 (0%) 0 (0%) 2m23s
kube-system kube-controller-manager-my-cluster 200m (10%) 0 (0%) 0 (0%) 0 (0%) 2m22s
kube-system kube-proxy-jbmx2 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m13s
kube-system kube-scheduler-my-cluster 100m (5%) 0 (0%) 0 (0%) 0 (0%) 2m21s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m18s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 650m (32%) 0 (0%)
memory 70Mi (1%) 170Mi (2%)
ephemeral-storage 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal NodeHasSufficientMemory 3m27s (x8 over 3m27s) kubelet, my-cluster Node my-cluster status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 3m27s (x8 over 3m27s) kubelet, my-cluster Node my-cluster status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 3m27s (x7 over 3m27s) kubelet, my-cluster Node my-cluster status is now: NodeHasSufficientPID
Normal Starting 3m12s kube-proxy, my-cluster Starting kube-proxy.
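
The Capacity/Allocatable block above is the telling part: converting the reported kibibytes shows the node sees the old 6000MB allocation, not the requested 8g. A quick sanity check (an aside, not from the original logs):

    # memory: 5952056Ki from the node capacity above
    echo $((5952056 / 1024))   # 5812 MiB, i.e. about 5.7 GiB
    # That is consistent with a 6000 MB VM minus kernel reservations;
    # an 8 GB VM would report roughly 8000000Ki.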

==> dmesg <==
[ +0.000064] 00:00:00.002558 main 5.2.42 r137960 started. Verbose level = 0
[ +0.422367] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
[ +4.608438] hpet1: lost 286 rtc interrupts
[Sep15 11:40] hpet1: lost 319 rtc interrupts
[ +3.449943] systemd-fstab-generator[2373]: Ignoring "noauto" for root device
[ +0.065259] systemd-fstab-generator[2383]: Ignoring "noauto" for root device
[ +0.991276] systemd-fstab-generator[2567]: Ignoring "noauto" for root device
[ +0.494939] hpet_rtc_timer_reinit: 66 callbacks suppressed
[ +0.000009] hpet1: lost 318 rtc interrupts
[ +5.001770] hpet1: lost 318 rtc interrupts
[ +5.000745] hpet1: lost 318 rtc interrupts
[ +4.999975] hpet1: lost 318 rtc interrupts
[ +3.577921] systemd-fstab-generator[3020]: Ignoring "noauto" for root device
[ +1.000549] systemd-fstab-generator[3224]: Ignoring "noauto" for root device
[ +0.422334] hpet1: lost 318 rtc interrupts
[ +5.001242] hpet1: lost 318 rtc interrupts
[ +5.001362] hpet_rtc_timer_reinit: 24 callbacks suppressed
[ +0.000009] hpet1: lost 318 rtc interrupts
[ +5.000995] hpet1: lost 318 rtc interrupts
[ +5.000707] hpet1: lost 318 rtc interrupts
[ +10.002072] hpet_rtc_timer_reinit: 34 callbacks suppressed
[ +0.000010] hpet1: lost 318 rtc interrupts
[Sep15 11:41] hpet1: lost 318 rtc interrupts
[ +5.001249] hpet1: lost 318 rtc interrupts
[ +5.000375] hpet1: lost 318 rtc interrupts
[ +5.002064] hpet1: lost 319 rtc interrupts
[ +8.828591] kauditd_printk_skb: 4 callbacks suppressed
[ +1.173510] hpet1: lost 318 rtc interrupts
[ +5.000403] hpet1: lost 318 rtc interrupts
[ +5.002117] hpet1: lost 318 rtc interrupts
[ +5.000466] hpet1: lost 318 rtc interrupts
[ +5.001932] hpet1: lost 318 rtc interrupts
[ +5.000411] hpet1: lost 318 rtc interrupts
[ +4.272595] NFSD: Unable to end grace period: -110
[ +0.729112] hpet1: lost 318 rtc interrupts
[Sep15 11:42] hpet1: lost 318 rtc interrupts
[ +5.001100] hpet1: lost 318 rtc interrupts
[ +5.001164] hpet1: lost 318 rtc interrupts
[ +5.001057] hpet1: lost 318 rtc interrupts
[ +5.001634] hpet1: lost 319 rtc interrupts
[ +5.001793] hpet1: lost 318 rtc interrupts
[ +5.000431] hpet1: lost 318 rtc interrupts
[ +5.001279] hpet1: lost 318 rtc interrupts
[ +5.000962] hpet1: lost 318 rtc interrupts
[ +5.001525] hpet1: lost 318 rtc interrupts
[ +5.000492] hpet1: lost 318 rtc interrupts
[ +5.000656] hpet1: lost 318 rtc interrupts
[Sep15 11:43] hpet1: lost 318 rtc interrupts
[ +5.001366] hpet1: lost 318 rtc interrupts
[ +5.001901] hpet1: lost 318 rtc interrupts
[ +5.001179] hpet1: lost 318 rtc interrupts
[ +5.000750] hpet1: lost 318 rtc interrupts
[ +5.001570] hpet1: lost 319 rtc interrupts
[ +5.002095] hpet1: lost 318 rtc interrupts
[ +5.000633] hpet1: lost 319 rtc interrupts
[ +5.001608] hpet1: lost 318 rtc interrupts
[ +5.001417] hpet1: lost 318 rtc interrupts
[ +5.000525] hpet1: lost 318 rtc interrupts
[ +5.001720] hpet1: lost 318 rtc interrupts
[Sep15 11:44] hpet1: lost 318 rtc interrupts

==> etcd [4cffca0b8f27] <==
2020-09-15 11:40:37.140828 I | etcdmain: etcd Version: 3.3.15
2020-09-15 11:40:37.140870 I | etcdmain: Git SHA: 94745a4ee
2020-09-15 11:40:37.140873 I | etcdmain: Go Version: go1.12.9
2020-09-15 11:40:37.140875 I | etcdmain: Go OS/Arch: linux/amd64
2020-09-15 11:40:37.140878 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2
2020-09-15 11:40:37.141045 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
2020-09-15 11:40:37.141765 I | embed: listening for peers on https://192.168.99.103:2380
2020-09-15 11:40:37.141799 I | embed: listening for client requests on 127.0.0.1:2379
2020-09-15 11:40:37.141812 I | embed: listening for client requests on 192.168.99.103:2379
2020-09-15 11:40:37.143397 I | etcdserver: name = my-cluster
2020-09-15 11:40:37.143438 I | etcdserver: data dir = /var/lib/minikube/etcd
2020-09-15 11:40:37.143442 I | etcdserver: member dir = /var/lib/minikube/etcd/member
2020-09-15 11:40:37.143447 I | etcdserver: heartbeat = 100ms
2020-09-15 11:40:37.143449 I | etcdserver: election = 1000ms
2020-09-15 11:40:37.143451 I | etcdserver: snapshot count = 10000
2020-09-15 11:40:37.143456 I | etcdserver: advertise client URLs = https://192.168.99.103:2379
2020-09-15 11:40:37.143458 I | etcdserver: initial advertise peer URLs = https://192.168.99.103:2380
2020-09-15 11:40:37.143462 I | etcdserver: initial cluster = my-cluster=https://192.168.99.103:2380
2020-09-15 11:40:37.242410 I | etcdserver: starting member bd07d860304161f8 in cluster 6ffb9f662ec1e33b
2020-09-15 11:40:37.242427 I | raft: bd07d860304161f8 became follower at term 0
2020-09-15 11:40:37.279287 I | raft: newRaft bd07d860304161f8 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
2020-09-15 11:40:37.288518 I | raft: bd07d860304161f8 became follower at term 1
2020-09-15 11:40:37.522454 W | auth: simple token is not cryptographically signed
2020-09-15 11:40:37.648871 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
2020-09-15 11:40:37.652181 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
2020-09-15 11:40:37.652596 I | embed: listening for metrics on http://192.168.99.103:2381
2020-09-15 11:40:37.653061 I | etcdserver: bd07d860304161f8 as single-node; fast-forwarding 9 ticks (election ticks 10)
2020-09-15 11:40:37.653098 W | raft: bd07d860304161f8 cannot campaign at term 1 since there are still 1 pending configuration changes to apply
2020-09-15 11:40:37.654164 I | embed: listening for metrics on http://127.0.0.1:2381
2020-09-15 11:40:37.654640 I | etcdserver/membership: added member bd07d860304161f8 [https://192.168.99.103:2380] to cluster 6ffb9f662ec1e33b
2020-09-15 11:40:38.589183 I | raft: bd07d860304161f8 is starting a new election at term 1
2020-09-15 11:40:38.589279 I | raft: bd07d860304161f8 became candidate at term 2
2020-09-15 11:40:38.589293 I | raft: bd07d860304161f8 received MsgVoteResp from bd07d860304161f8 at term 2
2020-09-15 11:40:38.589304 I | raft: bd07d860304161f8 became leader at term 2
2020-09-15 11:40:38.589310 I | raft: raft.node: bd07d860304161f8 elected leader bd07d860304161f8 at term 2
2020-09-15 11:40:38.589739 I | etcdserver: published {Name:my-cluster ClientURLs:[https://192.168.99.103:2379]} to cluster 6ffb9f662ec1e33b
2020-09-15 11:40:38.589916 I | embed: ready to serve client requests
2020-09-15 11:40:38.591113 I | embed: serving client requests on 192.168.99.103:2379
2020-09-15 11:40:38.591189 I | etcdserver: setting up the initial cluster version to 3.3
2020-09-15 11:40:38.591424 I | embed: ready to serve client requests
2020-09-15 11:40:38.591638 N | etcdserver/membership: set the initial cluster version to 3.3
2020-09-15 11:40:38.591678 I | etcdserver/api: enabled capabilities for version 3.3
2020-09-15 11:40:38.592343 I | embed: serving client requests on 127.0.0.1:2379

==> kernel <==
11:44:04 up 4 min, 0 users, load average: 0.27, 0.50, 0.26
Linux my-cluster 4.19.114 #1 SMP Mon Aug 3 12:35:22 PDT 2020 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2019.02.11"

==> kube-apiserver [74707db23bd9] <==
I0915 11:40:39.077571 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0915 11:40:39.092407 1 client.go:357] parsed scheme: "endpoint"
I0915 11:40:39.092623 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0915 11:40:39.104757 1 client.go:357] parsed scheme: "endpoint"
I0915 11:40:39.104866 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0915 11:40:39.112499 1 client.go:357] parsed scheme: "endpoint"
I0915 11:40:39.112585 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0915 11:40:39.119883 1 client.go:357] parsed scheme: "endpoint"
I0915 11:40:39.120012 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
W0915 11:40:39.210148 1 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
W0915 11:40:39.222540 1 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0915 11:40:39.234616 1 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0915 11:40:39.236801 1 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0915 11:40:39.244106 1 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0915 11:40:39.256537 1 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W0915 11:40:39.256559 1 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I0915 11:40:39.263135 1 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.
I0915 11:40:39.263142 1 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.
I0915 11:40:39.264459 1 client.go:357] parsed scheme: "endpoint"
I0915 11:40:39.264483 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0915 11:40:39.270344 1 client.go:357] parsed scheme: "endpoint"
I0915 11:40:39.270410 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0915 11:40:40.798141 1 secure_serving.go:123] Serving securely on [::]:8443
I0915 11:40:40.799822 1 crd_finalizer.go:274] Starting CRDFinalizer
I0915 11:40:40.799961 1 autoregister_controller.go:140] Starting autoregister controller
I0915 11:40:40.799976 1 cache.go:32] Waiting for caches to sync for autoregister controller
I0915 11:40:40.799989 1 controller.go:81] Starting OpenAPI AggregationController
I0915 11:40:40.803864 1 controller.go:85] Starting OpenAPI controller
I0915 11:40:40.803950 1 customresource_discovery_controller.go:208] Starting DiscoveryController
I0915 11:40:40.803963 1 naming_controller.go:288] Starting NamingConditionController
I0915 11:40:40.803971 1 establishing_controller.go:73] Starting EstablishingController
I0915 11:40:40.803978 1 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
I0915 11:40:40.803986 1 apiapproval_controller.go:185] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0915 11:40:40.803995 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0915 11:40:40.803999 1 shared_informer.go:197] Waiting for caches to sync for crd-autoregister
I0915 11:40:40.805544 1 apiservice_controller.go:94] Starting APIServiceRegistrationController
I0915 11:40:40.805629 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0915 11:40:40.805731 1 available_controller.go:383] Starting AvailableConditionController
I0915 11:40:40.805767 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
E0915 11:40:40.867110 1 controller.go:154] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.99.103, ResourceVersion: 0, AdditionalErrorMsg:
I0915 11:40:40.900041 1 cache.go:39] Caches are synced for autoregister controller
I0915 11:40:40.905454 1 shared_informer.go:204] Caches are synced for crd-autoregister
I0915 11:40:40.905874 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0915 11:40:40.906006 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0915 11:40:41.798428 1 controller.go:107] OpenAPI AggregationController: Processing item
I0915 11:40:41.798687 1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0915 11:40:41.798960 1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0915 11:40:41.804000 1 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0915 11:40:41.809580 1 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0915 11:40:41.809633 1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0915 11:40:42.048746 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0915 11:40:42.070619 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0915 11:40:42.136410 1 lease.go:222] Resetting endpoints for master service "kubernetes" to [192.168.99.103]
I0915 11:40:42.136883 1 controller.go:606] quota admission added evaluator for: endpoints
I0915 11:40:43.095961 1 controller.go:606] quota admission added evaluator for: serviceaccounts
I0915 11:40:43.660608 1 controller.go:606] quota admission added evaluator for: deployments.apps
I0915 11:40:44.005661 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I0915 11:40:44.161416 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0915 11:40:49.641327 1 controller.go:606] quota admission added evaluator for: replicasets.apps
I0915 11:40:49.654949 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps

==> kube-controller-manager [b0131d3d02f7] <==
I0915 11:40:48.635097 1 controllermanager.go:534] Started "replicationcontroller"
I0915 11:40:48.635340 1 replica_set.go:182] Starting replicationcontroller controller
I0915 11:40:48.635450 1 shared_informer.go:197] Waiting for caches to sync for ReplicationController
I0915 11:40:48.885551 1 controllermanager.go:534] Started "job"
I0915 11:40:48.885886 1 job_controller.go:143] Starting job controller
I0915 11:40:48.886007 1 shared_informer.go:197] Waiting for caches to sync for job
I0915 11:40:49.036313 1 controllermanager.go:534] Started "csrsigning"
W0915 11:40:49.036425 1 controllermanager.go:513] "endpointslice" is disabled
I0915 11:40:49.036352 1 certificate_controller.go:113] Starting certificate controller
I0915 11:40:49.036545 1 shared_informer.go:197] Waiting for caches to sync for certificate
I0915 11:40:49.293561 1 controllermanager.go:534] Started "namespace"
I0915 11:40:49.293801 1 namespace_controller.go:186] Starting namespace controller
I0915 11:40:49.293980 1 shared_informer.go:197] Waiting for caches to sync for namespace
I0915 11:40:49.537018 1 controllermanager.go:534] Started "daemonset"
I0915 11:40:49.537150 1 core.go:211] Will not configure cloud provider routes for allocate-node-cidrs: false, configure-cloud-routes: true.
W0915 11:40:49.537348 1 controllermanager.go:526] Skipping "route"
I0915 11:40:49.537783 1 shared_informer.go:197] Waiting for caches to sync for resource quota
I0915 11:40:49.537948 1 daemon_controller.go:267] Starting daemon sets controller
I0915 11:40:49.538089 1 shared_informer.go:197] Waiting for caches to sync for daemon sets
I0915 11:40:49.548094 1 shared_informer.go:197] Waiting for caches to sync for garbage collector
W0915 11:40:49.559832 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="my-cluster" does not exist
I0915 11:40:49.561988 1 shared_informer.go:204] Caches are synced for HPA
I0915 11:40:49.585499 1 shared_informer.go:204] Caches are synced for certificate
I0915 11:40:49.585599 1 shared_informer.go:204] Caches are synced for GC
I0915 11:40:49.585913 1 shared_informer.go:204] Caches are synced for PVC protection
I0915 11:40:49.585924 1 shared_informer.go:204] Caches are synced for ClusterRoleAggregator
I0915 11:40:49.585993 1 shared_informer.go:204] Caches are synced for service account
I0915 11:40:49.586000 1 shared_informer.go:204] Caches are synced for taint
I0915 11:40:49.590512 1 node_lifecycle_controller.go:1208] Initializing eviction metric for zone:
W0915 11:40:49.590682 1 node_lifecycle_controller.go:903] Missing timestamp for Node my-cluster. Assuming now as a timestamp.
I0915 11:40:49.590805 1 node_lifecycle_controller.go:1108] Controller detected that zone is now in state Normal.
I0915 11:40:49.590982 1 event.go:274] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"my-cluster", UID:"3941175c-a965-4de3-bcd4-59f7eb25742d", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node my-cluster event: Registered Node my-cluster in Controller
I0915 11:40:49.591416 1 taint_manager.go:186] Starting NoExecuteTaintManager
I0915 11:40:49.599598 1 shared_informer.go:204] Caches are synced for namespace
I0915 11:40:49.604049 1 shared_informer.go:204] Caches are synced for ReplicaSet
I0915 11:40:49.635416 1 shared_informer.go:204] Caches are synced for deployment
I0915 11:40:49.635652 1 shared_informer.go:204] Caches are synced for ReplicationController
I0915 11:40:49.635748 1 shared_informer.go:204] Caches are synced for stateful set
I0915 11:40:49.636208 1 shared_informer.go:204] Caches are synced for TTL
I0915 11:40:49.636903 1 shared_informer.go:204] Caches are synced for certificate
I0915 11:40:49.647875 1 shared_informer.go:204] Caches are synced for bootstrap_signer
I0915 11:40:49.648227 1 shared_informer.go:204] Caches are synced for daemon sets
I0915 11:40:49.653588 1 log.go:172] [INFO] signed certificate with serial number 95765727313505264710504695642140317020828184822
E0915 11:40:49.670151 1 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
I0915 11:40:49.670663 1 event.go:274] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"7895388d-56cf-4a72-88de-93fe6cdea487", APIVersion:"apps/v1", ResourceVersion:"220", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-5644d7b6d9 to 1
I0915 11:40:49.683440 1 event.go:274] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"b96bd66c-778d-4067-8212-488276ddb5b0", APIVersion:"apps/v1", ResourceVersion:"209", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-jbmx2
I0915 11:40:49.697247 1 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-5644d7b6d9", UID:"9458c351-348a-4f7d-b6e2-5f8672692717", APIVersion:"apps/v1", ResourceVersion:"316", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-5644d7b6d9-882h8
I0915 11:40:49.886317 1 shared_informer.go:204] Caches are synced for job
I0915 11:40:49.937010 1 shared_informer.go:204] Caches are synced for expand
I0915 11:40:49.937103 1 shared_informer.go:204] Caches are synced for PV protection
I0915 11:40:49.937314 1 shared_informer.go:204] Caches are synced for attach detach
I0915 11:40:49.987476 1 shared_informer.go:204] Caches are synced for persistent volume
I0915 11:40:50.035717 1 shared_informer.go:204] Caches are synced for endpoint
I0915 11:40:50.038426 1 shared_informer.go:204] Caches are synced for resource quota
I0915 11:40:50.038600 1 shared_informer.go:204] Caches are synced for resource quota
I0915 11:40:50.057516 1 shared_informer.go:204] Caches are synced for disruption
I0915 11:40:50.057528 1 disruption.go:338] Sending events to api server.
I0915 11:40:50.140259 1 shared_informer.go:204] Caches are synced for garbage collector
I0915 11:40:50.140271 1 garbagecollector.go:139] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0915 11:40:50.148408 1 shared_informer.go:204] Caches are synced for garbage collector

==> kube-proxy [ddc0c0687de9] <==
W0915 11:40:50.557718 1 server_others.go:330] Flag proxy-mode="" unknown, assuming iptables proxy
I0915 11:40:50.562834 1 node.go:135] Successfully retrieved node IP: 192.168.99.103
I0915 11:40:50.562856 1 server_others.go:150] Using iptables Proxier.
W0915 11:40:50.562905 1 proxier.go:282] clusterCIDR not specified, unable to distinguish between internal and external traffic
I0915 11:40:50.563071 1 server.go:529] Version: v1.16.6
I0915 11:40:50.563326 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I0915 11:40:50.563339 1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0915 11:40:50.563367 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0915 11:40:50.563388 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0915 11:40:50.563757 1 config.go:313] Starting service config controller
I0915 11:40:50.563772 1 shared_informer.go:197] Waiting for caches to sync for service config
I0915 11:40:50.564087 1 config.go:131] Starting endpoints config controller
I0915 11:40:50.564104 1 shared_informer.go:197] Waiting for caches to sync for endpoints config
I0915 11:40:50.664247 1 shared_informer.go:204] Caches are synced for endpoints config
I0915 11:40:50.664391 1 shared_informer.go:204] Caches are synced for service config

==> kube-scheduler [c2281b58d19e] <==
I0915 11:40:36.911935 1 serving.go:319] Generated self-signed cert in-memory
W0915 11:40:40.876122 1 authentication.go:262] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0915 11:40:40.876145 1 authentication.go:199] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0915 11:40:40.876152 1 authentication.go:200] Continuing without authentication configuration. This may treat all requests as anonymous.
W0915 11:40:40.876156 1 authentication.go:201] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0915 11:40:40.880587 1 server.go:148] Version: v1.16.6
I0915 11:40:40.884086 1 defaults.go:91] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
W0915 11:40:40.895347 1 authorization.go:47] Authorization is disabled
W0915 11:40:40.895368 1 authentication.go:79] Authentication is disabled
I0915 11:40:40.895376 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I0915 11:40:40.896215 1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
E0915 11:40:40.898790 1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:250: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0915 11:40:40.898847 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0915 11:40:40.899150 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0915 11:40:40.899174 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0915 11:40:40.899519 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0915 11:40:40.899563 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0915 11:40:40.900043 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0915 11:40:40.900111 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0915 11:40:40.900371 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0915 11:40:40.901670 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0915 11:40:40.901714 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0915 11:40:41.900637 1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:250: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0915 11:40:41.902873 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0915 11:40:41.904324 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0915 11:40:41.905519 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0915 11:40:41.906559 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0915 11:40:41.907748 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0915 11:40:41.909846 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0915 11:40:41.911063 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0915 11:40:41.913028 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0915 11:40:41.914368 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0915 11:40:41.915403 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0915 11:40:44.529742 1 factory.go:585] pod is already present in the activeQ

==> kubelet <==
-- Logs begin at Tue 2020-09-15 11:39:53 UTC, end at Tue 2020-09-15 11:44:06 UTC. --
Sep 15 11:40:38 my-cluster kubelet[3257]: E0915 11:40:38.255152 3257 kubelet.go:2267] node "my-cluster" not found
Sep 15 11:40:38 my-cluster kubelet[3257]: E0915 11:40:38.355993 3257 kubelet.go:2267] node "my-cluster" not found
Sep 15 11:40:38 my-cluster kubelet[3257]: E0915 11:40:38.457383 3257 kubelet.go:2267] node "my-cluster" not found
Sep 15 11:40:38 my-cluster kubelet[3257]: E0915 11:40:38.558706 3257 kubelet.go:2267] node "my-cluster" not found
Sep 15 11:40:38 my-cluster kubelet[3257]: E0915 11:40:38.658910 3257 kubelet.go:2267] node "my-cluster" not found
Sep 15 11:40:38 my-cluster kubelet[3257]: E0915 11:40:38.759355 3257 kubelet.go:2267] node "my-cluster" not found
Sep 15 11:40:38 my-cluster kubelet[3257]: E0915 11:40:38.859555 3257 kubelet.go:2267] node "my-cluster" not found
Sep 15 11:40:38 my-cluster kubelet[3257]: E0915 11:40:38.959844 3257 kubelet.go:2267] node "my-cluster" not found
Sep 15 11:40:39 my-cluster kubelet[3257]: E0915 11:40:39.060464 3257 kubelet.go:2267] node "my-cluster" not found
Sep 15 11:40:39 my-cluster kubelet[3257]: E0915 11:40:39.161469 3257 kubelet.go:2267] node "my-cluster" not found
Sep 15 11:40:39 my-cluster kubelet[3257]: E0915 11:40:39.261691 3257 kubelet.go:2267] node "my-cluster" not found
Sep 15 11:40:39 my-cluster kubelet[3257]: E0915 11:40:39.362319 3257 kubelet.go:2267] node "my-cluster" not found
Sep 15 11:40:39 my-cluster kubelet[3257]: E0915 11:40:39.462664 3257 kubelet.go:2267] node "my-cluster" not found
Sep 15 11:40:39 my-cluster kubelet[3257]: E0915 11:40:39.563374 3257 kubelet.go:2267] node "my-cluster" not found
Sep 15 11:40:39 my-cluster kubelet[3257]: E0915 11:40:39.664246 3257 kubelet.go:2267] node "my-cluster" not found
Sep 15 11:40:39 my-cluster kubelet[3257]: E0915 11:40:39.765410 3257 kubelet.go:2267] node "my-cluster" not found
Sep 15 11:40:39 my-cluster kubelet[3257]: E0915 11:40:39.865786 3257 kubelet.go:2267] node "my-cluster" not found
Sep 15 11:40:39 my-cluster kubelet[3257]: E0915 11:40:39.966065 3257 kubelet.go:2267] node "my-cluster" not found
Sep 15 11:40:40 my-cluster kubelet[3257]: E0915 11:40:40.066412 3257 kubelet.go:2267] node "my-cluster" not found
Sep 15 11:40:40 my-cluster kubelet[3257]: E0915 11:40:40.167589 3257 kubelet.go:2267] node "my-cluster" not found
Sep 15 11:40:40 my-cluster kubelet[3257]: E0915 11:40:40.267895 3257 kubelet.go:2267] node "my-cluster" not found
Sep 15 11:40:40 my-cluster kubelet[3257]: E0915 11:40:40.368527 3257 kubelet.go:2267] node "my-cluster" not found
Sep 15 11:40:40 my-cluster kubelet[3257]: E0915 11:40:40.468858 3257 kubelet.go:2267] node "my-cluster" not found
Sep 15 11:40:40 my-cluster kubelet[3257]: E0915 11:40:40.569158 3257 kubelet.go:2267] node "my-cluster" not found
Sep 15 11:40:40 my-cluster kubelet[3257]: E0915 11:40:40.669639 3257 kubelet.go:2267] node "my-cluster" not found
Sep 15 11:40:40 my-cluster kubelet[3257]: E0915 11:40:40.769896 3257 kubelet.go:2267] node "my-cluster" not found
Sep 15 11:40:40 my-cluster kubelet[3257]: E0915 11:40:40.870412 3257 kubelet.go:2267] node "my-cluster" not found
Sep 15 11:40:40 my-cluster kubelet[3257]: E0915 11:40:40.888291 3257 controller.go:220] failed to get node "my-cluster" when trying to set owner ref to the node lease: nodes "my-cluster" not found
Sep 15 11:40:40 my-cluster kubelet[3257]: E0915 11:40:40.956784 3257 controller.go:135] failed to ensure node lease exists, will retry in 3.2s, error: namespaces "kube-node-lease" not found
Sep 15 11:40:40 my-cluster kubelet[3257]: I0915 11:40:40.963491 3257 kubelet_node_status.go:75] Successfully registered node my-cluster
Sep 15 11:40:40 my-cluster kubelet[3257]: I0915 11:40:40.972285 3257 reconciler.go:154] Reconciler: start to sync state
Sep 15 11:40:41 my-cluster kubelet[3257]: E0915 11:40:41.012512 3257 event.go:256] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"my-cluster.1634f22b3c511483", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"my-cluster", UID:"my-cluster", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"my-cluster"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbfd047ecc7ead683, ext:5228955607, loc:(*time.Location)(0x6e0d080)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbfd047ecc7ead683, ext:5228955607, loc:(*time.Location)(0x6e0d080)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Sep 15 11:40:41 my-cluster kubelet[3257]: E0915 11:40:41.065637 3257 event.go:256] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"my-cluster.1634f22b41c58cd9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"my-cluster", UID:"my-cluster", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node my-cluster status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"my-cluster"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbfd047eccd5f4ed9, ext:5320474663, loc:(*time.Location)(0x6e0d080)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbfd047eccd5f4ed9, ext:5320474663, loc:(*time.Location)(0x6e0d080)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Sep 15 11:40:41 my-cluster kubelet[3257]: E0915 11:40:41.118618 3257 event.go:256] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"my-cluster.1634f22b41c56d25", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"my-cluster", UID:"my-cluster", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node my-cluster status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"my-cluster"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbfd047eccd5f2f25, ext:5320466548, loc:(*time.Location)(0x6e0d080)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbfd047eccd5f2f25, ext:5320466548, loc:(*time.Location)(0x6e0d080)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Sep 15 11:40:41 my-cluster kubelet[3257]: E0915 11:40:41.171693 3257 event.go:256] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"my-cluster.1634f22b41c5846b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"my-cluster", UID:"my-cluster", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node my-cluster status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"my-cluster"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbfd047eccd5f466b, ext:5320472505, loc:(*time.Location)(0x6e0d080)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbfd047eccd5f466b, ext:5320472505, loc:(*time.Location)(0x6e0d080)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Sep 15 11:40:41 my-cluster kubelet[3257]: E0915 11:40:41.224530 3257 event.go:256] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"my-cluster.1634f22b4374fac4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"my-cluster", UID:"my-cluster", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"my-cluster"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbfd047eccf0ebcc4, ext:5348748820, loc:(*time.Location)(0x6e0d080)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbfd047eccf0ebcc4, ext:5348748820, loc:(*time.Location)(0x6e0d080)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Sep 15 11:40:41 my-cluster kubelet[3257]: E0915 11:40:41.280080 3257 event.go:256] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"my-cluster.1634f22b41c56d25", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"my-cluster", UID:"my-cluster", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node my-cluster status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"my-cluster"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbfd047eccd5f2f25, ext:5320466548, loc:(*time.Location)(0x6e0d080)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbfd047eccf5ecc60, ext:5353995699, loc:(*time.Location)(0x6e0d080)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Sep 15 11:40:41 my-cluster kubelet[3257]: E0915 11:40:41.334826 3257 event.go:256] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"my-cluster.1634f22b41c5846b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"my-cluster", UID:"my-cluster", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node my-cluster status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"my-cluster"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbfd047eccd5f466b, ext:5320472505, loc:(*time.Location)(0x6e0d080)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbfd047eccf5ed712, ext:5353998436, loc:(*time.Location)(0x6e0d080)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Sep 15 11:40:41 my-cluster kubelet[3257]: E0915 11:40:41.390490 3257 event.go:256] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"my-cluster.1634f22b41c58cd9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"my-cluster", UID:"my-cluster", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node my-cluster status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"my-cluster"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbfd047eccd5f4ed9, ext:5320474663, loc:(*time.Location)(0x6e0d080)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbfd047eccf5ee07a, ext:5354000839, loc:(*time.Location)(0x6e0d080)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Sep 15 11:40:41 my-cluster kubelet[3257]: E0915 11:40:41.466623 3257 event.go:256] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"my-cluster.1634f22b41c56d25", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"my-cluster", UID:"my-cluster", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node my-cluster status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"my-cluster"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbfd047eccd5f2f25, ext:5320466548, loc:(*time.Location)(0x6e0d080)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbfd047ecdb9d04c2, ext:5559399961, loc:(*time.Location)(0x6e0d080)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Sep 15 11:40:41 my-cluster kubelet[3257]: E0915 11:40:41.867293 3257 event.go:256] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"my-cluster.1634f22b41c5846b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"my-cluster", UID:"my-cluster", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node my-cluster status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"my-cluster"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbfd047eccd5f466b, ext:5320472505, loc:(*time.Location)(0x6e0d080)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbfd047ecdb9d1679, ext:5559404496, loc:(*time.Location)(0x6e0d080)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Sep 15 11:40:49 my-cluster kubelet[3257]: I0915 11:40:49.724156 3257 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-ngjmz" (UniqueName: "kubernetes.io/secret/f87c81f2-8901-40fa-90de-d71fbee02b6c-storage-provisioner-token-ngjmz") pod "storage-provisioner" (UID: "f87c81f2-8901-40fa-90de-d71fbee02b6c")
Sep 15 11:40:49 my-cluster kubelet[3257]: I0915 11:40:49.724184 3257 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/f87c81f2-8901-40fa-90de-d71fbee02b6c-tmp") pod "storage-provisioner" (UID: "f87c81f2-8901-40fa-90de-d71fbee02b6c")
Sep 15 11:40:49 my-cluster kubelet[3257]: I0915 11:40:49.824462 3257 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/f5c34f1d-90d6-4891-b27a-423cb906ca55-lib-modules") pod "kube-proxy-jbmx2" (UID: "f5c34f1d-90d6-4891-b27a-423cb906ca55")
Sep 15 11:40:49 my-cluster kubelet[3257]: I0915 11:40:49.824632 3257 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/f5c34f1d-90d6-4891-b27a-423cb906ca55-xtables-lock") pod "kube-proxy-jbmx2" (UID: "f5c34f1d-90d6-4891-b27a-423cb906ca55")
Sep 15 11:40:49 my-cluster kubelet[3257]: I0915 11:40:49.824715 3257 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/f5c34f1d-90d6-4891-b27a-423cb906ca55-kube-proxy") pod "kube-proxy-jbmx2" (UID: "f5c34f1d-90d6-4891-b27a-423cb906ca55")
Sep 15 11:40:49 my-cluster kubelet[3257]: I0915 11:40:49.824779 3257 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-kzds8" (UniqueName: "kubernetes.io/secret/f5c34f1d-90d6-4891-b27a-423cb906ca55-kube-proxy-token-kzds8") pod "kube-proxy-jbmx2" (UID: "f5c34f1d-90d6-4891-b27a-423cb906ca55")
Sep 15 11:40:49 my-cluster kubelet[3257]: I0915 11:40:49.824842 3257 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/638b50ff-95de-4f10-a60c-7b13bc70ff30-config-volume") pod "coredns-5644d7b6d9-882h8" (UID: "638b50ff-95de-4f10-a60c-7b13bc70ff30")
Sep 15 11:40:49 my-cluster kubelet[3257]: I0915 11:40:49.824973 3257 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-q4rvk" (UniqueName: "kubernetes.io/secret/638b50ff-95de-4f10-a60c-7b13bc70ff30-coredns-token-q4rvk") pod "coredns-5644d7b6d9-882h8" (UID: "638b50ff-95de-4f10-a60c-7b13bc70ff30")
Sep 15 11:40:49 my-cluster kubelet[3257]: I0915 11:40:49.967809 3257 transport.go:132] certificate rotation detected, shutting down client connections to start using new credentials
Sep 15 11:40:49 my-cluster kubelet[3257]: W0915 11:40:49.968365 3257 reflector.go:299] object-"kube-system"/"storage-provisioner-token-ngjmz": watch of *v1.Secret ended with: very short watch: object-"kube-system"/"storage-provisioner-token-ngjmz": Unexpected watch close - watch lasted less than a second and no items received
Sep 15 11:40:49 my-cluster kubelet[3257]: W0915 11:40:49.968445 3257 reflector.go:299] object-"kube-system"/"kube-proxy": watch of *v1.ConfigMap ended with: very short watch: object-"kube-system"/"kube-proxy": Unexpected watch close - watch lasted less than a second and no items received
Sep 15 11:40:49 my-cluster kubelet[3257]: W0915 11:40:49.968810 3257 reflector.go:299] object-"kube-system"/"coredns-token-q4rvk": watch of *v1.Secret ended with: very short watch: object-"kube-system"/"coredns-token-q4rvk": Unexpected watch close - watch lasted less than a second and no items received
Sep 15 11:40:49 my-cluster kubelet[3257]: W0915 11:40:49.969321 3257 reflector.go:299] object-"kube-system"/"coredns": watch of *v1.ConfigMap ended with: very short watch: object-"kube-system"/"coredns": Unexpected watch close - watch lasted less than a second and no items received
Sep 15 11:40:49 my-cluster kubelet[3257]: W0915 11:40:49.969388 3257 reflector.go:299] object-"kube-system"/"kube-proxy-token-kzds8": watch of *v1.Secret ended with: very short watch: object-"kube-system"/"kube-proxy-token-kzds8": Unexpected watch close - watch lasted less than a second and no items received
Sep 15 11:40:50 my-cluster kubelet[3257]: W0915 11:40:50.281661 3257 pod_container_deletor.go:75] Container "4d43bf348abfe8b2b77f1ce0f427eafedf9def82e40e90b74dc728a24069765c" not found in pod's containers
Sep 15 11:40:50 my-cluster kubelet[3257]: W0915 11:40:50.400972 3257 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-5644d7b6d9-882h8 through plugin: invalid network status for
Sep 15 11:40:50 my-cluster kubelet[3257]: W0915 11:40:50.449972 3257 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-5644d7b6d9-882h8 through plugin: invalid network status for
Sep 15 11:40:50 my-cluster kubelet[3257]: W0915 11:40:50.543022 3257 pod_container_deletor.go:75] Container "df6acc1c47ed6081ae2149ca59536424ebd4d45da0b75fb663fd45e875601525" not found in pod's containers
Sep 15 11:40:51 my-cluster kubelet[3257]: W0915 11:40:51.549247 3257 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-5644d7b6d9-882h8 through plugin: invalid network status for

==> storage-provisioner [5d70a2da1b6e] <==
F0915 11:41:20.443776 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout

==> storage-provisioner [70b7e36e263c] <==
I0915 11:41:20.865601 1 leaderelection.go:242] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0915 11:41:20.872810 1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0915 11:41:20.873068 1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"58a15527-487c-4d8d-a4cb-f7401997a9da", APIVersion:"v1", ResourceVersion:"382", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' my-cluster_6e859847-140d-4bdc-9a92-1f08ae2db160 became leader
I0915 11:41:20.873662 1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_my-cluster_6e859847-140d-4bdc-9a92-1f08ae2db160!
I0915 11:41:20.974383 1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_my-cluster_6e859847-140d-4bdc-9a92-1f08ae2db160!

@woodcockjosh
Contributor Author

I have run the command again and it appears to be working now. I'm not sure whether this is intermittent or whether it was an issue with a previous version.
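
For anyone who hits the original symptom, a quick way to confirm what the VM actually got is to ask VirtualBox directly, and to recreate the profile if the value is stale. This is a minimal sketch using the profile name from this report; minikube caches per-profile settings, so (as far as I can tell) a changed --memory is ignored when the profile already exists:

# Show the memory actually allocated to the VirtualBox VM
VBoxManage showvminfo my-cluster --machinereadable | grep -i memory

# Recreate the profile so a new --memory value takes effect
minikube delete -p my-cluster
minikube start -p my-cluster --memory=8192 --driver=virtualbox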

@tstromberg
Contributor

I noticed your output mentions v1.12.3; I believe this bug was fixed in v1.13: #9033

Have you seen this with the latest release?
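
If it helps, here's a quick sketch for checking the installed version and upgrading (assuming a Homebrew install on macOS, which the Darwin output above suggests; adjust for your install method):

# Print the currently installed minikube version
minikube version

# Upgrade to the latest release (Homebrew install assumed)
brew upgrade minikube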

@tstromberg tstromberg added triage/needs-information Indicates an issue needs more information in order to work on it. kind/support Categorizes issue or PR as a support question. labels Sep 15, 2020
@woodcockjosh
Contributor Author

Yeah, I think it got fixed when I upgraded.
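
As a double-check after recreating the cluster on the newer release, the memory visible inside the VM should be close to the requested size (a minimal sketch, reusing the profile name from this report):

# Total memory reported inside the VM; expect roughly 8000 MB for --memory=8g
minikube ssh -p my-cluster -- free -m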
