
VPN causes: waiting for apiserver: timed out waiting for the condition #4302

Closed
nwyatt opened this issue May 21, 2019 · 17 comments
Labels
co/apiserver: Issues relating to apiserver configuration (--extra-config)
ev/apiserver-timeout: timeout talking to the apiserver
kind/support: Categorizes issue or PR as a support question.
lifecycle/frozen: Indicates that an issue or PR should not be auto-closed due to staleness.
long-term-support: Long-term support issues that can't be fixed in code
top-10-issues: Top 10 support issues

Comments

@nwyatt commented May 21, 2019

The exact command to reproduce the issue:

minikube start

The full output of the command that failed:

🤹 Downloading Kubernetes v1.14.1 images in the background ...
🔥 Creating virtualbox VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
📶 "minikube" IP address is 192.168.99.100
🐳 Configuring Docker as the container runtime ...
🐳 Version of container runtime is 18.06.3-ce
⌛ Waiting for image downloads to complete ...
✨ Preparing Kubernetes environment ...
💾 Downloading kubeadm v1.14.1
💾 Downloading kubelet v1.14.1
🚜 Pulling images required by Kubernetes v1.14.1 ...
🚀 Launching Kubernetes v1.14.1 using kubeadm ...
⌛ Waiting for pods: apiserver
💣 Error starting cluster: wait: waiting for component=kube-apiserver: timed out waiting for the condition

😿 Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
👉 https://github.com/kubernetes/minikube/issues/new
❌ Problems detected in "kube-addon-manager":
error: no objects passed to apply
error: no objects passed to apply
error: no objects passed to apply

The output of the minikube logs command:

==> coredns <==
.:53
2019-05-21T01:00:44.788Z [INFO] CoreDNS-1.3.1
2019-05-21T01:00:44.788Z [INFO] linux/amd64, go1.11.4, 6b56a9c
CoreDNS-1.3.1
linux/amd64, go1.11.4, 6b56a9c
2019-05-21T01:00:44.789Z [INFO] plugin/reload: Running configuration MD5 = 599b9eb76b8c147408aed6a0bbe0f669

==> dmesg <==
[ +5.000714] hpet1: lost 318 rtc interrupts
[ +5.001340] hpet1: lost 318 rtc interrupts
[ +5.000573] hpet1: lost 318 rtc interrupts
[May21 01:10] hpet1: lost 318 rtc interrupts
[ +5.002394] hpet1: lost 319 rtc interrupts
[ +4.999774] hpet1: lost 319 rtc interrupts
[ +5.001611] hpet1: lost 318 rtc interrupts
[ +5.003641] hpet1: lost 318 rtc interrupts
[ +5.002075] hpet1: lost 319 rtc interrupts
[ +5.003913] hpet1: lost 318 rtc interrupts
[ +5.004012] hpet1: lost 318 rtc interrupts
[ +5.003597] hpet1: lost 319 rtc interrupts
[ +5.001587] hpet1: lost 318 rtc interrupts
[ +5.003505] hpet1: lost 319 rtc interrupts
[ +5.003605] hpet1: lost 318 rtc interrupts
[May21 01:11] hpet1: lost 319 rtc interrupts
[ +5.000461] hpet1: lost 318 rtc interrupts
[ +5.001194] hpet1: lost 318 rtc interrupts
[ +5.000937] hpet1: lost 318 rtc interrupts
[ +5.000990] hpet1: lost 318 rtc interrupts
[ +5.001918] hpet1: lost 318 rtc interrupts
[ +5.000062] hpet1: lost 318 rtc interrupts
[ +5.001929] hpet1: lost 318 rtc interrupts
[ +5.000839] hpet1: lost 318 rtc interrupts
[ +5.001052] hpet1: lost 318 rtc interrupts
[ +5.003339] hpet1: lost 318 rtc interrupts
[ +5.002997] hpet1: lost 320 rtc interrupts
[May21 01:12] hpet1: lost 318 rtc interrupts
[ +5.001611] hpet1: lost 318 rtc interrupts
[ +5.002261] hpet1: lost 318 rtc interrupts
[ +5.001908] hpet1: lost 318 rtc interrupts
[ +5.000722] hpet1: lost 318 rtc interrupts
[ +5.003967] hpet1: lost 318 rtc interrupts
[ +5.004101] hpet1: lost 319 rtc interrupts
[ +5.001064] hpet1: lost 319 rtc interrupts
[ +5.001861] hpet1: lost 319 rtc interrupts
[ +4.999888] hpet1: lost 319 rtc interrupts
[ +5.003522] hpet1: lost 318 rtc interrupts
[ +5.004202] hpet1: lost 318 rtc interrupts
[May21 01:13] hpet1: lost 319 rtc interrupts
[ +5.001042] hpet1: lost 318 rtc interrupts
[ +5.005190] hpet1: lost 320 rtc interrupts
[ +5.001376] hpet1: lost 318 rtc interrupts
[ +5.001892] hpet1: lost 318 rtc interrupts
[ +5.001782] hpet1: lost 318 rtc interrupts
[ +5.001287] hpet1: lost 318 rtc interrupts
[ +5.001858] hpet1: lost 318 rtc interrupts
[ +5.001843] hpet1: lost 318 rtc interrupts
[ +5.001376] hpet1: lost 318 rtc interrupts
[ +5.001668] hpet1: lost 319 rtc interrupts

==> kernel <==
01:13:55 up 16 min, 0 users, load average: 0.46, 0.28, 0.27
Linux minikube 4.15.0 #1 SMP Thu Apr 25 20:51:48 UTC 2019 x86_64 GNU/Linux

==> kube-addon-manager <==
error: no objects passed to apply
INFO: == Kubernetes addon reconcile completed at 2019-05-21T01:06:12+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-05-21T01:07:10+00:00 ==
error: no objects passed to apply
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-05-21T01:07:12+00:00 ==
INFO: Leader is minikube
error: no objects passed to apply
error: no objects passed to apply
error: no objects passed to apply
error: no objects passed to apply
INFO: == Kubernetes addon ensure completed at 2019-05-21T01:08:10+00:00 ==
error: no objects passed to apply
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-05-21T01:08:11+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-05-21T01:09:11+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-05-21T01:09:12+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-05-21T01:10:10+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-05-21T01:10:12+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-05-21T01:11:10+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-05-21T01:11:12+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-05-21T01:12:10+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-05-21T01:12:12+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-05-21T01:13:10+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-05-21T01:13:11+00:00 ==

==> kube-apiserver <==
I0521 01:00:01.934081 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0521 01:00:01.972553 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0521 01:00:02.011988 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0521 01:00:02.053148 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0521 01:00:02.091327 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0521 01:00:02.132058 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0521 01:00:02.173028 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0521 01:00:02.213673 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0521 01:00:02.253271 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0521 01:00:02.292192 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0521 01:00:02.332378 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0521 01:00:02.372968 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0521 01:00:02.413483 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0521 01:00:02.453303 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0521 01:00:02.491931 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0521 01:00:02.532516 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0521 01:00:02.574068 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0521 01:00:02.615355 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0521 01:00:02.651882 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0521 01:00:02.692424 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0521 01:00:02.732514 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0521 01:00:02.772421 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0521 01:00:02.813252 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0521 01:00:02.851364 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0521 01:00:02.893279 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0521 01:00:02.933081 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0521 01:00:02.971367 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0521 01:00:02.974040 1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0521 01:00:03.015618 1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0521 01:00:03.051902 1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0521 01:00:03.091517 1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0521 01:00:03.131413 1 controller.go:606] quota admission added evaluator for: endpoints
I0521 01:00:03.132330 1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0521 01:00:03.173998 1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0521 01:00:03.215681 1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0521 01:00:03.249995 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0521 01:00:03.253558 1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0521 01:00:03.296259 1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0521 01:00:03.333993 1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0521 01:00:03.378831 1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0521 01:00:03.433266 1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0521 01:00:03.455323 1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0521 01:00:03.496190 1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
W0521 01:00:03.649536 1 lease.go:222] Resetting endpoints for master service "kubernetes" to [192.168.99.100]
I0521 01:00:04.359725 1 controller.go:606] quota admission added evaluator for: serviceaccounts
I0521 01:00:04.913323 1 controller.go:606] quota admission added evaluator for: deployments.apps
I0521 01:00:05.189354 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I0521 01:00:11.306246 1 controller.go:606] quota admission added evaluator for: replicasets.apps
I0521 01:00:11.337800 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
E0521 01:12:27.783315 1 watcher.go:208] watch chan error: etcdserver: mvcc: required revision has been compacted

==> kube-proxy <==
W0521 01:00:12.973115 1 server_others.go:267] Flag proxy-mode="" unknown, assuming iptables proxy
I0521 01:00:13.106524 1 server_others.go:147] Using iptables Proxier.
W0521 01:00:13.114072 1 proxier.go:319] clusterCIDR not specified, unable to distinguish between internal and external traffic
I0521 01:00:13.120602 1 server.go:555] Version: v1.14.1
I0521 01:00:13.178350 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I0521 01:00:13.178427 1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0521 01:00:13.178971 1 conntrack.go:83] Setting conntrack hashsize to 32768
I0521 01:00:13.183047 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0521 01:00:13.183101 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0521 01:00:13.183183 1 config.go:202] Starting service config controller
I0521 01:00:13.183210 1 controller_utils.go:1027] Waiting for caches to sync for service config controller
I0521 01:00:13.183221 1 config.go:102] Starting endpoints config controller
I0521 01:00:13.183227 1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
I0521 01:00:13.286639 1 controller_utils.go:1034] Caches are synced for endpoints config controller
I0521 01:00:13.286768 1 controller_utils.go:1034] Caches are synced for service config controller

==> kube-scheduler <==
I0521 00:59:57.399041 1 serving.go:319] Generated self-signed cert in-memory
W0521 00:59:57.691838 1 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0521 00:59:57.691863 1 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0521 00:59:57.691873 1 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I0521 00:59:57.694717 1 server.go:142] Version: v1.14.1
I0521 00:59:57.694764 1 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
W0521 00:59:57.696397 1 authorization.go:47] Authorization is disabled
W0521 00:59:57.696422 1 authentication.go:55] Authentication is disabled
I0521 00:59:57.696434 1 deprecated_insecure_serving.go:49] Serving healthz insecurely on [::]:10251
I0521 00:59:57.697263 1 secure_serving.go:116] Serving securely on 127.0.0.1:10259
E0521 01:00:00.194839 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0521 01:00:00.205644 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0521 01:00:00.210168 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0521 01:00:00.215920 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0521 01:00:00.216970 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0521 01:00:00.217143 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0521 01:00:00.217356 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0521 01:00:00.218615 1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0521 01:00:00.219952 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0521 01:00:00.220817 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0521 01:00:01.197279 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0521 01:00:01.207802 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0521 01:00:01.212044 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0521 01:00:01.217864 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0521 01:00:01.218677 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0521 01:00:01.220656 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0521 01:00:01.220826 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0521 01:00:01.221785 1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0521 01:00:01.223468 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0521 01:00:01.224690 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
I0521 01:00:03.100821 1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
I0521 01:00:03.203038 1 controller_utils.go:1034] Caches are synced for scheduler controller
I0521 01:00:03.203245 1 leaderelection.go:217] attempting to acquire leader lease kube-system/kube-scheduler...
I0521 01:00:03.209227 1 leaderelection.go:227] successfully acquired lease kube-system/kube-scheduler

==> kubelet <==
-- Logs begin at Tue 2019-05-21 00:58:03 UTC, end at Tue 2019-05-21 01:13:55 UTC. --
May 21 00:59:58 minikube kubelet[3219]: I0521 00:59:58.173979 3219 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach
May 21 00:59:58 minikube kubelet[3219]: I0521 00:59:58.174274 3219 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach
May 21 00:59:58 minikube kubelet[3219]: I0521 00:59:58.174569 3219 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach
May 21 00:59:58 minikube kubelet[3219]: E0521 00:59:58.271869 3219 kubelet.go:2244] node "minikube" not found
May 21 00:59:58 minikube kubelet[3219]: E0521 00:59:58.373672 3219 kubelet.go:2244] node "minikube" not found
May 21 00:59:58 minikube kubelet[3219]: E0521 00:59:58.474065 3219 kubelet.go:2244] node "minikube" not found
May 21 00:59:58 minikube kubelet[3219]: E0521 00:59:58.574671 3219 kubelet.go:2244] node "minikube" not found
May 21 00:59:58 minikube kubelet[3219]: E0521 00:59:58.674900 3219 kubelet.go:2244] node "minikube" not found
May 21 00:59:58 minikube kubelet[3219]: E0521 00:59:58.775786 3219 kubelet.go:2244] node "minikube" not found
May 21 00:59:58 minikube kubelet[3219]: E0521 00:59:58.876074 3219 kubelet.go:2244] node "minikube" not found
May 21 00:59:58 minikube kubelet[3219]: E0521 00:59:58.976206 3219 kubelet.go:2244] node "minikube" not found
May 21 00:59:59 minikube kubelet[3219]: E0521 00:59:59.077716 3219 kubelet.go:2244] node "minikube" not found
May 21 00:59:59 minikube kubelet[3219]: I0521 00:59:59.176564 3219 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach
May 21 00:59:59 minikube kubelet[3219]: E0521 00:59:59.178368 3219 kubelet.go:2244] node "minikube" not found
May 21 00:59:59 minikube kubelet[3219]: E0521 00:59:59.278575 3219 kubelet.go:2244] node "minikube" not found
May 21 00:59:59 minikube kubelet[3219]: E0521 00:59:59.379668 3219 kubelet.go:2244] node "minikube" not found
May 21 00:59:59 minikube kubelet[3219]: E0521 00:59:59.479896 3219 kubelet.go:2244] node "minikube" not found
May 21 00:59:59 minikube kubelet[3219]: E0521 00:59:59.580916 3219 kubelet.go:2244] node "minikube" not found
May 21 00:59:59 minikube kubelet[3219]: E0521 00:59:59.682320 3219 kubelet.go:2244] node "minikube" not found
May 21 00:59:59 minikube kubelet[3219]: E0521 00:59:59.782839 3219 kubelet.go:2244] node "minikube" not found
May 21 00:59:59 minikube kubelet[3219]: E0521 00:59:59.883482 3219 kubelet.go:2244] node "minikube" not found
May 21 00:59:59 minikube kubelet[3219]: E0521 00:59:59.983911 3219 kubelet.go:2244] node "minikube" not found
May 21 01:00:00 minikube kubelet[3219]: E0521 01:00:00.084967 3219 kubelet.go:2244] node "minikube" not found
May 21 01:00:00 minikube kubelet[3219]: E0521 01:00:00.186713 3219 kubelet.go:2244] node "minikube" not found
May 21 01:00:00 minikube kubelet[3219]: E0521 01:00:00.242761 3219 controller.go:194] failed to get node "minikube" when trying to set owner ref to the node lease: nodes "minikube" not found
May 21 01:00:00 minikube kubelet[3219]: E0521 01:00:00.287339 3219 kubelet.go:2244] node "minikube" not found
May 21 01:00:00 minikube kubelet[3219]: I0521 01:00:00.287400 3219 reconciler.go:154] Reconciler: start to sync state
May 21 01:00:00 minikube kubelet[3219]: I0521 01:00:00.301917 3219 kubelet_node_status.go:75] Successfully registered node minikube
May 21 01:00:00 minikube kubelet[3219]: E0521 01:00:00.327055 3219 controller.go:115] failed to ensure node lease exists, will retry in 1.6s, error: namespaces "kube-node-lease" not found
May 21 01:00:03 minikube kubelet[3219]: E0521 01:00:03.057721 3219 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15a08ce6d61f4257", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf30f14287ec5e57, ext:152723365, loc:(*time.Location)(0x800e8e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf30f14287ec5e57, ext:152723365, loc:(*time.Location)(0x800e8e0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
May 21 01:00:03 minikube kubelet[3219]: E0521 01:00:03.113062 3219 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15a08ce6db396a74", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node minikube status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf30f1428d068674, ext:238323679, loc:(*time.Location)(0x800e8e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf30f1428d068674, ext:238323679, loc:(*time.Location)(0x800e8e0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
May 21 01:00:03 minikube kubelet[3219]: E0521 01:00:03.171969 3219 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15a08ce6db38eed1", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf30f1428d060ad1, ext:238292012, loc:(*time.Location)(0x800e8e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf30f1428d060ad1, ext:238292012, loc:(*time.Location)(0x800e8e0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
May 21 01:00:03 minikube kubelet[3219]: E0521 01:00:03.230649 3219 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15a08ce6db397c0f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node minikube status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf30f1428d06980f, ext:238328168, loc:(*time.Location)(0x800e8e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf30f1428d06980f, ext:238328168, loc:(*time.Location)(0x800e8e0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
May 21 01:00:03 minikube kubelet[3219]: E0521 01:00:03.289852 3219 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15a08ce6db38eed1", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf30f1428d060ad1, ext:238292012, loc:(*time.Location)(0x800e8e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf30f1428e269453, ext:257201569, loc:(*time.Location)(0x800e8e0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
May 21 01:00:03 minikube kubelet[3219]: E0521 01:00:03.346943 3219 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15a08ce6db396a74", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node minikube status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf30f1428d068674, ext:238323679, loc:(*time.Location)(0x800e8e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf30f1428e26a274, ext:257205195, loc:(*time.Location)(0x800e8e0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
May 21 01:00:03 minikube kubelet[3219]: E0521 01:00:03.412734 3219 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15a08ce6db397c0f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node minikube status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf30f1428d06980f, ext:238328168, loc:(*time.Location)(0x800e8e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf30f1428e26ae16, ext:257208165, loc:(*time.Location)(0x800e8e0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
May 21 01:00:03 minikube kubelet[3219]: E0521 01:00:03.473622 3219 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15a08ce6dc9031b7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf30f1428e5d4db7, ext:260787980, loc:(*time.Location)(0x800e8e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf30f1428e5d4db7, ext:260787980, loc:(*time.Location)(0x800e8e0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
May 21 01:00:11 minikube kubelet[3219]: I0521 01:00:11.450791 3219 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/c8dfa847-7b63-11e9-867f-0800279b5afd-kube-proxy") pod "kube-proxy-6j77l" (UID: "c8dfa847-7b63-11e9-867f-0800279b5afd")
May 21 01:00:11 minikube kubelet[3219]: I0521 01:00:11.451376 3219 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/c8dfa847-7b63-11e9-867f-0800279b5afd-xtables-lock") pod "kube-proxy-6j77l" (UID: "c8dfa847-7b63-11e9-867f-0800279b5afd")
May 21 01:00:11 minikube kubelet[3219]: I0521 01:00:11.451455 3219 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-npzwt" (UniqueName: "kubernetes.io/secret/c8dfa847-7b63-11e9-867f-0800279b5afd-kube-proxy-token-npzwt") pod "kube-proxy-6j77l" (UID: "c8dfa847-7b63-11e9-867f-0800279b5afd")
May 21 01:00:11 minikube kubelet[3219]: I0521 01:00:11.451509 3219 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/c8dfa847-7b63-11e9-867f-0800279b5afd-lib-modules") pod "kube-proxy-6j77l" (UID: "c8dfa847-7b63-11e9-867f-0800279b5afd")
May 21 01:00:11 minikube kubelet[3219]: I0521 01:00:11.552803 3219 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c8dc2d71-7b63-11e9-867f-0800279b5afd-config-volume") pod "coredns-fb8b8dccf-6dflw" (UID: "c8dc2d71-7b63-11e9-867f-0800279b5afd")
May 21 01:00:11 minikube kubelet[3219]: I0521 01:00:11.553724 3219 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-twpd7" (UniqueName: "kubernetes.io/secret/c8dc2d71-7b63-11e9-867f-0800279b5afd-coredns-token-twpd7") pod "coredns-fb8b8dccf-6dflw" (UID: "c8dc2d71-7b63-11e9-867f-0800279b5afd")
May 21 01:00:11 minikube kubelet[3219]: I0521 01:00:11.553961 3219 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c8d985e8-7b63-11e9-867f-0800279b5afd-config-volume") pod "coredns-fb8b8dccf-n5s8g" (UID: "c8d985e8-7b63-11e9-867f-0800279b5afd")
May 21 01:00:11 minikube kubelet[3219]: I0521 01:00:11.554050 3219 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-twpd7" (UniqueName: "kubernetes.io/secret/c8d985e8-7b63-11e9-867f-0800279b5afd-coredns-token-twpd7") pod "coredns-fb8b8dccf-n5s8g" (UID: "c8d985e8-7b63-11e9-867f-0800279b5afd")
May 21 01:00:12 minikube kubelet[3219]: I0521 01:00:12.056906 3219 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-nxpcd" (UniqueName: "kubernetes.io/secret/c93abb08-7b63-11e9-867f-0800279b5afd-storage-provisioner-token-nxpcd") pod "storage-provisioner" (UID: "c93abb08-7b63-11e9-867f-0800279b5afd")
May 21 01:00:12 minikube kubelet[3219]: I0521 01:00:12.056950 3219 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/c93abb08-7b63-11e9-867f-0800279b5afd-tmp") pod "storage-provisioner" (UID: "c93abb08-7b63-11e9-867f-0800279b5afd")
May 21 01:00:12 minikube kubelet[3219]: W0521 01:00:12.489905 3219 pod_container_deletor.go:75] Container "39b0d11437407df2e84d8dde7f186925c1dfb0a0241dd0c9febf58d8aa558b81" not found in pod's containers
May 21 01:00:12 minikube kubelet[3219]: W0521 01:00:12.529307 3219 container.go:409] Failed to create summary reader for "/system.slice/run-rd90405dcc3304eef8455488f89f786ef.scope": none of the resources are being tracked.
May 21 01:00:13 minikube kubelet[3219]: W0521 01:00:13.108753 3219 pod_container_deletor.go:75] Container "d275bdcacb5ea334f17970fb1a7071b14a0bdceb03bdefca48c8df3d0d0306ba" not found in pod's containers

==> storage-provisioner <==

The operating system version:

MacOS 10.13.6

Output of minikube status

host: Running
kubelet: Running
apiserver: Stopped
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.100
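
With the apiserver showing Stopped, one way to dig further (a sketch assuming the VirtualBox driver and Docker runtime shown above; the container-id placeholder below is hypothetical, it is whatever the ps command prints) is to inspect the apiserver container from inside the VM:

# open a shell inside the minikube VM
minikube ssh
# find the kube-apiserver container, including exited ones, then read why it stopped
docker ps -a | grep kube-apiserver
docker logs <apiserver-container-id>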

@tstromberg (Contributor)

Hey @nwyatt!

First, my apologies for the poor user experience here. Things look healthy in the logs, so it seems like a bug on our part. We just changed the logic for apiserver checking in minikube v1.1.0, do you mind upgrading to v1.1.0 to check if it fixes your particular case?

Many thanks for the bug report!
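
For reference, a minimal upgrade sketch for macOS, assuming the minikube binary was installed directly into /usr/local/bin (adjust if it came from Homebrew or another package manager):

# download the v1.1.0 darwin binary from the minikube release bucket
curl -Lo minikube https://storage.googleapis.com/minikube/releases/v1.1.0/minikube-darwin-amd64
chmod +x minikube
sudo mv minikube /usr/local/bin/minikube
# confirm the upgrade took
minikube version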

tstromberg added the co/apiserver and ev/apiserver-timeout labels on May 22, 2019
@tstromberg (Contributor)

I believe this issue was resolved in the v1.1.0 release. Please try upgrading to the latest release of minikube and run minikube delete to remove the previous cluster state.

If the same issue occurs, please re-open this bug. Thank you for opening this bug report, and for your patience!
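
Roughly, the suggested reset looks like this (a sketch of the cleanup described above, not an official procedure):

# remove the previous cluster and VM state, then create a fresh cluster
minikube delete
minikube start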

@nwyatt (Author) commented May 23, 2019

Retried with 1.1.0 and it seems to hang on "Launching Kubernetes". Will try a couple things and reopen if I can't get it to work.
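
Since the title blames a VPN, one quick check while start hangs (an assumption, not something confirmed in this thread) is whether the VPN has captured the route to the VM's host-only network; 8443 below is minikube's default apiserver port:

# can the host still reach the VM and its apiserver while the VPN is up?
ping -c 3 "$(minikube ip)"
curl -k "https://$(minikube ip):8443/healthz"
# if the 192.168.99.x route points at the VPN interface instead of vboxnet0, that is the likely culprit
netstat -rn | grep 192.168.99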

The exact command to reproduce the issue:

minikube start

The full output of the command that failed:

😄 minikube v1.1.0 on darwin (amd64)
💿 Downloading Minikube ISO ...
131.28 MB / 131.28 MB [============================================] 100.00% 0s
🔥 Creating virtualbox VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
🐳 Configuring environment for Kubernetes v1.14.2 on Docker 18.09.6
💾 Downloading kubelet v1.14.2
💾 Downloading kubeadm v1.14.2
🚜 Pulling images ...
🚀 Launching Kubernetes ...

The output of the minikube logs command:

==> coredns <==
.:53
2019-05-23T19:43:19.667Z [INFO] CoreDNS-1.3.1
2019-05-23T19:43:19.667Z [INFO] linux/amd64, go1.11.4, 6b56a9c
CoreDNS-1.3.1
linux/amd64, go1.11.4, 6b56a9c
2019-05-23T19:43:19.667Z [INFO] plugin/reload: Running configuration MD5 = 599b9eb76b8c147408aed6a0bbe0f669

==> dmesg <==
[ +5.004669] hpet1: lost 319 rtc interrupts
[ +5.004093] hpet1: lost 318 rtc interrupts
[ +5.002439] hpet1: lost 318 rtc interrupts
[ +5.003315] hpet1: lost 318 rtc interrupts
[ +5.001468] hpet1: lost 318 rtc interrupts
[ +5.004962] hpet1: lost 319 rtc interrupts
[ +5.004849] hpet1: lost 318 rtc interrupts
[ +5.004227] hpet1: lost 318 rtc interrupts
[ +5.004557] hpet1: lost 319 rtc interrupts
[ +5.001957] hpet1: lost 318 rtc interrupts
[May23 19:53] hpet1: lost 319 rtc interrupts
[ +5.001030] hpet1: lost 318 rtc interrupts
[ +5.002575] hpet1: lost 318 rtc interrupts
[ +5.000426] hpet1: lost 318 rtc interrupts
[ +5.001926] hpet1: lost 318 rtc interrupts
[ +5.001234] hpet1: lost 318 rtc interrupts
[ +5.001860] hpet1: lost 319 rtc interrupts
[ +5.002248] hpet1: lost 318 rtc interrupts
[ +5.001412] hpet1: lost 318 rtc interrupts
[ +5.004479] hpet1: lost 318 rtc interrupts
[ +5.004806] hpet1: lost 318 rtc interrupts
[ +5.004002] hpet1: lost 319 rtc interrupts
[May23 19:54] hpet1: lost 318 rtc interrupts
[ +5.002942] hpet1: lost 318 rtc interrupts
[ +5.004408] hpet1: lost 318 rtc interrupts
[ +5.004011] hpet1: lost 319 rtc interrupts
[ +5.005805] hpet1: lost 319 rtc interrupts
[ +5.004452] hpet1: lost 319 rtc interrupts
[ +5.002757] hpet1: lost 318 rtc interrupts
[ +5.003276] hpet1: lost 318 rtc interrupts
[ +5.002594] hpet1: lost 319 rtc interrupts
[ +5.001535] hpet1: lost 318 rtc interrupts
[ +5.001429] hpet1: lost 318 rtc interrupts
[ +5.000329] hpet1: lost 318 rtc interrupts
[May23 19:55] hpet1: lost 318 rtc interrupts
[ +5.001326] hpet1: lost 318 rtc interrupts
[ +5.001922] hpet1: lost 318 rtc interrupts
[ +5.000344] hpet1: lost 318 rtc interrupts
[ +5.000335] hpet1: lost 318 rtc interrupts
[ +5.002124] hpet1: lost 318 rtc interrupts
[ +5.002012] hpet1: lost 319 rtc interrupts
[ +5.001898] hpet1: lost 318 rtc interrupts
[ +5.002814] hpet1: lost 318 rtc interrupts
[ +5.000976] hpet1: lost 319 rtc interrupts
[ +5.000312] hpet1: lost 318 rtc interrupts
[ +5.000979] hpet1: lost 318 rtc interrupts
[May23 19:56] hpet1: lost 318 rtc interrupts
[ +5.000751] hpet1: lost 318 rtc interrupts
[ +5.001513] hpet1: lost 318 rtc interrupts
[ +5.001573] hpet1: lost 318 rtc interrupts

==> kernel <==
19:56:21 up 15 min, 0 users, load average: 0.11, 0.21, 0.24
Linux minikube 4.15.0 #1 SMP Tue May 21 00:14:40 UTC 2019 x86_64 GNU/Linux

==> kube-addon-manager <==
INFO: == Kubernetes addon reconcile completed at 2019-05-23T19:48:49+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-05-23T19:49:47+00:00 ==
INFO: == Reconciling with deprecated label ==
error: no objects passed to apply
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
error: no objects passed to apply
error: no objects passed to apply
INFO: == Kubernetes addon reconcile completed at 2019-05-23T19:49:49+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-05-23T19:50:47+00:00 ==
error: no objects passed to apply
INFO: == Reconciling with deprecated label ==
error: no objects passed to apply
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-05-23T19:50:48+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-05-23T19:51:48+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-05-23T19:51:49+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-05-23T19:52:47+00:00 ==
INFO: == Reconciling with deprecated label ==
error: no objects passed to apply
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
error: no objects passed to apply
INFO: == Kubernetes addon reconcile completed at 2019-05-23T19:52:49+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-05-23T19:53:47+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-05-23T19:53:49+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-05-23T19:54:47+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-05-23T19:54:49+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-05-23T19:55:47+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-05-23T19:55:48+00:00 ==

==> kube-apiserver <==
I0523 19:42:38.854338 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0523 19:42:38.897277 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0523 19:42:38.934271 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0523 19:42:38.982790 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0523 19:42:39.014240 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0523 19:42:39.056025 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0523 19:42:39.095872 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0523 19:42:39.134782 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0523 19:42:39.174411 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0523 19:42:39.213906 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0523 19:42:39.254154 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0523 19:42:39.294808 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0523 19:42:39.335247 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0523 19:42:39.376174 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0523 19:42:39.413548 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0523 19:42:39.453942 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0523 19:42:39.498596 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0523 19:42:39.534488 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0523 19:42:39.597492 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0523 19:42:39.618738 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0523 19:42:39.655268 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0523 19:42:39.697830 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0523 19:42:39.734633 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0523 19:42:39.775148 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0523 19:42:39.818464 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0523 19:42:39.855126 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0523 19:42:39.858070 1 controller.go:606] quota admission added evaluator for: endpoints
I0523 19:42:39.891157 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0523 19:42:39.894715 1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0523 19:42:39.935074 1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0523 19:42:39.973786 1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0523 19:42:40.018756 1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0523 19:42:40.054666 1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0523 19:42:40.095059 1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0523 19:42:40.137010 1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0523 19:42:40.172156 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0523 19:42:40.174339 1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0523 19:42:40.214789 1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0523 19:42:40.254542 1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0523 19:42:40.294752 1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0523 19:42:40.333745 1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0523 19:42:40.372951 1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0523 19:42:40.420627 1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
W0523 19:42:40.472465 1 lease.go:222] Resetting endpoints for master service "kubernetes" to [192.168.99.103]
I0523 19:42:40.531707 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0523 19:42:41.090075 1 controller.go:606] quota admission added evaluator for: serviceaccounts
I0523 19:42:41.803315 1 controller.go:606] quota admission added evaluator for: deployments.apps
I0523 19:42:42.067999 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I0523 19:42:47.528088 1 controller.go:606] quota admission added evaluator for: replicasets.apps
I0523 19:42:47.782816 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps

==> kube-proxy <==
W0523 19:42:49.677708 1 server_others.go:267] Flag proxy-mode="" unknown, assuming iptables proxy
I0523 19:42:49.693741 1 server_others.go:146] Using iptables Proxier.
W0523 19:42:49.694024 1 proxier.go:319] clusterCIDR not specified, unable to distinguish between internal and external traffic
I0523 19:42:49.694126 1 server.go:562] Version: v1.14.2
I0523 19:42:49.708367 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I0523 19:42:49.708401 1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0523 19:42:49.709073 1 conntrack.go:83] Setting conntrack hashsize to 32768
I0523 19:42:49.713046 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0523 19:42:49.713112 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0523 19:42:49.713223 1 config.go:102] Starting endpoints config controller
I0523 19:42:49.713244 1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
I0523 19:42:49.713262 1 config.go:202] Starting service config controller
I0523 19:42:49.713271 1 controller_utils.go:1027] Waiting for caches to sync for service config controller
I0523 19:42:49.817481 1 controller_utils.go:1034] Caches are synced for service config controller
I0523 19:42:49.817544 1 controller_utils.go:1034] Caches are synced for endpoints config controller

==> kube-scheduler <==
I0523 19:42:32.806091 1 serving.go:319] Generated self-signed cert in-memory
W0523 19:42:33.135429 1 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0523 19:42:33.135456 1 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0523 19:42:33.135466 1 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I0523 19:42:33.139229 1 server.go:142] Version: v1.14.2
I0523 19:42:33.139509 1 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
W0523 19:42:33.140860 1 authorization.go:47] Authorization is disabled
W0523 19:42:33.140876 1 authentication.go:55] Authentication is disabled
I0523 19:42:33.140883 1 deprecated_insecure_serving.go:49] Serving healthz insecurely on [::]:10251
I0523 19:42:33.141237 1 secure_serving.go:116] Serving securely on 127.0.0.1:10259
E0523 19:42:37.122490 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0523 19:42:37.191037 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0523 19:42:37.191308 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0523 19:42:37.191333 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0523 19:42:37.191365 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0523 19:42:37.191391 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0523 19:42:37.191410 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0523 19:42:37.191434 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0523 19:42:37.194113 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0523 19:42:37.194125 1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0523 19:42:38.123508 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0523 19:42:38.194169 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0523 19:42:38.197386 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0523 19:42:38.212750 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0523 19:42:38.219276 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0523 19:42:38.219340 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0523 19:42:38.223981 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0523 19:42:38.225198 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0523 19:42:38.231413 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0523 19:42:38.232227 1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
I0523 19:42:40.047267 1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
I0523 19:42:40.147657 1 controller_utils.go:1034] Caches are synced for scheduler controller
I0523 19:42:40.147835 1 leaderelection.go:217] attempting to acquire leader lease kube-system/kube-scheduler...
I0523 19:42:40.153108 1 leaderelection.go:227] successfully acquired lease kube-system/kube-scheduler

==> kubelet <==
-- Logs begin at Thu 2019-05-23 19:41:03 UTC, end at Thu 2019-05-23 19:56:21 UTC. --
May 23 19:42:35 minikube kubelet[3217]: E0523 19:42:35.534158 3217 kubelet.go:2244] node "minikube" not found
May 23 19:42:35 minikube kubelet[3217]: E0523 19:42:35.634446 3217 kubelet.go:2244] node "minikube" not found
May 23 19:42:35 minikube kubelet[3217]: E0523 19:42:35.734604 3217 kubelet.go:2244] node "minikube" not found
May 23 19:42:35 minikube kubelet[3217]: E0523 19:42:35.834747 3217 kubelet.go:2244] node "minikube" not found
May 23 19:42:35 minikube kubelet[3217]: E0523 19:42:35.934999 3217 kubelet.go:2244] node "minikube" not found
May 23 19:42:36 minikube kubelet[3217]: E0523 19:42:36.036373 3217 kubelet.go:2244] node "minikube" not found
May 23 19:42:36 minikube kubelet[3217]: E0523 19:42:36.136733 3217 kubelet.go:2244] node "minikube" not found
May 23 19:42:36 minikube kubelet[3217]: E0523 19:42:36.237746 3217 kubelet.go:2244] node "minikube" not found
May 23 19:42:36 minikube kubelet[3217]: E0523 19:42:36.337919 3217 kubelet.go:2244] node "minikube" not found
May 23 19:42:36 minikube kubelet[3217]: E0523 19:42:36.438320 3217 kubelet.go:2244] node "minikube" not found
May 23 19:42:36 minikube kubelet[3217]: E0523 19:42:36.538824 3217 kubelet.go:2244] node "minikube" not found
May 23 19:42:36 minikube kubelet[3217]: E0523 19:42:36.639136 3217 kubelet.go:2244] node "minikube" not found
May 23 19:42:36 minikube kubelet[3217]: E0523 19:42:36.739589 3217 kubelet.go:2244] node "minikube" not found
May 23 19:42:36 minikube kubelet[3217]: E0523 19:42:36.841010 3217 kubelet.go:2244] node "minikube" not found
May 23 19:42:36 minikube kubelet[3217]: E0523 19:42:36.941514 3217 kubelet.go:2244] node "minikube" not found
May 23 19:42:37 minikube kubelet[3217]: E0523 19:42:37.044468 3217 kubelet.go:2244] node "minikube" not found
May 23 19:42:37 minikube kubelet[3217]: E0523 19:42:37.145204 3217 kubelet.go:2244] node "minikube" not found
May 23 19:42:37 minikube kubelet[3217]: I0523 19:42:37.241269 3217 reconciler.go:154] Reconciler: start to sync state
May 23 19:42:37 minikube kubelet[3217]: E0523 19:42:37.245353 3217 kubelet.go:2244] node "minikube" not found
May 23 19:42:37 minikube kubelet[3217]: E0523 19:42:37.247041 3217 controller.go:194] failed to get node "minikube" when trying to set owner ref to the node lease: nodes "minikube" not found
May 23 19:42:37 minikube kubelet[3217]: I0523 19:42:37.255900 3217 kubelet_node_status.go:75] Successfully registered node minikube
May 23 19:42:37 minikube kubelet[3217]: E0523 19:42:37.318501 3217 controller.go:115] failed to ensure node lease exists, will retry in 3.2s, error: namespaces "kube-node-lease" not found
May 23 19:42:37 minikube kubelet[3217]: E0523 19:42:37.319005 3217 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15a1675279af18c3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf31dbc97a04e6c3, ext:140522855, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf31dbc97a04e6c3, ext:140522855, loc:(*time.Location)(0x8018900)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
May 23 19:42:37 minikube kubelet[3217]: E0523 19:42:37.373778 3217 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15a167527eb0e0c5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node minikube status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf31dbc9836be4c5, ext:224525658, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf31dbc9836be4c5, ext:224525658, loc:(*time.Location)(0x8018900)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
May 23 19:42:37 minikube kubelet[3217]: E0523 19:42:37.429542 3217 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15a167527eb0bfd2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf31dbc9836bc3d2, ext:224517223, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf31dbc9836bc3d2, ext:224517223, loc:(*time.Location)(0x8018900)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
May 23 19:42:37 minikube kubelet[3217]: E0523 19:42:37.485773 3217 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15a167527eb0d767", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node minikube status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf31dbc9836bdb67, ext:224523261, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf31dbc9836bdb67, ext:224523261, loc:(*time.Location)(0x8018900)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
May 23 19:42:37 minikube kubelet[3217]: E0523 19:42:37.543539 3217 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15a167527fd174cd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf31dbc9848c78cd, ext:243437916, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf31dbc9848c78cd, ext:243437916, loc:(*time.Location)(0x8018900)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
May 23 19:42:37 minikube kubelet[3217]: E0523 19:42:37.604785 3217 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15a167527eb0bfd2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf31dbc9836bc3d2, ext:224517223, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf31dbc984b3dac7, ext:246018914, loc:(*time.Location)(0x8018900)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
May 23 19:42:37 minikube kubelet[3217]: E0523 19:42:37.664671 3217 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15a167527eb0d767", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node minikube status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf31dbc9836bdb67, ext:224523261, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf31dbc984b3fa20, ext:246026939, loc:(*time.Location)(0x8018900)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
May 23 19:42:37 minikube kubelet[3217]: E0523 19:42:37.724852 3217 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15a167527eb0e0c5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node minikube status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf31dbc9836be4c5, ext:224525658, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf31dbc984b40354, ext:246029295, loc:(*time.Location)(0x8018900)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
May 23 19:42:37 minikube kubelet[3217]: E0523 19:42:37.794854 3217 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15a167527eb0e0c5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node minikube status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf31dbc9836be4c5, ext:224525658, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf31dbc990d942dd, ext:449796987, loc:(*time.Location)(0x8018900)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
May 23 19:42:38 minikube kubelet[3217]: E0523 19:42:38.172870 3217 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15a167527eb0bfd2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf31dbc9836bc3d2, ext:224517223, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf31dbc990d923a4, ext:449788987, loc:(*time.Location)(0x8018900)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
May 23 19:42:38 minikube kubelet[3217]: E0523 19:42:38.576469 3217 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15a167527eb0d767", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node minikube status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf31dbc9836bdb67, ext:224523261, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf31dbc990d93885, ext:449794338, loc:(*time.Location)(0x8018900)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
May 23 19:42:39 minikube kubelet[3217]: E0523 19:42:39.001377 3217 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15a167527eb0bfd2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf31dbc9836bc3d2, ext:224517223, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf31dbc990dddda1, ext:450098752, loc:(*time.Location)(0x8018900)}}, Count:4, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
May 23 19:42:39 minikube kubelet[3217]: E0523 19:42:39.375519 3217 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15a167527eb0d767", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node minikube status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf31dbc9836bdb67, ext:224523261, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf31dbc990ddeb9e, ext:450102332, loc:(*time.Location)(0x8018900)}}, Count:4, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
May 23 19:42:39 minikube kubelet[3217]: E0523 19:42:39.775442 3217 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15a167527eb0e0c5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node minikube status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf31dbc9836be4c5, ext:224525658, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf31dbc990ddf5f7, ext:450104980, loc:(*time.Location)(0x8018900)}}, Count:4, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
May 23 19:42:40 minikube kubelet[3217]: E0523 19:42:40.176428 3217 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15a167527eb0bfd2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf31dbc9836bc3d2, ext:224517223, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf31dbc9912a8b9d, ext:455124026, loc:(*time.Location)(0x8018900)}}, Count:5, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
May 23 19:42:47 minikube kubelet[3217]: I0523 19:42:47.712765 3217 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/f120e82c-7d92-11e9-ac90-08002783e279-config-volume") pod "coredns-fb8b8dccf-5l92c" (UID: "f120e82c-7d92-11e9-ac90-08002783e279")
May 23 19:42:47 minikube kubelet[3217]: I0523 19:42:47.712806 3217 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-kslcw" (UniqueName: "kubernetes.io/secret/f120e82c-7d92-11e9-ac90-08002783e279-coredns-token-kslcw") pod "coredns-fb8b8dccf-5l92c" (UID: "f120e82c-7d92-11e9-ac90-08002783e279")
May 23 19:42:47 minikube kubelet[3217]: I0523 19:42:47.712824 3217 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/f122b184-7d92-11e9-ac90-08002783e279-config-volume") pod "coredns-fb8b8dccf-qrndg" (UID: "f122b184-7d92-11e9-ac90-08002783e279")
May 23 19:42:47 minikube kubelet[3217]: I0523 19:42:47.712841 3217 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-kslcw" (UniqueName: "kubernetes.io/secret/f122b184-7d92-11e9-ac90-08002783e279-coredns-token-kslcw") pod "coredns-fb8b8dccf-qrndg" (UID: "f122b184-7d92-11e9-ac90-08002783e279")
May 23 19:42:47 minikube kubelet[3217]: E0523 19:42:47.801914 3217 reflector.go:126] object-"kube-system"/"kube-proxy-token-b5bp2": Failed to list *v1.Secret: secrets "kube-proxy-token-b5bp2" is forbidden: User "system:node:minikube" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "minikube" and this object
May 23 19:42:47 minikube kubelet[3217]: E0523 19:42:47.803741 3217 reflector.go:126] object-"kube-system"/"kube-proxy": Failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node "minikube" and this object
May 23 19:42:47 minikube kubelet[3217]: I0523 19:42:47.913970 3217 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-b5bp2" (UniqueName: "kubernetes.io/secret/f1429d40-7d92-11e9-ac90-08002783e279-kube-proxy-token-b5bp2") pod "kube-proxy-fq7nj" (UID: "f1429d40-7d92-11e9-ac90-08002783e279")
May 23 19:42:47 minikube kubelet[3217]: I0523 19:42:47.914022 3217 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/f1429d40-7d92-11e9-ac90-08002783e279-kube-proxy") pod "kube-proxy-fq7nj" (UID: "f1429d40-7d92-11e9-ac90-08002783e279")
May 23 19:42:47 minikube kubelet[3217]: I0523 19:42:47.914041 3217 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/f1429d40-7d92-11e9-ac90-08002783e279-lib-modules") pod "kube-proxy-fq7nj" (UID: "f1429d40-7d92-11e9-ac90-08002783e279")
May 23 19:42:47 minikube kubelet[3217]: I0523 19:42:47.914059 3217 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/f1429d40-7d92-11e9-ac90-08002783e279-xtables-lock") pod "kube-proxy-fq7nj" (UID: "f1429d40-7d92-11e9-ac90-08002783e279")
May 23 19:42:48 minikube kubelet[3217]: W0523 19:42:48.928472 3217 container.go:409] Failed to create summary reader for "/system.slice/run-r33a4fbba1a7c473392aaa93de7d8dcae.scope": none of the resources are being tracked.
May 23 19:42:49 minikube kubelet[3217]: I0523 19:42:49.333095 3217 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-sntm7" (UniqueName: "kubernetes.io/secret/f2232ae0-7d92-11e9-ac90-08002783e279-storage-provisioner-token-sntm7") pod "storage-provisioner" (UID: "f2232ae0-7d92-11e9-ac90-08002783e279")
May 23 19:42:49 minikube kubelet[3217]: I0523 19:42:49.333235 3217 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/f2232ae0-7d92-11e9-ac90-08002783e279-tmp") pod "storage-provisioner" (UID: "f2232ae0-7d92-11e9-ac90-08002783e279")

==> storage-provisioner <==

@nwyatt
Author

nwyatt commented May 24, 2019

Yes, it looks like I have the same or a similar problem on 1.1.0

😄 minikube v1.1.0 on darwin (amd64)
💡 Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
🏃 Re-using the currently running virtualbox VM for "minikube" ...
⌛ Waiting for SSH access ...
🐳 Configuring environment for Kubernetes v1.14.2 on Docker 18.09.6
🔄 Relaunching Kubernetes v1.14.2 using kubeadm ...

💣 Error restarting cluster: waiting for apiserver: timed out waiting for the condition

😿 Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
👉 https://github.com/kubernetes/minikube/issues/new

@nwyatt
Author

nwyatt commented May 24, 2019

I don't think I have permissions to reopen this issue.

@tstromberg
Contributor

tstromberg commented May 24, 2019 via email

@k8s-ci-robot
Contributor

@tstromberg: Reopened this issue.

In response to this:

/reopen

On Thu, May 23, 2019, 5:31 PM Nat Wyatt notifications@github.com wrote:

I don't think I have permissions to reopen this issue.


You are receiving this because you modified the open/close state.
Reply to this email directly, view it on GitHub
#4302,
or mute the thread
https://github.com/notifications/unsubscribe-auth/AAAYYMD36LNCJTM4T7HAWBDPW4ZPDANCNFSM4HOGSONA
.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot reopened this May 24, 2019
@nwyatt
Author

nwyatt commented May 24, 2019

I made a little progress on this. I was able to get through the quickstart by adding a route manually as described in #3747.

sudo route add 192.168.99.0/24 -iface vboxnet0

vboxnet0 is added by VirtualBox, but for some reason the macOS kernel isn't routing to it?
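As a quick sanity check (an illustrative sketch for macOS, not part of the original report), you can look at whether the host actually routes the host-only network through vboxnet0 before and after adding the route:

$ netstat -rn -f inet | grep 192.168.99
$ route -n get $(minikube ip)

If route -n get reports an interface other than vboxnet0 (for example the VPN's utun device), traffic to the minikube VM is being pulled into the tunnel instead of reaching VirtualBox.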

@nwyatt
Author

nwyatt commented May 24, 2019

...and as mentioned in that issue, the Cisco AnyConnect VPN is implicated. When I run the VPN, everything gets routed through the VPN's gateway, which has no idea about the 192.168.99.0 network. If I try to add the route, I get an error.

$ sudo route add 192.168.99.0/24 -iface vboxnet0
route: writing to routing socket: File exists
add net 192.168.99.0: gateway vboxnet0: File exists

When I shut down the VPN, the route that I manually added is still there, but it doesn't get used (as reported by traceroute). But if I re-add it, everything is good again.

So if this is a minikube issue, it's only about the diagnostics. The cause is the VPN.

  1. Can't run minikube while connected to the VPN
  2. Need to re-add a route to VirtualBox after disconnecting from the VPN (see the sketch below)
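For point 2, a minimal sketch (reusing the vboxnet0 interface and 192.168.99.0/24 network from above): delete the stale route and re-add it after disconnecting from the VPN, which also sidesteps the "File exists" error.

$ sudo route delete 192.168.99.0/24
$ sudo route add 192.168.99.0/24 -iface vboxnet0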

@tstromberg tstromberg changed the title Error starting cluster: wait: waiting for component=kube-apiserver: timed out waiting for the condition 1.1, start: waiting for component=kube-apiserver: timed out waiting for the condition May 24, 2019
@tstromberg tstromberg changed the title 1.1, start: waiting for component=kube-apiserver: timed out waiting for the condition 1.1, start with VPN: waiting for component=kube-apiserver: timed out waiting for the condition May 24, 2019
@tstromberg tstromberg changed the title 1.1, start with VPN: waiting for component=kube-apiserver: timed out waiting for the condition 1.1, start with VPN: Error restarting cluster: waiting for apiserver: timed out waiting for the condition May 24, 2019
@tstromberg
Contributor

tstromberg commented May 24, 2019

Ouch. Thanks for the additional diagnostics here.

Does the behavior change at all if you are starting from a fresh VM, like using minikube delete to first erase the old one? I suspect not, but I'm just curious.

@tstromberg
Contributor

I agree with your assessment that this issue is almost certainly VPN related. This sounds very much like this bug report: https://www.virtualbox.org/ticket/14293

I don't think this is DNS related, but do you mind trying this workaround? I see it mentioned on several forums where people are talking about solving Cisco AnyConnect and VirtualBox networking issues:

VBoxManage modifyvm minikube --natdnshostresolver1 on

Alternatively, it's possible that Cisco AnyConnect with the hyperkit VM driver may behave differently. Thanks for the very detailed bug report. It's definitely helpful.
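A hedged sketch of trying both suggestions, assuming the hyperkit driver is installed (VBoxManage modifyvm generally only takes effect while the VM is powered off, and --vm-driver is the flag name this minikube version uses):

$ minikube stop
$ VBoxManage modifyvm minikube --natdnshostresolver1 on
$ minikube start

or, to test the hyperkit driver instead:

$ minikube delete
$ minikube start --vm-driver=hyperkit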

@tstromberg tstromberg added needs-solution-message Issues where where offering a solution for an error would be helpful priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. labels May 29, 2019
@lxghtless

lxghtless commented Jun 3, 2019

I'm experiencing this issue verbatim, except I don't have to re-add sudo route add 192.168.99.0/24 -iface vboxnet0 when I shut down the VPN, and I thought I didn't have to toggle my WiFi on and off either. Edit: I do have to toggle WiFi off and on after switching off the VPN for minikube to work, though there's no need to restart minikube. I also tried VBoxManage modifyvm minikube --natdnshostresolver1 on as suggested by @tstromberg, but that didn't result in different behavior.

@sharifelgamal sharifelgamal added the help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. label Jul 18, 2019
@tstromberg tstromberg added kind/support Categorizes issue or PR as a support question. and removed help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. needs-solution-message Issues where where offering a solution for an error would be helpful priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. labels Sep 18, 2019
@tstromberg tstromberg changed the title 1.1, start with VPN: Error restarting cluster: waiting for apiserver: timed out waiting for the condition VPN causes: waiting for apiserver: timed out waiting for the condition Sep 20, 2019
@tstromberg
Contributor

minikube v1.4 does a slightly better job of debugging this issue, but in the end it now comes down to making sure the VPN is configured to allow access to the virtual machine that minikube launches.

We now have some documentation around this: https://minikube.sigs.k8s.io/docs/reference/networking/vpn/
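One rough way to tell whether the VPN is blocking the host-to-VM path (an illustrative check, not taken from the docs page above) is to probe the apiserver port on the minikube VM from the host:

$ nc -vz $(minikube ip) 8443

If this times out while the VPN is up but succeeds once the VPN is disconnected (or once LAN access / split tunneling is allowed), the VPN routing is the culprit rather than minikube itself.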

@tstromberg tstromberg added the lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. label Sep 20, 2019
@tstromberg
Contributor

Closing because this should no longer happen in minikube v1.6: at least for the same reasons.

@artem-kosenko

Closing because this should no longer happen in minikube v1.6: at least for the same reasons.

minikube version v1.23.2 -- I still have the same issue under the Cisco AnyConnect VPN.
It looks like the issue has not been resolved yet.

@589290

589290 commented Jan 18, 2022

Still have the same issue today with minikube 1.24 and the Cisco VPN -- clearly not resolved -- even with "Allow LAN traffic" enabled in Cisco's options.

@stevelaclasse

Please take a look at this comment: #1099 (comment)
