hyperv re-use: k8s-app=kube-proxy: timed out waiting for the condition #4078

Closed
guillaumeprevost opened this issue Apr 11, 2019 · 3 comments
Labels: co/hyperv (HyperV related issues), co/kube-proxy (issues relating to kube-proxy in some way), triage/duplicate (indicates an issue is a duplicate of another open issue)

Comments

@guillaumeprevost

Hi, I'm running into an issue while trying to start Kubernetes on Windows 10 with Hyper-V.

This is the first time I'm using Kubernetes, so it's possible I'm making a novice mistake that would be obvious to an expert, but since the console output suggested creating an issue, here I am!

I successfully installed Minikube and am trying to start it with the command: minikube start --vm-driver=hyperv

Here's the command line + console output:

> minikube start --vm-driver=hyperv

o   minikube v1.0.0 on windows (amd64)
$   Downloading Kubernetes v1.14.0 images in the background ...
i   Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
:   Re-using the currently running hyperv VM for "minikube" ...
:   Waiting for SSH access ...
-   "minikube" IP address is 192.168.43.99
-   Configuring Docker as the container runtime ...
-   Version of container runtime is 18.06.2-ce
:   Waiting for image downloads to complete ...
-   Preparing Kubernetes environment ...
-   Pulling images required by Kubernetes v1.14.0 ...
:   Relaunching Kubernetes v1.14.0 using kubeadm ...
:   Waiting for pods: apiserver proxy
!   Error restarting cluster: wait: waiting for k8s-app=kube-proxy: timed out waiting for the condition

*   Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
-   https://github.com/kubernetes/minikube/issues/new
X   Problems detected in "kube-addon-manager":
    - error: no objects passed to apply
    - error: no objects passed to apply
    - error: no objects passed to apply

Below is the output of the "minikube logs" command:

==> dmesg <==
[  +0.001882] rcu_sched kthread starved for 159035 jiffies! g257965 c257964 f0x2 RCU_GP_WAIT_FQS(3) ->state=0x0 ->cpu=1
[  +0.000000] Call Trace:
[  +0.000000]  ? __schedule+0x245/0x730
[  +0.000000]  ? __switch_to_asm+0x30/0x60
[  +0.000000]  schedule+0x23/0x80
[  +0.000000]  schedule_timeout+0x15c/0x350
[  +0.000000]  ? __next_timer_interrupt+0xc0/0xc0
[  +0.000000]  rcu_gp_kthread+0x5f0/0xe30
[  +0.000000]  ? __schedule+0x24d/0x730
[  +0.000000]  ? force_qs_rnp+0x180/0x180
[  +0.000000]  kthread+0x10e/0x130
[  +0.000000]  ? kthread_create_worker_on_cpu+0x40/0x40
[  +0.000000]  ret_from_fork+0x35/0x40
[Apr10 11:18] systemd[1]: systemd-logind.service: Watchdog timeout (limit 3min)!
[  +0.000225] systemd[1]: systemd-udevd.service: Watchdog timeout (limit 3min)!
[  +0.001928] kauditd_printk_skb: 47 callbacks suppressed
[  +0.016151] systemd[1]: systemd-journald.service: Main process exited, code=dumped, status=6/ABRT
[  +0.000271] systemd[1]: systemd-journald.service: Failed with result 'watchdog'.
[  +2.316301] systemd-journald[61572]: File /run/log/journal/df2dd68d7c414abf8720249fec4ae410/system.journal corrupted or uncleanly shut down, renaming and replacing.
[Apr10 17:27] systemd[1]: systemd-resolved.service: Watchdog timeout (limit 3min)!
[Apr10 20:30] systemd[1]: systemd-udevd.service: Watchdog timeout (limit 3min)!
[  +0.000303] systemd[1]: systemd-networkd.service: Watchdog timeout (limit 3min)!
[  +0.273843] systemd[1]: systemd-journald.service: Main process exited, code=dumped, status=6/ABRT
[  +0.000227] systemd[1]: systemd-journald.service: Failed with result 'watchdog'.
[  +2.654479] systemd-journald[5331]: File /run/log/journal/df2dd68d7c414abf8720249fec4ae410/system.journal corrupted or uncleanly shut down, renaming and replacing.
[Apr10 22:23] systemd-fstab-generator[35298]: Ignoring "noauto" for root device
[Apr10 22:24] overlayfs: upperdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.
[  +0.000008] overlayfs: workdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.
[  +0.011389] overlayfs: upperdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.
[  +0.000008] overlayfs: workdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.
[  +0.002281] overlayfs: upperdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.
[  +0.000008] overlayfs: workdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.
[  +0.019706] overlayfs: upperdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.
[  +0.000009] overlayfs: workdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.
[  +0.009374] overlayfs: upperdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.
[  +0.000007] overlayfs: workdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.
[ +20.883613] systemd-fstab-generator[37480]: Ignoring "noauto" for root device
[  +0.165767] systemd-fstab-generator[37489]: Ignoring "noauto" for root device
[Apr10 23:27] systemd-fstab-generator[347]: Ignoring "noauto" for root device
[ +20.010239] overlayfs: upperdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.
[  +0.000007] overlayfs: workdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.
[  +0.015418] overlayfs: upperdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.
[  +0.000006] overlayfs: workdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.
[  +0.014777] overlayfs: upperdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.
[  +0.000009] overlayfs: workdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.
[  +0.017450] overlayfs: upperdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.
[  +0.000010] overlayfs: workdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.
[  +0.122156] kauditd_printk_skb: 47 callbacks suppressed
[ +20.390654] systemd-fstab-generator[2453]: Ignoring "noauto" for root device
[  +0.156030] systemd-fstab-generator[2474]: Ignoring "noauto" for root device

==> kernel <==
 23:43:31 up 15:14,  0 users,  load average: 0.31, 0.59, 0.58
Linux minikube 4.15.0 #1 SMP Tue Mar 26 02:53:14 UTC 2019 x86_64 GNU/Linux

==> kube-addon-manager <==
error: no objects passed to apply
error: no objects passed to apply
error: no objects passed to apply
error: no objects passed to apply
error: no objects passed to apply
error: no objects passed to apply
error: no objects passed to apply
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-04-10T23:36:32+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-04-10T23:36:34+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-04-10T23:37:32+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-04-10T23:37:34+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-04-10T23:38:33+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-04-10T23:38:35+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-04-10T23:39:31+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-04-10T23:39:33+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-04-10T23:40:31+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-04-10T23:40:33+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-04-10T23:41:31+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-04-10T23:41:33+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-04-10T23:42:31+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-04-10T23:42:33+00:00 ==
INFO: Leader is minikube

==> kube-apiserver <==
I0410 23:43:07.634733       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0410 23:43:07.634929       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0410 23:43:08.635154       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0410 23:43:08.635295       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0410 23:43:09.635518       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0410 23:43:09.635780       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0410 23:43:10.635936       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0410 23:43:10.636274       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0410 23:43:11.636308       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0410 23:43:11.636657       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0410 23:43:12.636945       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0410 23:43:12.637191       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0410 23:43:13.637475       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0410 23:43:13.637598       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0410 23:43:14.637888       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0410 23:43:14.638263       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0410 23:43:15.639042       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0410 23:43:15.639226       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0410 23:43:16.639528       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0410 23:43:16.639727       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0410 23:43:17.639977       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0410 23:43:17.640269       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0410 23:43:18.640579       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0410 23:43:18.640789       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0410 23:43:19.641051       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0410 23:43:19.641372       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0410 23:43:20.641637       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0410 23:43:20.642085       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0410 23:43:21.642420       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0410 23:43:21.642602       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0410 23:43:22.642907       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0410 23:43:22.643263       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0410 23:43:23.643557       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0410 23:43:23.643710       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0410 23:43:24.644020       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0410 23:43:24.644678       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0410 23:43:25.644924       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0410 23:43:25.645213       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0410 23:43:26.645578       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0410 23:43:26.645970       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0410 23:43:27.646299       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0410 23:43:27.646704       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0410 23:43:28.646981       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0410 23:43:28.647444       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0410 23:43:29.647766       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0410 23:43:29.648247       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0410 23:43:30.648496       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0410 23:43:30.648952       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0410 23:43:31.649624       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0410 23:43:31.650019       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002

==> kube-scheduler <==
E0410 23:27:58.005559       1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=status.phase%3DFailed%!C(MISSING)status.phase%3DSucceeded&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:27:58.007984       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: Get https://localhost:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:27:58.013460       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: Get https://localhost:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:27:58.020665       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:27:58.021037       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: Get https://localhost:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:27:58.021449       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: Get https://localhost:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:27:59.005490       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: Get https://localhost:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:27:59.007107       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:27:59.008014       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: Get https://localhost:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:27:59.009921       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: Get https://localhost:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:27:59.014241       1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=status.phase%3DFailed%!C(MISSING)status.phase%3DSucceeded&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:27:59.017868       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: Get https://localhost:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:27:59.020553       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: Get https://localhost:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:27:59.023222       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:27:59.024426       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: Get https://localhost:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:27:59.025152       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: Get https://localhost:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:28:00.006455       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: Get https://localhost:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:28:00.007903       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:28:00.009593       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: Get https://localhost:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:28:00.011265       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: Get https://localhost:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:28:00.015141       1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=status.phase%3DFailed%!C(MISSING)status.phase%3DSucceeded&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:28:00.019662       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: Get https://localhost:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:28:00.021481       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: Get https://localhost:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:28:00.024194       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:28:00.025145       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: Get https://localhost:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:28:00.027224       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: Get https://localhost:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:28:01.008189       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: Get https://localhost:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:28:01.008591       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:28:01.012874       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: Get https://localhost:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:28:01.013246       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: Get https://localhost:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:28:01.017428       1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=status.phase%3DFailed%!C(MISSING)status.phase%3DSucceeded&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:28:01.021691       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: Get https://localhost:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:28:01.022702       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: Get https://localhost:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:28:01.026009       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:28:01.026781       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: Get https://localhost:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:28:01.028413       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: Get https://localhost:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:28:07.214882       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0410 23:28:07.215006       1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0410 23:28:07.215105       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0410 23:28:07.215172       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0410 23:28:07.215186       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0410 23:28:07.215213       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0410 23:28:07.215263       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0410 23:28:07.215270       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0410 23:28:07.215294       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0410 23:28:07.215417       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
I0410 23:28:09.091222       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
I0410 23:28:09.191477       1 controller_utils.go:1034] Caches are synced for scheduler controller
I0410 23:28:09.191581       1 leaderelection.go:217] attempting to acquire leader lease  kube-system/kube-scheduler...
I0410 23:28:26.000656       1 leaderelection.go:227] successfully acquired lease kube-system/kube-scheduler

==> kubelet <==
-- Logs begin at Wed 2019-04-10 08:29:44 UTC, end at Wed 2019-04-10 23:43:31 UTC. --
Apr 10 23:31:44 minikube kubelet[3928]: E0410 23:31:44.684813    3928 pod_workers.go:190] Error syncing pod a3b51c45-5b6d-11e9-a82e-00155db3a128 ("storage-provisioner_kube-system(a3b51c45-5b6d-11e9-a82e-00155db3a128)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a3b51c45-5b6d-11e9-a82e-00155db3a128)"
Apr 10 23:31:56 minikube kubelet[3928]: E0410 23:31:56.684643    3928 pod_workers.go:190] Error syncing pod a3b51c45-5b6d-11e9-a82e-00155db3a128 ("storage-provisioner_kube-system(a3b51c45-5b6d-11e9-a82e-00155db3a128)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a3b51c45-5b6d-11e9-a82e-00155db3a128)"
Apr 10 23:32:10 minikube kubelet[3928]: E0410 23:32:10.685698    3928 pod_workers.go:190] Error syncing pod a3b51c45-5b6d-11e9-a82e-00155db3a128 ("storage-provisioner_kube-system(a3b51c45-5b6d-11e9-a82e-00155db3a128)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a3b51c45-5b6d-11e9-a82e-00155db3a128)"
Apr 10 23:32:23 minikube kubelet[3928]: E0410 23:32:23.685095    3928 pod_workers.go:190] Error syncing pod a3b51c45-5b6d-11e9-a82e-00155db3a128 ("storage-provisioner_kube-system(a3b51c45-5b6d-11e9-a82e-00155db3a128)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a3b51c45-5b6d-11e9-a82e-00155db3a128)"
...
...
[repeats about 50 times]
...
...
Apr 10 23:43:01 minikube kubelet[3928]: E0410 23:43:01.684909    3928 pod_workers.go:190] Error syncing pod a3b51c45-5b6d-11e9-a82e-00155db3a128 ("storage-provisioner_kube-system(a3b51c45-5b6d-11e9-a82e-00155db3a128)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a3b51c45-5b6d-11e9-a82e-00155db3a128)"
Apr 10 23:43:12 minikube kubelet[3928]: E0410 23:43:12.684798    3928 pod_workers.go:190] Error syncing pod a3b51c45-5b6d-11e9-a82e-00155db3a128 ("storage-provisioner_kube-system(a3b51c45-5b6d-11e9-a82e-00155db3a128)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a3b51c45-5b6d-11e9-a82e-00155db3a128)"
Apr 10 23:43:27 minikube kubelet[3928]: E0410 23:43:27.684718    3928 pod_workers.go:190] Error syncing pod a3b51c45-5b6d-11e9-a82e-00155db3a128 ("storage-provisioner_kube-system(a3b51c45-5b6d-11e9-a82e-00155db3a128)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a3b51c45-5b6d-11e9-a82e-00155db3a128)"

==> storage-provisioner <==
F0410 23:39:28.513732       1 main.go:37] Error getting server version: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout

Am I doing something wrong?
Is it linked to a poor internet connection?
Is it an actual bug?

Thank you for your help!
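
If it's useful, I'm happy to run further checks, e.g. (just a sketch, assuming kubectl is pointed at the minikube context and Docker is the container runtime, as in the output above):

    kubectl -n kube-system get pods -l k8s-app=kube-proxy       # the label selector the failed wait was blocking on
    kubectl -n kube-system describe pods -l k8s-app=kube-proxy  # pod events should reveal image-pull or scheduling failures
    minikube ssh "docker images"                                 # check whether the kube-proxy image was actually pulled into the VM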

@cben commented Apr 11, 2019

I've got a similar error message on Linux, Fedora 29.
FWIW, I am short on disk space; one of the filesystems had just 1–2 GB free.
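
In case it's related, here is a quick way to double-check the space situation (just a sketch; '~/.minikube' is the default location of the VM disk, and 'minikube-v1.0.0 ssh' runs a command inside the VM):

    df -h ~/.minikube               # free space on the host filesystem holding the VM disk
    minikube-v1.0.0 ssh "df -h"     # free space inside the VM itself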

$ minikube-v1.0.0 start --kubernetes-version=v1.14.0 --cache-images

😄  minikube v1.0.0 on linux (amd64)
🤹  Downloading Kubernetes v1.14.0 images in the background ...
💡  Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
🔄  Restarting existing virtualbox VM for "minikube" ...
⌛  Waiting for SSH access ...
📶  "minikube" IP address is 192.168.99.100
🐳  Configuring Docker as the container runtime ...
🐳  Version of container runtime is 18.06.2-ce
⌛  Waiting for image downloads to complete ...
✨  Preparing Kubernetes environment ...
🚜  Pulling images required by Kubernetes v1.14.0 ...
🔄  Relaunching Kubernetes v1.14.0 using kubeadm ... 
⌛  Waiting for pods: apiserver proxy
💣  Error restarting cluster: wait: waiting for k8s-app=kube-proxy: timed out waiting for the condition

😿  Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
👉  https://github.com/kubernetes/minikube/issues/new
❌  Problems detected in "kube-addon-manager":
    error: noINFO: o =bj= ects passed to apply
    error: no objects pasINseFOd to apply
    error: no objects passINFO:ed == K tuberneteo apply

I have no idea what messed up the last few lines 🤷‍♂️
I didn't launch any background jobs in this terminal before running minikube.

minikube-v1.0.0 logs:

==> dmesg <==
[  +5.002159] hpet1: lost 318 rtc interrupts
[  +5.002109] hpet1: lost 318 rtc interrupts
[  +5.002594] hpet1: lost 318 rtc interrupts
[  +5.003387] hpet1: lost 318 rtc interrupts
[  +5.000590] hpet1: lost 318 rtc interrupts
[  +5.001466] hpet1: lost 318 rtc interrupts
[  +5.001240] hpet1: lost 318 rtc interrupts
[Apr11 14:32] hpet1: lost 318 rtc interrupts
[  +5.001650] hpet1: lost 318 rtc interrupts
[  +5.004258] hpet1: lost 319 rtc interrupts
[  +5.002929] hpet1: lost 318 rtc interrupts
[  +5.001080] hpet1: lost 318 rtc interrupts
[  +5.001693] hpet1: lost 319 rtc interrupts
[  +5.013100] hpet1: lost 320 rtc interrupts
[  +5.002012] hpet1: lost 319 rtc interrupts
[  +5.001450] hpet1: lost 318 rtc interrupts
[  +5.006160] hpet1: lost 320 rtc interrupts
[  +5.000942] hpet1: lost 318 rtc interrupts
[  +5.002592] hpet1: lost 318 rtc interrupts
[Apr11 14:33] hpet1: lost 319 rtc interrupts
[  +5.003233] hpet1: lost 318 rtc interrupts
[  +5.004139] hpet1: lost 318 rtc interrupts
[  +5.002324] hpet1: lost 318 rtc interrupts
[  +5.000837] hpet1: lost 319 rtc interrupts
[  +5.000389] hpet1: lost 318 rtc interrupts
[  +5.001655] hpet1: lost 318 rtc interrupts
[  +5.000494] hpet1: lost 318 rtc interrupts
[  +5.002464] hpet1: lost 318 rtc interrupts
[  +5.001182] hpet1: lost 318 rtc interrupts
[  +5.001723] hpet1: lost 318 rtc interrupts
[  +5.000553] hpet1: lost 318 rtc interrupts
[Apr11 14:34] hpet1: lost 318 rtc interrupts
[  +5.001112] hpet1: lost 318 rtc interrupts
[  +5.000990] hpet1: lost 318 rtc interrupts
[  +5.000428] hpet1: lost 318 rtc interrupts
[  +5.000921] hpet1: lost 318 rtc interrupts
[  +5.001456] hpet1: lost 318 rtc interrupts
[  +5.001186] hpet1: lost 319 rtc interrupts
[  +5.000723] hpet1: lost 318 rtc interrupts
[  +5.000703] hpet1: lost 318 rtc interrupts
[  +5.001292] hpet1: lost 318 rtc interrupts
[  +5.001275] hpet1: lost 318 rtc interrupts
[  +5.001967] hpet1: lost 319 rtc interrupts
[Apr11 14:35] hpet1: lost 318 rtc interrupts
[  +5.001627] hpet1: lost 318 rtc interrupts
[  +5.000767] hpet1: lost 318 rtc interrupts
[  +5.004776] hpet1: lost 318 rtc interrupts
[  +5.000779] hpet1: lost 318 rtc interrupts
[  +5.001560] hpet1: lost 318 rtc interrupts
[  +5.001096] hpet1: lost 319 rtc interrupts

==> kernel <==
 14:35:38 up  3:48,  0 users,  load average: 0.19, 0.31, 0.35
Linux minikube 4.15.0 #1 SMP Tue Mar 26 02:53:14 UTC 2019 x86_64 GNU/Linux

==> kube-addon-manager <==
eINFOrror: no obj: == Kubernetesects p addon asreconcile sceompd to apply
error: no objects passed to apply
error: no objects passed to apply
error: no objects passed to apply
error: no objects passed to apply
error: no objects passed to apply
error: no objects passed to apply
leted at 2019-04-11T14:28:13+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-04-11T14:29:11+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-04-11T14:29:13+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-04-11T14:30:11+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-04-11T14:30:13+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-04-11T14:31:11+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-04-11T14:31:13+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-04-11T14:32:12+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-04-11T14:32:13+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-04-11T14:33:12+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-04-11T14:33:14+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-04-11T14:34:12+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-04-11T14:34:13+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-04-11T14:35:12+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-04-11T14:35:13+00:00 ==

==> kube-apiserver <==
I0411 14:35:14.462344       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0411 14:35:14.462705       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0411 14:35:15.462901       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0411 14:35:15.463091       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0411 14:35:16.463284       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0411 14:35:16.463439       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0411 14:35:17.463738       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0411 14:35:17.463998       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0411 14:35:18.464325       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0411 14:35:18.464740       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0411 14:35:19.465557       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0411 14:35:19.465870       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0411 14:35:20.466310       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0411 14:35:20.466522       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0411 14:35:21.466993       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0411 14:35:21.467307       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0411 14:35:22.467709       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0411 14:35:22.467865       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0411 14:35:23.468147       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0411 14:35:23.468600       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0411 14:35:24.469014       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0411 14:35:24.469691       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0411 14:35:25.470702       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0411 14:35:25.471145       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0411 14:35:26.471536       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0411 14:35:26.471895       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0411 14:35:27.472143       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0411 14:35:27.472747       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0411 14:35:28.473468       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0411 14:35:28.474212       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0411 14:35:29.474456       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0411 14:35:29.474607       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0411 14:35:30.475144       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0411 14:35:30.475654       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0411 14:35:31.476032       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0411 14:35:31.476537       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0411 14:35:32.476897       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0411 14:35:32.477084       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0411 14:35:33.477589       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0411 14:35:33.477800       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0411 14:35:34.478025       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0411 14:35:34.478681       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0411 14:35:35.478822       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0411 14:35:35.479219       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0411 14:35:36.479432       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0411 14:35:36.479888       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0411 14:35:37.480103       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0411 14:35:37.480605       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0411 14:35:38.481359       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0411 14:35:38.481543       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002

==> kube-scheduler <==
I0411 10:48:05.926887       1 serving.go:319] Generated self-signed cert in-memory
W0411 10:48:06.758205       1 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0411 10:48:06.758332       1 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0411 10:48:06.758463       1 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I0411 10:48:06.841308       1 server.go:142] Version: v1.14.0
I0411 10:48:06.841773       1 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
W0411 10:48:06.871129       1 authorization.go:47] Authorization is disabled
W0411 10:48:06.871195       1 authentication.go:55] Authentication is disabled
I0411 10:48:06.871478       1 deprecated_insecure_serving.go:49] Serving healthz insecurely on [::]:10251
I0411 10:48:06.872148       1 secure_serving.go:116] Serving securely on 127.0.0.1:10259
E0411 10:48:10.999561       1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0411 10:48:10.999673       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
I0411 10:48:12.283259       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
I0411 10:48:12.384093       1 controller_utils.go:1034] Caches are synced for scheduler controller
I0411 10:48:12.384184       1 leaderelection.go:217] attempting to acquire leader lease  kube-system/kube-scheduler...
I0411 10:48:29.932837       1 leaderelection.go:227] successfully acquired lease kube-system/kube-scheduler

==> kubelet <==
-- Logs begin at Thu 2019-04-11 10:47:19 UTC, end at Thu 2019-04-11 14:35:38 UTC. --
Apr 11 14:23:34 minikube kubelet[3035]: E0411 14:23:34.977659    3035 pod_workers.go:190] Error syncing pod 972fe832-5b74-11e9-911c-080027c91b8e ("storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"
Apr 11 14:23:48 minikube kubelet[3035]: E0411 14:23:48.978091    3035 pod_workers.go:190] Error syncing pod 972fe832-5b74-11e9-911c-080027c91b8e ("storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"
Apr 11 14:23:59 minikube kubelet[3035]: E0411 14:23:59.977923    3035 pod_workers.go:190] Error syncing pod 972fe832-5b74-11e9-911c-080027c91b8e ("storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"
Apr 11 14:24:43 minikube kubelet[3035]: E0411 14:24:43.754424    3035 pod_workers.go:190] Error syncing pod 972fe832-5b74-11e9-911c-080027c91b8e ("storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"
Apr 11 14:24:56 minikube kubelet[3035]: E0411 14:24:56.977902    3035 pod_workers.go:190] Error syncing pod 972fe832-5b74-11e9-911c-080027c91b8e ("storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"
Apr 11 14:25:09 minikube kubelet[3035]: E0411 14:25:09.979023    3035 pod_workers.go:190] Error syncing pod 972fe832-5b74-11e9-911c-080027c91b8e ("storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"
Apr 11 14:25:21 minikube kubelet[3035]: E0411 14:25:21.979504    3035 pod_workers.go:190] Error syncing pod 972fe832-5b74-11e9-911c-080027c91b8e ("storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"
Apr 11 14:25:36 minikube kubelet[3035]: E0411 14:25:36.977399    3035 pod_workers.go:190] Error syncing pod 972fe832-5b74-11e9-911c-080027c91b8e ("storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"
Apr 11 14:25:49 minikube kubelet[3035]: E0411 14:25:49.978425    3035 pod_workers.go:190] Error syncing pod 972fe832-5b74-11e9-911c-080027c91b8e ("storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"
Apr 11 14:26:01 minikube kubelet[3035]: E0411 14:26:01.978891    3035 pod_workers.go:190] Error syncing pod 972fe832-5b74-11e9-911c-080027c91b8e ("storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"
Apr 11 14:26:15 minikube kubelet[3035]: E0411 14:26:15.977415    3035 pod_workers.go:190] Error syncing pod 972fe832-5b74-11e9-911c-080027c91b8e ("storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"
Apr 11 14:26:30 minikube kubelet[3035]: E0411 14:26:30.978882    3035 pod_workers.go:190] Error syncing pod 972fe832-5b74-11e9-911c-080027c91b8e ("storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"
Apr 11 14:26:41 minikube kubelet[3035]: E0411 14:26:41.981644    3035 pod_workers.go:190] Error syncing pod 972fe832-5b74-11e9-911c-080027c91b8e ("storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"
Apr 11 14:26:56 minikube kubelet[3035]: E0411 14:26:56.978706    3035 pod_workers.go:190] Error syncing pod 972fe832-5b74-11e9-911c-080027c91b8e ("storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"
Apr 11 14:27:11 minikube kubelet[3035]: E0411 14:27:11.980668    3035 pod_workers.go:190] Error syncing pod 972fe832-5b74-11e9-911c-080027c91b8e ("storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"
Apr 11 14:27:25 minikube kubelet[3035]: E0411 14:27:25.978537    3035 pod_workers.go:190] Error syncing pod 972fe832-5b74-11e9-911c-080027c91b8e ("storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"
Apr 11 14:27:39 minikube kubelet[3035]: E0411 14:27:39.979418    3035 pod_workers.go:190] Error syncing pod 972fe832-5b74-11e9-911c-080027c91b8e ("storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"
Apr 11 14:27:54 minikube kubelet[3035]: E0411 14:27:54.977382    3035 pod_workers.go:190] Error syncing pod 972fe832-5b74-11e9-911c-080027c91b8e ("storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"
Apr 11 14:28:06 minikube kubelet[3035]: E0411 14:28:06.977537    3035 pod_workers.go:190] Error syncing pod 972fe832-5b74-11e9-911c-080027c91b8e ("storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"
Apr 11 14:28:17 minikube kubelet[3035]: E0411 14:28:17.977944    3035 pod_workers.go:190] Error syncing pod 972fe832-5b74-11e9-911c-080027c91b8e ("storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"
Apr 11 14:28:30 minikube kubelet[3035]: E0411 14:28:30.978136    3035 pod_workers.go:190] Error syncing pod 972fe832-5b74-11e9-911c-080027c91b8e ("storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"
Apr 11 14:28:45 minikube kubelet[3035]: E0411 14:28:45.979368    3035 pod_workers.go:190] Error syncing pod 972fe832-5b74-11e9-911c-080027c91b8e ("storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"
Apr 11 14:28:56 minikube kubelet[3035]: E0411 14:28:56.977232    3035 pod_workers.go:190] Error syncing pod 972fe832-5b74-11e9-911c-080027c91b8e ("storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"
Apr 11 14:29:07 minikube kubelet[3035]: E0411 14:29:07.978439    3035 pod_workers.go:190] Error syncing pod 972fe832-5b74-11e9-911c-080027c91b8e ("storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"
Apr 11 14:29:20 minikube kubelet[3035]: E0411 14:29:20.978463    3035 pod_workers.go:190] Error syncing pod 972fe832-5b74-11e9-911c-080027c91b8e ("storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"
Apr 11 14:29:31 minikube kubelet[3035]: E0411 14:29:31.979625    3035 pod_workers.go:190] Error syncing pod 972fe832-5b74-11e9-911c-080027c91b8e ("storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"
Apr 11 14:30:14 minikube kubelet[3035]: E0411 14:30:14.860139    3035 pod_workers.go:190] Error syncing pod 972fe832-5b74-11e9-911c-080027c91b8e ("storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"
Apr 11 14:30:26 minikube kubelet[3035]: E0411 14:30:26.978014    3035 pod_workers.go:190] Error syncing pod 972fe832-5b74-11e9-911c-080027c91b8e ("storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"
Apr 11 14:30:38 minikube kubelet[3035]: E0411 14:30:38.978170    3035 pod_workers.go:190] Error syncing pod 972fe832-5b74-11e9-911c-080027c91b8e ("storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"
Apr 11 14:30:50 minikube kubelet[3035]: E0411 14:30:50.977830    3035 pod_workers.go:190] Error syncing pod 972fe832-5b74-11e9-911c-080027c91b8e ("storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"
Apr 11 14:31:04 minikube kubelet[3035]: E0411 14:31:04.977676    3035 pod_workers.go:190] Error syncing pod 972fe832-5b74-11e9-911c-080027c91b8e ("storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"
Apr 11 14:31:15 minikube kubelet[3035]: E0411 14:31:15.978105    3035 pod_workers.go:190] Error syncing pod 972fe832-5b74-11e9-911c-080027c91b8e ("storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"
Apr 11 14:31:27 minikube kubelet[3035]: E0411 14:31:27.979111    3035 pod_workers.go:190] Error syncing pod 972fe832-5b74-11e9-911c-080027c91b8e ("storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"
Apr 11 14:31:40 minikube kubelet[3035]: E0411 14:31:40.978355    3035 pod_workers.go:190] Error syncing pod 972fe832-5b74-11e9-911c-080027c91b8e ("storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"
Apr 11 14:31:51 minikube kubelet[3035]: E0411 14:31:51.980737    3035 pod_workers.go:190] Error syncing pod 972fe832-5b74-11e9-911c-080027c91b8e ("storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"
Apr 11 14:32:02 minikube kubelet[3035]: E0411 14:32:02.977274    3035 pod_workers.go:190] Error syncing pod 972fe832-5b74-11e9-911c-080027c91b8e ("storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"
Apr 11 14:32:15 minikube kubelet[3035]: E0411 14:32:15.982701    3035 pod_workers.go:190] Error syncing pod 972fe832-5b74-11e9-911c-080027c91b8e ("storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"
Apr 11 14:32:29 minikube kubelet[3035]: E0411 14:32:29.990506    3035 pod_workers.go:190] Error syncing pod 972fe832-5b74-11e9-911c-080027c91b8e ("storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"
Apr 11 14:32:42 minikube kubelet[3035]: E0411 14:32:42.977822    3035 pod_workers.go:190] Error syncing pod 972fe832-5b74-11e9-911c-080027c91b8e ("storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"
Apr 11 14:32:55 minikube kubelet[3035]: E0411 14:32:55.978996    3035 pod_workers.go:190] Error syncing pod 972fe832-5b74-11e9-911c-080027c91b8e ("storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"
Apr 11 14:33:07 minikube kubelet[3035]: E0411 14:33:07.979268    3035 pod_workers.go:190] Error syncing pod 972fe832-5b74-11e9-911c-080027c91b8e ("storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"
Apr 11 14:33:20 minikube kubelet[3035]: E0411 14:33:20.977884    3035 pod_workers.go:190] Error syncing pod 972fe832-5b74-11e9-911c-080027c91b8e ("storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"
Apr 11 14:33:33 minikube kubelet[3035]: E0411 14:33:33.978607    3035 pod_workers.go:190] Error syncing pod 972fe832-5b74-11e9-911c-080027c91b8e ("storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"
Apr 11 14:33:47 minikube kubelet[3035]: E0411 14:33:47.977518    3035 pod_workers.go:190] Error syncing pod 972fe832-5b74-11e9-911c-080027c91b8e ("storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"
Apr 11 14:33:58 minikube kubelet[3035]: E0411 14:33:58.977416    3035 pod_workers.go:190] Error syncing pod 972fe832-5b74-11e9-911c-080027c91b8e ("storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"
Apr 11 14:34:09 minikube kubelet[3035]: E0411 14:34:09.977428    3035 pod_workers.go:190] Error syncing pod 972fe832-5b74-11e9-911c-080027c91b8e ("storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"
Apr 11 14:34:24 minikube kubelet[3035]: E0411 14:34:24.991427    3035 pod_workers.go:190] Error syncing pod 972fe832-5b74-11e9-911c-080027c91b8e ("storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"
Apr 11 14:34:35 minikube kubelet[3035]: E0411 14:34:35.978743    3035 pod_workers.go:190] Error syncing pod 972fe832-5b74-11e9-911c-080027c91b8e ("storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"
Apr 11 14:34:50 minikube kubelet[3035]: E0411 14:34:50.977658    3035 pod_workers.go:190] Error syncing pod 972fe832-5b74-11e9-911c-080027c91b8e ("storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"
Apr 11 14:35:03 minikube kubelet[3035]: E0411 14:35:03.976846    3035 pod_workers.go:190] Error syncing pod 972fe832-5b74-11e9-911c-080027c91b8e ("storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(972fe832-5b74-11e9-911c-080027c91b8e)"

==> storage-provisioner <==

In the messed-up-output department, note the interleaved line "eINFOrror: no obj: == Kubernetesects p addon asreconcile sceompd to apply" 😄 — it looks like an "error: no objects passed to apply" line and an "INFO: == Kubernetes addon reconcile ..." line were written on top of each other.
Moreover, I've re-run minikube logs a few times and got different mangled text in the kube-addon-manager segment each time, so I'm guessing there are parallel writes to the same log stream...

@tstromberg tstromberg changed the title Windows+HyperV - error starting Minikube - "kube-addon-manager": error: no objects passed to apply hyperv: k8s-app=kube-proxy: timed out waiting for the condition Apr 11, 2019
@tstromberg tstromberg added co/hyperv HyperV related issues co/kube-proxy issues relating to kube-proxy in some way labels Apr 11, 2019
@tstromberg tstromberg changed the title hyperv: k8s-app=kube-proxy: timed out waiting for the condition hyperv re-use: k8s-app=kube-proxy: timed out waiting for the condition Apr 11, 2019
@tstromberg
Contributor

@guillaumeprevost - It appears that the kube-proxy pod didn't start up at all here, though I can't tell why from the logs. This may be a dupe of #4034, but it's difficult to say.
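
For anyone hitting this, one quick way to confirm whether the kube-proxy pod exists at all is to query it by its label (a sketch, assuming kubectl is already pointed at this minikube context):

kubectl -n kube-system get pods -l k8s-app=kube-proxy
kubectl -n kube-system describe pod -l k8s-app=kube-proxy

If get pods returns nothing, the pod was never created; if it shows a pod stuck in a non-Running state, describe usually says why.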

It's possible that #4014 may fix it, but in the meantime you can work around this by running minikube delete and recreating the cluster.
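
In practice that workaround is just the following (assuming you want to recreate the cluster with the same Hyper-V driver; adjust the flags to your setup):

minikube delete
minikube start --vm-driver=hyperv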

@tstromberg tstromberg added the triage/duplicate Indicates an issue is a duplicate of other open issue. label Apr 11, 2019
@tstromberg
Contributor

Marking as dupe of #3850
