
hyperv re-use: k8s-app=kube-proxy: timed out waiting for the condition #4078

Closed
@guillaumeprevost

Description


Hi, I'm running into an issue while trying to start Kubernetes on Windows 10 with Hyper-V.

This is the first time I've used Kubernetes, so it's possible I'm making a novice mistake that seems obvious to experts, but since the console output suggests creating an issue, here I am!

I successfully installed Minikube and am trying to start it with the command: minikube start --vm-driver=hyperv

Here's the command line + console output:

> minikube start --vm-driver=hyperv

o   minikube v1.0.0 on windows (amd64)
$   Downloading Kubernetes v1.14.0 images in the background ...
i   Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
:   Re-using the currently running hyperv VM for "minikube" ...
:   Waiting for SSH access ...
-   "minikube" IP address is 192.168.43.99
-   Configuring Docker as the container runtime ...
-   Version of container runtime is 18.06.2-ce
:   Waiting for image downloads to complete ...
-   Preparing Kubernetes environment ...
-   Pulling images required by Kubernetes v1.14.0 ...
:   Relaunching Kubernetes v1.14.0 using kubeadm ...
:   Waiting for pods: apiserver proxy
!   Error restarting cluster: wait: waiting for k8s-app=kube-proxy: timed out waiting for the condition

*   Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
-   https://github.com/kubernetes/minikube/issues/new
X   Problems detected in "kube-addon-manager":
    - error: no objects passed to apply
    - error: no objects passed to apply
    - error: no objects passed to apply
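
Since the failure happens while re-using the existing Hyper-V VM, and the tip above mentions deleting the cluster, my guess (untested so far) is that wiping the stale VM and recreating it would look roughly like this:

> minikube delete
> minikube start --vm-driver=hyperv

I haven't run this yet; I'd like to understand what actually went wrong before deleting the VM.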

Below is the output of the "minikube logs" command:

==> dmesg <==
[  +0.001882] rcu_sched kthread starved for 159035 jiffies! g257965 c257964 f0x2 RCU_GP_WAIT_FQS(3) ->state=0x0 ->cpu=1
[  +0.000000] Call Trace:
[  +0.000000]  ? __schedule+0x245/0x730
[  +0.000000]  ? __switch_to_asm+0x30/0x60
[  +0.000000]  schedule+0x23/0x80
[  +0.000000]  schedule_timeout+0x15c/0x350
[  +0.000000]  ? __next_timer_interrupt+0xc0/0xc0
[  +0.000000]  rcu_gp_kthread+0x5f0/0xe30
[  +0.000000]  ? __schedule+0x24d/0x730
[  +0.000000]  ? force_qs_rnp+0x180/0x180
[  +0.000000]  kthread+0x10e/0x130
[  +0.000000]  ? kthread_create_worker_on_cpu+0x40/0x40
[  +0.000000]  ret_from_fork+0x35/0x40
[Apr10 11:18] systemd[1]: systemd-logind.service: Watchdog timeout (limit 3min)!
[  +0.000225] systemd[1]: systemd-udevd.service: Watchdog timeout (limit 3min)!
[  +0.001928] kauditd_printk_skb: 47 callbacks suppressed
[  +0.016151] systemd[1]: systemd-journald.service: Main process exited, code=dumped, status=6/ABRT
[  +0.000271] systemd[1]: systemd-journald.service: Failed with result 'watchdog'.
[  +2.316301] systemd-journald[61572]: File /run/log/journal/df2dd68d7c414abf8720249fec4ae410/system.journal corrupted or uncleanly shut down, renaming and replacing.
[Apr10 17:27] systemd[1]: systemd-resolved.service: Watchdog timeout (limit 3min)!
[Apr10 20:30] systemd[1]: systemd-udevd.service: Watchdog timeout (limit 3min)!
[  +0.000303] systemd[1]: systemd-networkd.service: Watchdog timeout (limit 3min)!
[  +0.273843] systemd[1]: systemd-journald.service: Main process exited, code=dumped, status=6/ABRT
[  +0.000227] systemd[1]: systemd-journald.service: Failed with result 'watchdog'.
[  +2.654479] systemd-journald[5331]: File /run/log/journal/df2dd68d7c414abf8720249fec4ae410/system.journal corrupted or uncleanly shut down, renaming and replacing.
[Apr10 22:23] systemd-fstab-generator[35298]: Ignoring "noauto" for root device
[Apr10 22:24] overlayfs: upperdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.
[  +0.000008] overlayfs: workdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.
[  +0.011389] overlayfs: upperdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.
[  +0.000008] overlayfs: workdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.
[  +0.002281] overlayfs: upperdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.
[  +0.000008] overlayfs: workdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.
[  +0.019706] overlayfs: upperdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.
[  +0.000009] overlayfs: workdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.
[  +0.009374] overlayfs: upperdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.
[  +0.000007] overlayfs: workdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.
[ +20.883613] systemd-fstab-generator[37480]: Ignoring "noauto" for root device
[  +0.165767] systemd-fstab-generator[37489]: Ignoring "noauto" for root device
[Apr10 23:27] systemd-fstab-generator[347]: Ignoring "noauto" for root device
[ +20.010239] overlayfs: upperdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.
[  +0.000007] overlayfs: workdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.
[  +0.015418] overlayfs: upperdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.
[  +0.000006] overlayfs: workdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.
[  +0.014777] overlayfs: upperdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.
[  +0.000009] overlayfs: workdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.
[  +0.017450] overlayfs: upperdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.
[  +0.000010] overlayfs: workdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.
[  +0.122156] kauditd_printk_skb: 47 callbacks suppressed
[ +20.390654] systemd-fstab-generator[2453]: Ignoring "noauto" for root device
[  +0.156030] systemd-fstab-generator[2474]: Ignoring "noauto" for root device

==> kernel <==
 23:43:31 up 15:14,  0 users,  load average: 0.31, 0.59, 0.58
Linux minikube 4.15.0 #1 SMP Tue Mar 26 02:53:14 UTC 2019 x86_64 GNU/Linux

==> kube-addon-manager <==
error: no objects passed to apply
error: no objects passed to apply
error: no objects passed to apply
error: no objects passed to apply
error: no objects passed to apply
error: no objects passed to apply
error: no objects passed to apply
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-04-10T23:36:32+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-04-10T23:36:34+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-04-10T23:37:32+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-04-10T23:37:34+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-04-10T23:38:33+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-04-10T23:38:35+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-04-10T23:39:31+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-04-10T23:39:33+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-04-10T23:40:31+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-04-10T23:40:33+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-04-10T23:41:31+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-04-10T23:41:33+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-04-10T23:42:31+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-04-10T23:42:33+00:00 ==
INFO: Leader is minikube

==> kube-apiserver <==
I0410 23:43:07.634733       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0410 23:43:07.634929       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0410 23:43:08.635154       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0410 23:43:08.635295       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0410 23:43:09.635518       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0410 23:43:09.635780       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0410 23:43:10.635936       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0410 23:43:10.636274       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0410 23:43:11.636308       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0410 23:43:11.636657       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0410 23:43:12.636945       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0410 23:43:12.637191       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0410 23:43:13.637475       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0410 23:43:13.637598       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0410 23:43:14.637888       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0410 23:43:14.638263       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0410 23:43:15.639042       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0410 23:43:15.639226       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0410 23:43:16.639528       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0410 23:43:16.639727       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0410 23:43:17.639977       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0410 23:43:17.640269       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0410 23:43:18.640579       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0410 23:43:18.640789       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0410 23:43:19.641051       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0410 23:43:19.641372       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0410 23:43:20.641637       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0410 23:43:20.642085       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0410 23:43:21.642420       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0410 23:43:21.642602       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0410 23:43:22.642907       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0410 23:43:22.643263       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0410 23:43:23.643557       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0410 23:43:23.643710       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0410 23:43:24.644020       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0410 23:43:24.644678       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0410 23:43:25.644924       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0410 23:43:25.645213       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0410 23:43:26.645578       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0410 23:43:26.645970       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0410 23:43:27.646299       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0410 23:43:27.646704       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0410 23:43:28.646981       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0410 23:43:28.647444       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0410 23:43:29.647766       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0410 23:43:29.648247       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0410 23:43:30.648496       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0410 23:43:30.648952       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0410 23:43:31.649624       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0410 23:43:31.650019       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002

==> kube-scheduler <==
E0410 23:27:58.005559       1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=status.phase%3DFailed%!C(MISSING)status.phase%3DSucceeded&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:27:58.007984       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: Get https://localhost:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:27:58.013460       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: Get https://localhost:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:27:58.020665       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:27:58.021037       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: Get https://localhost:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:27:58.021449       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: Get https://localhost:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:27:59.005490       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: Get https://localhost:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:27:59.007107       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:27:59.008014       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: Get https://localhost:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:27:59.009921       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: Get https://localhost:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:27:59.014241       1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=status.phase%3DFailed%!C(MISSING)status.phase%3DSucceeded&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:27:59.017868       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: Get https://localhost:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:27:59.020553       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: Get https://localhost:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:27:59.023222       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:27:59.024426       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: Get https://localhost:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:27:59.025152       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: Get https://localhost:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:28:00.006455       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: Get https://localhost:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:28:00.007903       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:28:00.009593       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: Get https://localhost:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:28:00.011265       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: Get https://localhost:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:28:00.015141       1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=status.phase%3DFailed%!C(MISSING)status.phase%3DSucceeded&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:28:00.019662       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: Get https://localhost:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:28:00.021481       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: Get https://localhost:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:28:00.024194       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:28:00.025145       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: Get https://localhost:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:28:00.027224       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: Get https://localhost:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:28:01.008189       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: Get https://localhost:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:28:01.008591       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:28:01.012874       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: Get https://localhost:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:28:01.013246       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: Get https://localhost:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:28:01.017428       1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=status.phase%3DFailed%!C(MISSING)status.phase%3DSucceeded&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:28:01.021691       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: Get https://localhost:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:28:01.022702       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: Get https://localhost:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:28:01.026009       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:28:01.026781       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: Get https://localhost:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:28:01.028413       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: Get https://localhost:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0410 23:28:07.214882       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0410 23:28:07.215006       1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0410 23:28:07.215105       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0410 23:28:07.215172       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0410 23:28:07.215186       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0410 23:28:07.215213       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0410 23:28:07.215263       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0410 23:28:07.215270       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0410 23:28:07.215294       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0410 23:28:07.215417       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
I0410 23:28:09.091222       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
I0410 23:28:09.191477       1 controller_utils.go:1034] Caches are synced for scheduler controller
I0410 23:28:09.191581       1 leaderelection.go:217] attempting to acquire leader lease  kube-system/kube-scheduler...
I0410 23:28:26.000656       1 leaderelection.go:227] successfully acquired lease kube-system/kube-scheduler

==> kubelet <==
-- Logs begin at Wed 2019-04-10 08:29:44 UTC, end at Wed 2019-04-10 23:43:31 UTC. --
Apr 10 23:31:44 minikube kubelet[3928]: E0410 23:31:44.684813    3928 pod_workers.go:190] Error syncing pod a3b51c45-5b6d-11e9-a82e-00155db3a128 ("storage-provisioner_kube-system(a3b51c45-5b6d-11e9-a82e-00155db3a128)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a3b51c45-5b6d-11e9-a82e-00155db3a128)"
Apr 10 23:31:56 minikube kubelet[3928]: E0410 23:31:56.684643    3928 pod_workers.go:190] Error syncing pod a3b51c45-5b6d-11e9-a82e-00155db3a128 ("storage-provisioner_kube-system(a3b51c45-5b6d-11e9-a82e-00155db3a128)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a3b51c45-5b6d-11e9-a82e-00155db3a128)"
Apr 10 23:32:10 minikube kubelet[3928]: E0410 23:32:10.685698    3928 pod_workers.go:190] Error syncing pod a3b51c45-5b6d-11e9-a82e-00155db3a128 ("storage-provisioner_kube-system(a3b51c45-5b6d-11e9-a82e-00155db3a128)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a3b51c45-5b6d-11e9-a82e-00155db3a128)"
Apr 10 23:32:23 minikube kubelet[3928]: E0410 23:32:23.685095    3928 pod_workers.go:190] Error syncing pod a3b51c45-5b6d-11e9-a82e-00155db3a128 ("storage-provisioner_kube-system(a3b51c45-5b6d-11e9-a82e-00155db3a128)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a3b51c45-5b6d-11e9-a82e-00155db3a128)"
...
...
[repeats about 50 times]
...
...
Apr 10 23:43:01 minikube kubelet[3928]: E0410 23:43:01.684909    3928 pod_workers.go:190] Error syncing pod a3b51c45-5b6d-11e9-a82e-00155db3a128 ("storage-provisioner_kube-system(a3b51c45-5b6d-11e9-a82e-00155db3a128)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a3b51c45-5b6d-11e9-a82e-00155db3a128)"
Apr 10 23:43:12 minikube kubelet[3928]: E0410 23:43:12.684798    3928 pod_workers.go:190] Error syncing pod a3b51c45-5b6d-11e9-a82e-00155db3a128 ("storage-provisioner_kube-system(a3b51c45-5b6d-11e9-a82e-00155db3a128)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a3b51c45-5b6d-11e9-a82e-00155db3a128)"
Apr 10 23:43:27 minikube kubelet[3928]: E0410 23:43:27.684718    3928 pod_workers.go:190] Error syncing pod a3b51c45-5b6d-11e9-a82e-00155db3a128 ("storage-provisioner_kube-system(a3b51c45-5b6d-11e9-a82e-00155db3a128)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a3b51c45-5b6d-11e9-a82e-00155db3a128)"

==> storage-provisioner <==
F0410 23:39:28.513732       1 main.go:37] Error getting server version: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout
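
The startup error is about waiting for pods with the k8s-app=kube-proxy label, so (assuming kubectl is pointed at this cluster) I believe something like the following would show their state directly; I haven't captured that output here, but I can add it if useful:

> kubectl -n kube-system get pods -l k8s-app=kube-proxy
> kubectl -n kube-system describe pods -l k8s-app=kube-proxy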

Am I doing something wrong?
Is it linked to a poor internet connection?
Is it an actual bug?

Thank you for your help!

Labels

co/hyperv (Hyper-V related issues), co/kube-proxy (issues relating to kube-proxy in some way), triage/duplicate (indicates this issue is a duplicate of another open issue)
