Unable to disable Heapster after enabling it in minikube on Windows #4848

Closed
blueelvis opened this issue Jul 23, 2019 · 2 comments
Labels
area/addons, co/hyperv, help wanted, kind/bug, priority/backlog

Comments

@blueelvis
Contributor

Taking this from here - #4783 (comment)

I am also able to reproduce this issue. After enabling the heapster addon, I am unable to disable it. The logs are below.

The exact command to reproduce the issue:
.\minikube-windows-amd64.exe addons disable heapster --alsologtostderr --v=8
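
For completeness, the addon had been enabled beforehand with the matching enable command (this line is inferred from the description above, not copied from the session):

PS C:\utilities> .\minikube-windows-amd64.exe addons enable heapster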

The full output of the command that failed:


PS C:\utilities> .\minikube-windows-amd64.exe addons disable heapster --alsologtostderr --v=8
I0723 14:26:17.144698 6784 notify.go:124] Checking for updates...
[executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state
[stdout =====>] : Running

[stderr =====>] :
[executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state
[stdout =====>] : Running

[stderr =====>] :
[executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minikube ).networkadapters[0]).ipaddresses[0]
[stdout =====>] : 172.17.88.150

[stderr =====>] :
W0723 14:26:20.250139 6784 exit.go:99] disable failed: [disabling addon deploy/addons/heapster/influx-grafana-rc.yaml.tmpl: Process exited with status 1]
*
X disable failed: [disabling addon deploy/addons/heapster/influx-grafana-rc.yaml.tmpl: Process exited with status 1]
*

The output of the minikube logs command:

PS C:\utilities> .\minikube-windows-amd64.exe logs

  • ==> coredns <==
  • .:53
  • 2019-07-23T08:50:24.015Z [INFO] CoreDNS-1.3.1
  • 2019-07-23T08:50:24.015Z [INFO] linux/amd64, go1.11.4, 6b56a9c
  • CoreDNS-1.3.1
  • linux/amd64, go1.11.4, 6b56a9c
  • 2019-07-23T08:50:24.015Z [INFO] plugin/reload: Running configuration MD5 = 5d5369fbc12f985709b924e721217843
  • ==> dmesg <==
  • [Jul23 08:47] smpboot: 128 Processors exceeds NR_CPUS limit of 64
  • [ +0.000000] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
  • [ +0.014595] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
  •           * this clock source is slow. Consider trying other clock sources
    
  • [Jul23 08:48] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
  • [ +0.988204] systemd-fstab-generator[1225]: Ignoring "noauto" for root device
  • [ +0.017808] systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:35 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling.
  • [ +0.000006] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.)
  • [ +0.152777] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
  • [ +4.384551] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
  • [ +0.187116] vboxguest: loading out-of-tree module taints kernel.
  • [ +0.020091] vboxguest: PCI device not found, probably running on physical hardware.
  • [ +17.363738] systemd-fstab-generator[2532]: Ignoring "noauto" for root device
  • [ +19.199534] systemd-fstab-generator[3192]: Ignoring "noauto" for root device
  • [Jul23 08:49] kauditd_printk_skb: 104 callbacks suppressed
  • [ +11.010233] tee (3985): /proc/3656/oom_adj is deprecated, please use /proc/3656/oom_score_adj instead.
  • [ +0.283982] kauditd_printk_skb: 20 callbacks suppressed
  • [ +21.552523] kauditd_printk_skb: 38 callbacks suppressed
  • [Jul23 08:50] kauditd_printk_skb: 2 callbacks suppressed
  • [ +17.343420] NFSD: Unable to end grace period: -110
  • [ +11.464456] kauditd_printk_skb: 2 callbacks suppressed
  • [Jul23 08:51] kauditd_printk_skb: 20 callbacks suppressed
  • [Jul23 08:52] kauditd_printk_skb: 2 callbacks suppressed
  • ==> kernel <==
  • 08:59:05 up 11 min, 0 users, load average: 0.07, 0.25, 0.22
  • Linux minikube 4.15.0 #1 SMP Sun Jun 23 23:02:01 PDT 2019 x86_64 GNU/Linux
  • ==> kube-addon-manager <==
  • error: no objects passed to apply
  • error: no objects passed to apply
  • error: no objects passed to apply
  • service/kubernetes-dashboard unchanged
  • service/monitoring-grafana unchanged
  • replicationcontroller/heapster unchanged
  • service/heapster unchanged
  • replicationcontroller/influxdb-grafana unchanged
  • service/monitoring-influxdb unchanged
  • serviceaccount/storage-provisioner unchanged
  • INFO: == Kubernetes addon reconcile completed at 2019-07-23T08:55:33+00:00 ==
  • INFO: Leader is minikube
  • INFO: == Kubernetes addon ensure completed at 2019-07-23T08:56:30+00:00 ==
  • INFO: == Reconciling with deprecated label ==
  • INFO: == Reconciling with addon-manager label ==
  • deployment.apps/kubernetes-dashboard unchanged
  • service/kubernetes-dashboard unchanged
  • service/monitoring-grafana unchanged
  • replicationcontroller/heapster unchanged
  • service/heapster unchanged
  • replicationcontroller/influxdb-grafana unchanged
  • service/monitoring-influxdb unchanged
  • serviceaccount/storage-provisioner unchanged
  • INFO: == Kubernetes addon reconcile completed at 2019-07-23T08:56:32+00:00 ==
  • INFO: Leader is minikube
  • INFO: == Kubernetes addon ensure completed at 2019-07-23T08:57:30+00:00 ==
  • INFO: == Reconciling with deprecated label ==
  • INFO: == Reconciling with addon-manager label ==
  • deployment.apps/kubernetes-dashboard unchanged
  • service/kubernetes-dashboard unchanged
  • service/monitoring-grafana unchanged
  • replicationcontroller/heapster unchanged
  • service/heapster unchanged
  • replicationcontroller/influxdb-grafana unchanged
  • service/monitoring-influxdb unchanged
  • serviceaccount/storage-provisioner unchanged
  • INFO: == Kubernetes addon reconcile completed at 2019-07-23T08:57:32+00:00 ==
  • INFO: Leader is minikube
  • INFO: == Kubernetes addon ensure completed at 2019-07-23T08:58:30+00:00 ==
  • INFO: == Reconciling with deprecated label ==
  • INFO: == Reconciling with addon-manager label ==
  • deployment.apps/kubernetes-dashboard unchanged
  • service/kubernetes-dashboard unchanged
  • service/monitoring-grafana unchanged
  • replicationcontroller/heapster unchanged
  • service/heapster unchanged
  • replicationcontroller/influxdb-grafana unchanged
  • service/monitoring-influxdb unchanged
  • serviceaccount/storage-provisioner unchanged
  • INFO: == Kubernetes addon reconcile completed at 2019-07-23T08:58:32+00:00 ==
  • ==> kube-apiserver <==
  • E0723 08:49:27.142204 1 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
  • E0723 08:49:27.142267 1 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
  • E0723 08:49:27.142393 1 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
  • E0723 08:49:27.142458 1 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
  • E0723 08:49:27.142494 1 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
  • E0723 08:49:27.142514 1 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
  • I0723 08:49:27.142535 1 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
  • I0723 08:49:27.142544 1 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
  • I0723 08:49:27.144116 1 client.go:354] parsed scheme: ""
  • I0723 08:49:27.144133 1 client.go:354] scheme "" not registered, fallback to default scheme
  • I0723 08:49:27.144162 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 }]
  • I0723 08:49:27.144202 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 }]
  • I0723 08:49:27.152545 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 }]
  • I0723 08:49:27.153306 1 client.go:354] parsed scheme: ""
  • I0723 08:49:27.153444 1 client.go:354] scheme "" not registered, fallback to default scheme
  • I0723 08:49:27.153605 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 }]
  • I0723 08:49:27.153848 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 }]
  • I0723 08:49:27.162401 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 }]
  • I0723 08:49:29.117330 1 secure_serving.go:116] Serving securely on [::]:8443
  • I0723 08:49:29.117407 1 available_controller.go:374] Starting AvailableConditionController
  • I0723 08:49:29.117451 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
  • I0723 08:49:29.118979 1 crd_finalizer.go:255] Starting CRDFinalizer
  • I0723 08:49:29.119188 1 apiservice_controller.go:94] Starting APIServiceRegistrationController
  • I0723 08:49:29.119321 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
  • I0723 08:49:29.119438 1 controller.go:81] Starting OpenAPI AggregationController
  • I0723 08:49:29.120702 1 controller.go:83] Starting OpenAPI controller
  • I0723 08:49:29.120983 1 customresource_discovery_controller.go:208] Starting DiscoveryController
  • I0723 08:49:29.121185 1 naming_controller.go:288] Starting NamingConditionController
  • I0723 08:49:29.121435 1 establishing_controller.go:73] Starting EstablishingController
  • I0723 08:49:29.121549 1 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
  • I0723 08:49:29.124072 1 autoregister_controller.go:140] Starting autoregister controller
  • I0723 08:49:29.124107 1 cache.go:32] Waiting for caches to sync for autoregister controller
  • I0723 08:49:29.193279 1 crdregistration_controller.go:112] Starting crd-autoregister controller
  • I0723 08:49:29.193312 1 controller_utils.go:1029] Waiting for caches to sync for crd-autoregister controller
  • E0723 08:49:29.203530 1 controller.go:148] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/172.17.88.150, ResourceVersion: 0, AdditionalErrorMsg:
  • I0723 08:49:29.324626 1 cache.go:39] Caches are synced for autoregister controller
  • I0723 08:49:29.324901 1 cache.go:39] Caches are synced for AvailableConditionController controller
  • I0723 08:49:29.325137 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
  • I0723 08:49:29.372669 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
  • I0723 08:49:29.400617 1 controller_utils.go:1036] Caches are synced for crd-autoregister controller
  • I0723 08:49:30.114895 1 controller.go:107] OpenAPI AggregationController: Processing item
  • I0723 08:49:30.114958 1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
  • I0723 08:49:30.114985 1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
  • I0723 08:49:30.140862 1 storage_scheduling.go:128] all system priority classes are created successfully or already exist.
  • I0723 08:49:30.851892 1 controller.go:606] quota admission added evaluator for: serviceaccounts
  • I0723 08:49:30.891097 1 controller.go:606] quota admission added evaluator for: deployments.apps
  • I0723 08:49:30.987490 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
  • I0723 08:49:31.034282 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
  • I0723 08:49:31.072493 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
  • I0723 08:49:46.263787 1 controller.go:606] quota admission added evaluator for: endpoints
  • ==> kube-proxy <==
  • W0723 08:49:34.154762 1 server_others.go:249] Flag proxy-mode="" unknown, assuming iptables proxy
  • I0723 08:49:34.197583 1 server_others.go:143] Using iptables Proxier.
  • W0723 08:49:34.198550 1 proxier.go:321] clusterCIDR not specified, unable to distinguish between internal and external traffic
  • I0723 08:49:34.199452 1 server.go:534] Version: v1.15.0
  • I0723 08:49:34.212613 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
  • I0723 08:49:34.212711 1 conntrack.go:52] Setting nf_conntrack_max to 131072
  • I0723 08:49:34.213956 1 conntrack.go:83] Setting conntrack hashsize to 32768
  • I0723 08:49:34.221581 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
  • I0723 08:49:34.221835 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
  • I0723 08:49:34.222275 1 config.go:96] Starting endpoints config controller
  • I0723 08:49:34.222299 1 controller_utils.go:1029] Waiting for caches to sync for endpoints config controller
  • I0723 08:49:34.222329 1 config.go:187] Starting service config controller
  • I0723 08:49:34.222341 1 controller_utils.go:1029] Waiting for caches to sync for service config controller
  • I0723 08:49:34.322613 1 controller_utils.go:1036] Caches are synced for service config controller
  • I0723 08:49:34.322833 1 controller_utils.go:1036] Caches are synced for endpoints config controller
  • ==> kube-scheduler <==
  • I0723 08:49:23.852525 1 serving.go:319] Generated self-signed cert in-memory
  • W0723 08:49:24.399595 1 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
  • W0723 08:49:24.399654 1 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
  • W0723 08:49:24.399673 1 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
  • I0723 08:49:24.418448 1 server.go:142] Version: v1.15.0
  • I0723 08:49:24.419149 1 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
  • W0723 08:49:24.420987 1 authorization.go:47] Authorization is disabled
  • W0723 08:49:24.421029 1 authentication.go:55] Authentication is disabled
  • I0723 08:49:24.421049 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
  • I0723 08:49:24.430804 1 secure_serving.go:116] Serving securely on 127.0.0.1:10259
  • E0723 08:49:29.220132 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
  • E0723 08:49:29.258731 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
  • E0723 08:49:29.258857 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
  • E0723 08:49:29.258936 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
  • E0723 08:49:29.258986 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
  • E0723 08:49:29.259033 1 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
  • E0723 08:49:29.263949 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
  • E0723 08:49:29.264149 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
  • E0723 08:49:29.264188 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
  • E0723 08:49:29.264501 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
  • I0723 08:49:31.134449 1 leaderelection.go:235] attempting to acquire leader lease kube-system/kube-scheduler...
  • I0723 08:49:46.266901 1 leaderelection.go:245] successfully acquired lease kube-system/kube-scheduler
  • ==> kubelet <==
  • -- Logs begin at Tue 2019-07-23 08:48:17 UTC, end at Tue 2019-07-23 14:18:16 UTC. --
  • Jul 23 08:49:29 minikube kubelet[3311]: E0723 08:49:29.269532 3311 reflector.go:125] object-"kube-system"/"kube-proxy-token-knrpk": Failed to list *v1.Secret: secrets "kube-proxy-token-knrpk" is forbidden: User "system:node:minikube" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "minikube" and this object
  • Jul 23 08:49:29 minikube kubelet[3311]: E0723 08:49:29.269773 3311 reflector.go:125] object-"kube-system"/"coredns": Failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node "minikube" and this object
  • Jul 23 08:49:29 minikube kubelet[3311]: E0723 08:49:29.269905 3311 reflector.go:125] object-"kube-system"/"coredns-token-65pqz": Failed to list *v1.Secret: secrets "coredns-token-65pqz" is forbidden: User "system:node:minikube" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "minikube" and this object
  • Jul 23 08:49:29 minikube kubelet[3311]: E0723 08:49:29.270098 3311 reflector.go:125] object-"kube-system"/"kube-proxy": Failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node "minikube" and this object
  • Jul 23 08:49:29 minikube kubelet[3311]: E0723 08:49:29.294965 3311 reflector.go:125] object-"kube-system"/"default-token-j8lqn": Failed to list *v1.Secret: secrets "default-token-j8lqn" is forbidden: User "system:node:minikube" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "minikube" and this object
  • Jul 23 08:49:29 minikube kubelet[3311]: E0723 08:49:29.295749 3311 reflector.go:125] object-"kube-system"/"storage-provisioner-token-mp7pw": Failed to list *v1.Secret: secrets "storage-provisioner-token-mp7pw" is forbidden: User "system:node:minikube" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "minikube" and this object
  • Jul 23 08:49:29 minikube kubelet[3311]: I0723 08:49:29.368325 3311 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-65pqz" (UniqueName: "kubernetes.io/secret/34c9bed2-afad-4da9-b788-29e2c1f46554-coredns-token-65pqz") pod "coredns-5c98db65d4-6w2n2" (UID: "34c9bed2-afad-4da9-b788-29e2c1f46554")
  • Jul 23 08:49:29 minikube kubelet[3311]: I0723 08:49:29.368475 3311 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-mp7pw" (UniqueName: "kubernetes.io/secret/448796aa-fbf9-4427-81e2-6ad92919386d-storage-provisioner-token-mp7pw") pod "storage-provisioner" (UID: "448796aa-fbf9-4427-81e2-6ad92919386d")
  • Jul 23 08:49:29 minikube kubelet[3311]: I0723 08:49:29.368564 3311 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/a6a89899-58be-40d5-aeaa-af07a0873c2c-xtables-lock") pod "kube-proxy-62m5k" (UID: "a6a89899-58be-40d5-aeaa-af07a0873c2c")
  • Jul 23 08:49:29 minikube kubelet[3311]: I0723 08:49:29.368646 3311 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/a6a89899-58be-40d5-aeaa-af07a0873c2c-lib-modules") pod "kube-proxy-62m5k" (UID: "a6a89899-58be-40d5-aeaa-af07a0873c2c")
  • Jul 23 08:49:29 minikube kubelet[3311]: I0723 08:49:29.368733 3311 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-j8lqn" (UniqueName: "kubernetes.io/secret/a7ecb45b-ace6-4460-918a-0dcb92f8cbd2-default-token-j8lqn") pod "kubernetes-dashboard-7b8ddcb5d6-v8v8t" (UID: "a7ecb45b-ace6-4460-918a-0dcb92f8cbd2")
  • Jul 23 08:49:29 minikube kubelet[3311]: I0723 08:49:29.368803 3311 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/34c9bed2-afad-4da9-b788-29e2c1f46554-config-volume") pod "coredns-5c98db65d4-6w2n2" (UID: "34c9bed2-afad-4da9-b788-29e2c1f46554")
  • Jul 23 08:49:29 minikube kubelet[3311]: I0723 08:49:29.368873 3311 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-65pqz" (UniqueName: "kubernetes.io/secret/4961d461-7e4a-4988-9417-c14671dfa86e-coredns-token-65pqz") pod "coredns-5c98db65d4-5t8qw" (UID: "4961d461-7e4a-4988-9417-c14671dfa86e")
  • Jul 23 08:49:29 minikube kubelet[3311]: I0723 08:49:29.368943 3311 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/448796aa-fbf9-4427-81e2-6ad92919386d-tmp") pod "storage-provisioner" (UID: "448796aa-fbf9-4427-81e2-6ad92919386d")
  • Jul 23 08:49:29 minikube kubelet[3311]: I0723 08:49:29.369014 3311 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/a6a89899-58be-40d5-aeaa-af07a0873c2c-kube-proxy") pod "kube-proxy-62m5k" (UID: "a6a89899-58be-40d5-aeaa-af07a0873c2c")
  • Jul 23 08:49:29 minikube kubelet[3311]: I0723 08:49:29.369197 3311 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-knrpk" (UniqueName: "kubernetes.io/secret/a6a89899-58be-40d5-aeaa-af07a0873c2c-kube-proxy-token-knrpk") pod "kube-proxy-62m5k" (UID: "a6a89899-58be-40d5-aeaa-af07a0873c2c")
  • Jul 23 08:49:29 minikube kubelet[3311]: I0723 08:49:29.369404 3311 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4961d461-7e4a-4988-9417-c14671dfa86e-config-volume") pod "coredns-5c98db65d4-5t8qw" (UID: "4961d461-7e4a-4988-9417-c14671dfa86e")
  • Jul 23 08:49:29 minikube kubelet[3311]: I0723 08:49:29.469925 3311 reconciler.go:150] Reconciler: start to sync state
  • Jul 23 08:49:29 minikube kubelet[3311]: I0723 08:49:29.479042 3311 kubelet_node_status.go:114] Node minikube was previously registered
  • Jul 23 08:49:29 minikube kubelet[3311]: I0723 08:49:29.479371 3311 kubelet_node_status.go:75] Successfully registered node minikube
  • Jul 23 08:49:30 minikube kubelet[3311]: E0723 08:49:30.470637 3311 secret.go:198] Couldn't get secret kube-system/coredns-token-65pqz: couldn't propagate object cache: timed out waiting for the condition
  • Jul 23 08:49:30 minikube kubelet[3311]: E0723 08:49:30.471522 3311 nestedpendingoperations.go:270] Operation for ""kubernetes.io/secret/4961d461-7e4a-4988-9417-c14671dfa86e-coredns-token-65pqz" ("4961d461-7e4a-4988-9417-c14671dfa86e")" failed. No retries permitted until 2019-07-23 08:49:30.9714899 +0000 UTC m=+31.235778201 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume "coredns-token-65pqz" (UniqueName: "kubernetes.io/secret/4961d461-7e4a-4988-9417-c14671dfa86e-coredns-token-65pqz") pod "coredns-5c98db65d4-5t8qw" (UID: "4961d461-7e4a-4988-9417-c14671dfa86e") : couldn't propagate object cache: timed out waiting for the condition"
  • Jul 23 08:49:30 minikube kubelet[3311]: E0723 08:49:30.473188 3311 configmap.go:203] Couldn't get configMap kube-system/coredns: couldn't propagate object cache: timed out waiting for the condition
  • Jul 23 08:49:30 minikube kubelet[3311]: E0723 08:49:30.473463 3311 nestedpendingoperations.go:270] Operation for ""kubernetes.io/configmap/4961d461-7e4a-4988-9417-c14671dfa86e-config-volume" ("4961d461-7e4a-4988-9417-c14671dfa86e")" failed. No retries permitted until 2019-07-23 08:49:30.9734377 +0000 UTC m=+31.237726001 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4961d461-7e4a-4988-9417-c14671dfa86e-config-volume") pod "coredns-5c98db65d4-5t8qw" (UID: "4961d461-7e4a-4988-9417-c14671dfa86e") : couldn't propagate object cache: timed out waiting for the condition"
  • Jul 23 08:49:30 minikube kubelet[3311]: E0723 08:49:30.475158 3311 configmap.go:203] Couldn't get configMap kube-system/coredns: couldn't propagate object cache: timed out waiting for the condition
  • Jul 23 08:49:30 minikube kubelet[3311]: E0723 08:49:30.475518 3311 nestedpendingoperations.go:270] Operation for ""kubernetes.io/configmap/34c9bed2-afad-4da9-b788-29e2c1f46554-config-volume" ("34c9bed2-afad-4da9-b788-29e2c1f46554")" failed. No retries permitted until 2019-07-23 08:49:30.9753573 +0000 UTC m=+31.239645701 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/34c9bed2-afad-4da9-b788-29e2c1f46554-config-volume") pod "coredns-5c98db65d4-6w2n2" (UID: "34c9bed2-afad-4da9-b788-29e2c1f46554") : couldn't propagate object cache: timed out waiting for the condition"
  • Jul 23 08:49:30 minikube kubelet[3311]: E0723 08:49:30.477345 3311 secret.go:198] Couldn't get secret kube-system/default-token-j8lqn: couldn't propagate object cache: timed out waiting for the condition
  • Jul 23 08:49:30 minikube kubelet[3311]: E0723 08:49:30.477514 3311 nestedpendingoperations.go:270] Operation for ""kubernetes.io/secret/a7ecb45b-ace6-4460-918a-0dcb92f8cbd2-default-token-j8lqn" ("a7ecb45b-ace6-4460-918a-0dcb92f8cbd2")" failed. No retries permitted until 2019-07-23 08:49:30.9774905 +0000 UTC m=+31.241778801 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume "default-token-j8lqn" (UniqueName: "kubernetes.io/secret/a7ecb45b-ace6-4460-918a-0dcb92f8cbd2-default-token-j8lqn") pod "kubernetes-dashboard-7b8ddcb5d6-v8v8t" (UID: "a7ecb45b-ace6-4460-918a-0dcb92f8cbd2") : couldn't propagate object cache: timed out waiting for the condition"
  • Jul 23 08:49:30 minikube kubelet[3311]: E0723 08:49:30.477549 3311 configmap.go:203] Couldn't get configMap kube-system/kube-proxy: couldn't propagate object cache: timed out waiting for the condition
  • Jul 23 08:49:30 minikube kubelet[3311]: E0723 08:49:30.477587 3311 nestedpendingoperations.go:270] Operation for ""kubernetes.io/configmap/a6a89899-58be-40d5-aeaa-af07a0873c2c-kube-proxy" ("a6a89899-58be-40d5-aeaa-af07a0873c2c")" failed. No retries permitted until 2019-07-23 08:49:30.9775735 +0000 UTC m=+31.241861801 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/a6a89899-58be-40d5-aeaa-af07a0873c2c-kube-proxy") pod "kube-proxy-62m5k" (UID: "a6a89899-58be-40d5-aeaa-af07a0873c2c") : couldn't propagate object cache: timed out waiting for the condition"
  • Jul 23 08:49:30 minikube kubelet[3311]: E0723 08:49:30.477606 3311 secret.go:198] Couldn't get secret kube-system/kube-proxy-token-knrpk: couldn't propagate object cache: timed out waiting for the condition
  • Jul 23 08:49:30 minikube kubelet[3311]: E0723 08:49:30.477652 3311 nestedpendingoperations.go:270] Operation for ""kubernetes.io/secret/a6a89899-58be-40d5-aeaa-af07a0873c2c-kube-proxy-token-knrpk" ("a6a89899-58be-40d5-aeaa-af07a0873c2c")" failed. No retries permitted until 2019-07-23 08:49:30.9776264 +0000 UTC m=+31.241914701 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume "kube-proxy-token-knrpk" (UniqueName: "kubernetes.io/secret/a6a89899-58be-40d5-aeaa-af07a0873c2c-kube-proxy-token-knrpk") pod "kube-proxy-62m5k" (UID: "a6a89899-58be-40d5-aeaa-af07a0873c2c") : couldn't propagate object cache: timed out waiting for the condition"
  • Jul 23 08:49:30 minikube kubelet[3311]: E0723 08:49:30.477670 3311 secret.go:198] Couldn't get secret kube-system/storage-provisioner-token-mp7pw: couldn't propagate object cache: timed out waiting for the condition
  • Jul 23 08:49:30 minikube kubelet[3311]: E0723 08:49:30.477706 3311 nestedpendingoperations.go:270] Operation for ""kubernetes.io/secret/448796aa-fbf9-4427-81e2-6ad92919386d-storage-provisioner-token-mp7pw" ("448796aa-fbf9-4427-81e2-6ad92919386d")" failed. No retries permitted until 2019-07-23 08:49:30.9776927 +0000 UTC m=+31.241981001 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume "storage-provisioner-token-mp7pw" (UniqueName: "kubernetes.io/secret/448796aa-fbf9-4427-81e2-6ad92919386d-storage-provisioner-token-mp7pw") pod "storage-provisioner" (UID: "448796aa-fbf9-4427-81e2-6ad92919386d") : couldn't propagate object cache: timed out waiting for the condition"
  • Jul 23 08:49:30 minikube kubelet[3311]: E0723 08:49:30.477723 3311 secret.go:198] Couldn't get secret kube-system/coredns-token-65pqz: couldn't propagate object cache: timed out waiting for the condition
  • Jul 23 08:49:30 minikube kubelet[3311]: E0723 08:49:30.477758 3311 nestedpendingoperations.go:270] Operation for ""kubernetes.io/secret/34c9bed2-afad-4da9-b788-29e2c1f46554-coredns-token-65pqz" ("34c9bed2-afad-4da9-b788-29e2c1f46554")" failed. No retries permitted until 2019-07-23 08:49:30.9777431 +0000 UTC m=+31.242031401 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume "coredns-token-65pqz" (UniqueName: "kubernetes.io/secret/34c9bed2-afad-4da9-b788-29e2c1f46554-coredns-token-65pqz") pod "coredns-5c98db65d4-6w2n2" (UID: "34c9bed2-afad-4da9-b788-29e2c1f46554") : couldn't propagate object cache: timed out waiting for the condition"
  • Jul 23 08:49:32 minikube kubelet[3311]: W0723 08:49:32.652984 3311 pod_container_deletor.go:75] Container "7f04606bbc119c9ed56f118be7d07866500c18b90556e7512b34095a21dace8b" not found in pod's containers
  • Jul 23 08:49:32 minikube kubelet[3311]: W0723 08:49:32.679463 3311 pod_container_deletor.go:75] Container "f9fb41a05f37ad479c69fdc34bdd8d04bf722aee3054a12984ef10e2448ba38c" not found in pod's containers
  • Jul 23 08:50:04 minikube kubelet[3311]: E0723 08:50:04.245783 3311 pod_workers.go:190] Error syncing pod 448796aa-fbf9-4427-81e2-6ad92919386d ("storage-provisioner_kube-system(448796aa-fbf9-4427-81e2-6ad92919386d)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(448796aa-fbf9-4427-81e2-6ad92919386d)"
  • Jul 23 08:50:04 minikube kubelet[3311]: E0723 08:50:04.270754 3311 pod_workers.go:190] Error syncing pod 4961d461-7e4a-4988-9417-c14671dfa86e ("coredns-5c98db65d4-5t8qw_kube-system(4961d461-7e4a-4988-9417-c14671dfa86e)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 10s restarting failed container=coredns pod=coredns-5c98db65d4-5t8qw_kube-system(4961d461-7e4a-4988-9417-c14671dfa86e)"
  • Jul 23 08:50:04 minikube kubelet[3311]: E0723 08:50:04.293666 3311 pod_workers.go:190] Error syncing pod 34c9bed2-afad-4da9-b788-29e2c1f46554 ("coredns-5c98db65d4-6w2n2_kube-system(34c9bed2-afad-4da9-b788-29e2c1f46554)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 10s restarting failed container=coredns pod=coredns-5c98db65d4-6w2n2_kube-system(34c9bed2-afad-4da9-b788-29e2c1f46554)"
  • Jul 23 08:50:04 minikube kubelet[3311]: E0723 08:50:04.315375 3311 pod_workers.go:190] Error syncing pod a7ecb45b-ace6-4460-918a-0dcb92f8cbd2 ("kubernetes-dashboard-7b8ddcb5d6-v8v8t_kube-system(a7ecb45b-ace6-4460-918a-0dcb92f8cbd2)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 10s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-7b8ddcb5d6-v8v8t_kube-system(a7ecb45b-ace6-4460-918a-0dcb92f8cbd2)"
  • Jul 23 08:50:09 minikube kubelet[3311]: E0723 08:50:09.376314 3311 pod_workers.go:190] Error syncing pod 4961d461-7e4a-4988-9417-c14671dfa86e ("coredns-5c98db65d4-5t8qw_kube-system(4961d461-7e4a-4988-9417-c14671dfa86e)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 10s restarting failed container=coredns pod=coredns-5c98db65d4-5t8qw_kube-system(4961d461-7e4a-4988-9417-c14671dfa86e)"
  • Jul 23 08:50:11 minikube kubelet[3311]: E0723 08:50:11.778847 3311 pod_workers.go:190] Error syncing pod 34c9bed2-afad-4da9-b788-29e2c1f46554 ("coredns-5c98db65d4-6w2n2_kube-system(34c9bed2-afad-4da9-b788-29e2c1f46554)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 10s restarting failed container=coredns pod=coredns-5c98db65d4-6w2n2_kube-system(34c9bed2-afad-4da9-b788-29e2c1f46554)"
  • Jul 23 08:50:12 minikube kubelet[3311]: E0723 08:50:12.286522 3311 pod_workers.go:190] Error syncing pod a7ecb45b-ace6-4460-918a-0dcb92f8cbd2 ("kubernetes-dashboard-7b8ddcb5d6-v8v8t_kube-system(a7ecb45b-ace6-4460-918a-0dcb92f8cbd2)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 10s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-7b8ddcb5d6-v8v8t_kube-system(a7ecb45b-ace6-4460-918a-0dcb92f8cbd2)"
  • Jul 23 08:51:30 minikube kubelet[3311]: I0723 08:51:30.953597 3311 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "ssl-certs" (UniqueName: "kubernetes.io/host-path/bba991bd-7492-4c17-aeae-123b7b36c1f4-ssl-certs") pod "heapster-8snbt" (UID: "bba991bd-7492-4c17-aeae-123b7b36c1f4")
  • Jul 23 08:51:30 minikube kubelet[3311]: I0723 08:51:30.954408 3311 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-j8lqn" (UniqueName: "kubernetes.io/secret/bba991bd-7492-4c17-aeae-123b7b36c1f4-default-token-j8lqn") pod "heapster-8snbt" (UID: "bba991bd-7492-4c17-aeae-123b7b36c1f4")
  • Jul 23 08:51:31 minikube kubelet[3311]: I0723 08:51:31.157951 3311 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-j8lqn" (UniqueName: "kubernetes.io/secret/f1fd1146-b4c1-4a63-9072-58a7165c97fc-default-token-j8lqn") pod "influxdb-grafana-59qcg" (UID: "f1fd1146-b4c1-4a63-9072-58a7165c97fc")
  • Jul 23 08:51:31 minikube kubelet[3311]: I0723 08:51:31.158401 3311 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "grafana-storage" (UniqueName: "kubernetes.io/empty-dir/f1fd1146-b4c1-4a63-9072-58a7165c97fc-grafana-storage") pod "influxdb-grafana-59qcg" (UID: "f1fd1146-b4c1-4a63-9072-58a7165c97fc")
  • Jul 23 08:51:31 minikube kubelet[3311]: I0723 08:51:31.159071 3311 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "influxdb-storage" (UniqueName: "kubernetes.io/empty-dir/f1fd1146-b4c1-4a63-9072-58a7165c97fc-influxdb-storage") pod "influxdb-grafana-59qcg" (UID: "f1fd1146-b4c1-4a63-9072-58a7165c97fc")
  • ==> kubernetes-dashboard <==
  • 2019/07/23 08:50:23 Using in-cluster config to connect to apiserver
  • 2019/07/23 08:50:23 Using service account token for csrf signing
  • 2019/07/23 08:50:23 Successful initial request to the apiserver, version: v1.15.0
  • 2019/07/23 08:50:23 Generating JWE encryption key
  • 2019/07/23 08:50:23 New synchronizer has been registered: kubernetes-dashboard-key-holder-kube-system. Starting
  • 2019/07/23 08:50:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system
  • 2019/07/23 08:50:24 Initializing JWE encryption key from synchronized object
  • 2019/07/23 08:50:24 Creating in-cluster Heapster client
  • 2019/07/23 08:50:24 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
  • 2019/07/23 08:50:24 Serving insecurely on HTTP port: 9090
  • 2019/07/23 08:50:54 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
  • 2019/07/23 08:51:24 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
  • 2019/07/23 08:51:54 Metric client health check failed: an error on the server ("[-]healthz failed: could not get the latest data batch\nhealthz check failed") has prevented the request from succeeding (get services heapster). Retrying in 30 seconds.
  • 22019/07/23 08:50:23 Starting overwat01c9/h07/23 08:52:24 Successful request to heapster
  • ==> storage-provisioner <==

The operating system version:

PS C:\utilities> $PSVersionTable

Name                           Value
----                           -----
PSVersion                      5.1.17763.592
PSEdition                      Desktop
PSCompatibleVersions           {1.0, 2.0, 3.0, 4.0...}
BuildVersion                   10.0.17763.592 (Windows 10 Enterprise)
CLRVersion                     4.0.30319.42000
WSManStackVersion              3.0
PSRemotingProtocolVersion      2.3
SerializationVersion           1.1.0.1
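
In the meantime, a possible manual workaround (untested here, just a sketch) is to delete the heapster objects directly with kubectl. The resource names below are taken from the kube-addon-manager output above; note that kube-addon-manager may re-create them for as long as the heapster manifests remain under /etc/kubernetes/addons inside the VM:

PS C:\utilities> .\kubectl.exe -n kube-system delete rc/heapster rc/influxdb-grafana svc/heapster svc/monitoring-influxdb svc/monitoring-grafana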
@afbjorklund added the co/hyperv label on Jul 29, 2019
@tstromberg added the help wanted, priority/backlog and area/addons labels on Aug 8, 2019
@tstromberg added the kind/bug label on Sep 19, 2019
@tstromberg
Contributor

I believe this was fixed by 001c4fc - which is included in minikube v1.4. Please re-open if not.
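
For anyone still seeing this on an older release, a quick way to verify after upgrading (a sketch; exact output will vary by version):

PS C:\utilities> .\minikube-windows-amd64.exe version
PS C:\utilities> .\minikube-windows-amd64.exe addons disable heapster
PS C:\utilities> .\minikube-windows-amd64.exe addons list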

@blueelvis
Contributor Author

Yep, just checked and this is working now. Thanks!
