
Can't start minikube dashboard #9473

Closed
tokomeno opened this issue Oct 15, 2020 · 11 comments
Labels
kind/support Categorizes issue or PR as a support question. triage/needs-information Indicates an issue needs more information in order to work on it.

Comments

@tokomeno

Steps to reproduce the issue:

  1. minikube start
  2. minikube dashboard

Full output of failed command:

~$ minikube start
πŸ˜„ minikube v1.13.1 on Ubuntu 20.04
✨ Using the docker driver based on user configuration
πŸ‘ Starting control plane node minikube in cluster minikube
πŸ”₯ Creating docker container (CPUs=2, Memory=2200MB) ...
🐳 Preparing Kubernetes v1.19.2 on Docker 19.03.8 ...
πŸ”Ž Verifying Kubernetes components...
🌟 Enabled addons: default-storageclass, storage-provisioner
πŸ„ Done! kubectl is now configured to use "minikube" by default

~$ minikube dashboard --alsologtostderr
I1015 16:49:55.289882 2866294 mustload.go:66] Loading cluster: minikube
I1015 16:49:55.290724 2866294 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}}
I1015 16:49:55.348400 2866294 host.go:65] Checking if "minikube" exists ...
I1015 16:49:55.348710 2866294 api_server.go:146] Checking apiserver status ...
I1015 16:49:55.348775 2866294 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.minikube.
I1015 16:49:55.348839 2866294 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1015 16:49:55.404381 2866294 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32771 SSHKeyPath:/home/tornike/.minikube/machines/minikube/id_rsa Username:docker}
I1015 16:49:55.637629 2866294 ssh_runner.go:148] Run: sudo egrep ^[0-9]+:freezer: /proc/1597/cgroup
I1015 16:49:55.680526 2866294 api_server.go:162] apiserver freezer: "11:freezer:/docker/b4f192fddfe30d3f5ee79c7d868a1068c68ce2be1735cd3a6b14239bb974d0c8/kubepods/burstable/pod6420297fbd837372c0c69d6daa60b210/07c544ae288c834b821cba907be82d699bfed277d3ce8d5eee845eb66579d82c"
I1015 16:49:55.680731 2866294 ssh_runner.go:148] Run: sudo cat /sys/fs/cgroup/freezer/docker/b4f192fddfe30d3f5ee79c7d868a1068c68ce2be1735cd3a6b14239bb974d0c8/kubepods/burstable/pod6420297fbd837372c0c69d6daa60b210/07c544ae288c834b821cba907be82d699bfed277d3ce8d5eee845eb66579d82c/freezer.state
I1015 16:49:55.693913 2866294 api_server.go:184] freezer state: "THAWED"
I1015 16:49:55.693982 2866294 api_server.go:221] Checking apiserver healthz at https://172.17.0.3:8443/healthz ...
I1015 16:49:55.702640 2866294 api_server.go:241] https://172.17.0.3:8443/healthz returned 200:
ok
W1015 16:49:55.703107 2866294 out.go:145] πŸ€” Verifying dashboard health ...
πŸ€” Verifying dashboard health ...
I1015 16:49:55.720916 2866294 service.go:213] Found service: &Service{ObjectMeta:{kubernetes-dashboard kubernetes-dashboard /api/v1/namespaces/kubernetes-dashboard/services/kubernetes-dashboard 86c9590d-8bf2-4c46-b813-2a34b45181fd 501 0 2020-10-15 16:47:19 +0400 +04 map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2020-10-15 16:47:19 +0400 +04 FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 107 117 98 101 99 116 108 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 108 97 115 116 45 97 112 112 108 105 101 100 45 99 111 110 102 105 103 117 114 97 116 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 97 100 100 111 110 109 97 110 97 103 101 114 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 111 100 101 34 58 123 125 44 34 102 58 107 56 115 45 97 112 112 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 105 110 105 107 117 98 101 45 97 100 100 111 110 115 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 111 114 116 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 112 111 114 116 92 34 58 56 48 44 92 34 112 114 111 116 111 99 111 108 92 34 58 92 34 84 67 80 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 112 111 114 116 34 58 123 125 44 34 102 58 112 114 111 116 111 99 111 108 34 58 123 125 44 34 102 58 116 97 114 103 101 116 80 111 114 116 34 58 123 125 125 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 46 34 58 123 125 44 34 102 58 107 56 115 45 97 112 112 34 58 123 125 125 44 34 102 58 115 101 115 115 105 111 110 65 102 102 105 110 105 116 121 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125],}}]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,},},Selector:map[string]string{k8s-app: 
kubernetes-dashboard,},ClusterIP:10.98.126.49,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamily:nil,TopologyKeys:[],},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},},}
W1015 16:49:55.721205 2866294 out.go:145] πŸš€ Launching proxy ...
πŸš€ Launching proxy ...
I1015 16:49:55.721346 2866294 dashboard.go:146] Executing: /snap/bin/kubectl [/snap/bin/kubectl --context minikube proxy --port=0]
I1015 16:49:55.721657 2866294 dashboard.go:151] Waiting for kubectl to output host:port ...
I1015 16:49:55.871044 2866294 out.go:109]

W1015 16:49:55.871207 2866294 out.go:145] ❌ Exiting due to HOST_KUBECTL_PROXY: readByteWithTimeout: EOF
❌ Exiting due to HOST_KUBECTL_PROXY: readByteWithTimeout: EOF
W1015 16:49:55.871274 2866294 out.go:145]

W1015 16:49:55.871352 2866294 out.go:145] 😿 If the above advice does not help, please let us know:
😿 If the above advice does not help, please let us know:
W1015 16:49:55.871407 2866294 out.go:145] πŸ‘‰ https://github.com/kubernetes/minikube/issues/new/choose
πŸ‘‰ https://github.com/kubernetes/minikube/issues/new/choose
I1015 16:49:55.882998 2866294 out.go:109]

Optional: Full output of minikube logs command:
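The failing step in the log above is minikube shelling out to /snap/bin/kubectl for the proxy. A snap-confined or third-party kubectl (microk8s, k3s) resolving first on PATH can exit immediately, which minikube reports as readByteWithTimeout: EOF. A minimal diagnostic sketch (not from the original report) to see which kubectl the shell resolves:

```shell
# Print the kubectl binary the shell would run; if it lives under /snap/bin
# or belongs to another Kubernetes distribution, that is a likely source of
# the EOF when minikube launches "kubectl proxy".
command -v kubectl || echo "no kubectl on PATH"
```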

@medyagh
Member

medyagh commented Oct 15, 2020

@tokomeno does deleting and starting it again help?
btw we have a new version of minikube with a static IP for the docker driver

mind trying this?
minikube delete
minikube start

@medyagh
Member

medyagh commented Oct 15, 2020

/triage needs-information
/kind support

@k8s-ci-robot k8s-ci-robot added triage/needs-information Indicates an issue needs more information in order to work on it. kind/support Categorizes issue or PR as a support question. labels Oct 15, 2020
@KrzysztofKowalczyk

KrzysztofKowalczyk commented Oct 16, 2020

I seem to have a similar issue on Ubuntu 20.04.1 LTS with the docker driver.

Exiting due to HOST_KUBECTL_PROXY: readByteWithTimeout: EOF

$ minikube version
minikube version: v1.14.0
commit: b09ee50ec047410326a85435f4d99026f9c4f5c4

deleting and starting again does not help

minikube logs
logs.txt

@pramchan

I discovered that there is a kubectl that can be called from minikube, and this is what it lists:
minikube kubectl get ns
NAME STATUS AGE
default Active 41h
kube-node-lease Active 41h
kube-public Active 41h
kube-system Active 41h
kubernetes-dashboard Active 41h

and I can get:
minikube kubectl get nodes
NAME STATUS ROLES AGE VERSION
minikube Ready master 41h v1.19.2
minikube-m02 NotReady 33h v1.19.2
minikube-m03 NotReady 23h v1.19.2

minikube start

  • minikube v1.14.0 on Ubuntu 18.04
  • Using the kvm2 driver based on existing profile
  • Starting control plane node minikube in cluster minikube
  • Updating the running kvm2 "minikube" VM ...
  • Preparing Kubernetes v1.19.2 on Docker 19.03.12 ...
  • Verifying Kubernetes components...
  • Enabled addons: storage-provisioner, default-storageclass, dashboard
  • Starting node minikube-m02 in cluster minikube
  • Updating the running kvm2 "minikube-m02" VM ...
  • Found network options:
    • NO_PROXY=192.168.39.11
  • Preparing Kubernetes v1.19.2 on Docker 19.03.12 ...
    • env NO_PROXY=192.168.39.11
  • Verifying Kubernetes components...
  • Starting node minikube-m03 in cluster minikube
  • Updating the running kvm2 "minikube-m03" VM ...
  • Found network options:
    • NO_PROXY=192.168.39.11,192.168.39.233
  • Preparing Kubernetes v1.19.2 on Docker 19.03.12 ...
    • env NO_PROXY=192.168.39.11
    • env NO_PROXY=192.168.39.11,192.168.39.233
  • Verifying Kubernetes components...
  • Done! kubectl is now configured to use "minikube" by default
    pramchan@ubuntu131:~$ minikube status
    minikube
    type: Control Plane
    host: Running
    kubelet: Running
    apiserver: Running
    kubeconfig: Configured

minikube-m02
type: Worker
host: Running
kubelet: Running

minikube-m03
type: Worker
host: Running
kubelet: Running

Now that the worker nodes are running, let me check whether the type Worker shows up in ROLES through kubectl get nodes. It does not for the workers, even though they are Ready:

minikube kubectl get nodes
NAME STATUS ROLES AGE VERSION
minikube Ready master 42h v1.19.2
minikube-m02 Ready 34h v1.19.2
minikube-m03 Ready 24h v1.19.2
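The blank ROLES column for the workers is expected: kubectl fills that column from node-role.kubernetes.io/<role> labels, and minikube does not set one on worker nodes. A hedged sketch of how the role could be made visible (the label key is the upstream kubectl convention, not something minikube applies; running it needs a live cluster, so the command is only echoed here):

```shell
# Workers carry no node-role.kubernetes.io/* label by default, so kubectl
# shows an empty ROLES cell even when the node is Ready. Applying a label
# with this key would make "worker" appear in the ROLES column.
role_label="node-role.kubernetes.io/worker"
echo "kubectl label node minikube-m02 ${role_label}="
```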

minikube service list
|----------------------|---------------------------|--------------|-----|
| NAMESPACE | NAME | TARGET PORT | URL |
|----------------------|---------------------------|--------------|-----|
| default | kubernetes | No node port |
| kube-system | kube-dns | No node port |
| kubernetes-dashboard | dashboard-metrics-scraper | No node port |
| kubernetes-dashboard | kubernetes-dashboard | No node port |
|----------------------|---------------------------|--------------|-----|

Now let's see how it behaves for node adds and deletes out of sequence, to show that either deletion or the sequencing of node names creates issues when adding and deleting nodes.

$ minikube kubectl get nodes
NAME STATUS ROLES AGE VERSION
minikube Ready master 42h v1.19.2
minikube-m02 Ready 34h v1.19.2
minikube-m03 NotReady 25h v1.19.2
$ minikube node delete minikube-m03

  • Deleting node minikube-m03 from cluster minikube
  • Deleting "minikube-m03" in kvm2 ...
  • Node minikube-m03 was successfully deleted.
    minikube kubectl get nodes
    NAME STATUS ROLES AGE VERSION
    minikube Ready master 42h v1.19.2
    minikube-m02 Ready 34h v1.19.2
    $ minikube node minikube-m03 start

X Exiting due to MK_USAGE: Usage: minikube node [add|start|stop|delete|list]

pramchan@ubuntu131:~$ minikube kubectl get nodes
NAME STATUS ROLES AGE VERSION
minikube Ready master 42h v1.19.2
minikube-m02 Ready 34h v1.19.2
$ minikube node delete minikube-m02

  • Deleting node minikube-m02 from cluster minikube
  • Deleting "minikube-m02" in kvm2 ...
  • Node minikube-m02 was successfully deleted.
    $ minikube kubectl get nodes
    NAME STATUS ROLES AGE VERSION
    minikube Ready master 42h v1.19.2
    $ minikube node add
  • Adding node m02 to cluster minikube
    ! Multi-node clusters are currently experimental and might exhibit unintended behavior.
  • To track progress on multi-node clusters, see Multi-node Enhancements #7538.
    E1016 01:32:58.568287 17994 register.go:123] unexpected first step: ""
  • Starting node minikube-m02 in cluster minikube
    E1016 01:32:58.792748 17994 register.go:123] unexpected first step: ""
  • Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
    E1016 01:33:11.014908 17994 register.go:123] unexpected first step: ""
  • Preparing Kubernetes v1.19.2 on Docker 19.03.12 ...
    E1016 01:33:20.117815 17994 register.go:123] unexpected first step: ""
  • Verifying Kubernetes components...
  • Successfully added m02 to minikube!
    pramchan@ubuntu131:$ minikube kubectl get nodes
    NAME STATUS ROLES AGE VERSION
    minikube Ready master 42h v1.19.2
    minikube-m02 Ready 14s v1.19.2
    pramchan@ubuntu131:
    $ minikube node add
  • Adding node m03 to cluster minikube
    E1016 01:33:41.883647 18263 register.go:123] unexpected first step: ""
  • Starting node minikube-m03 in cluster minikube
    E1016 01:33:42.090567 18263 register.go:123] unexpected first step: ""
  • Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
    E1016 01:33:53.708584 18263 register.go:123] unexpected first step: ""
  • Preparing Kubernetes v1.19.2 on Docker 19.03.12 ...
    E1016 01:34:02.696267 18263 register.go:123] unexpected first step: ""
  • Verifying Kubernetes components...
  • Successfully added m03 to minikube!
    pramchan@ubuntu131:~$ minikube kubectl get nodes
    NAME STATUS ROLES AGE VERSION
    minikube Ready master 42h v1.19.2
    minikube-m02 Ready 47s v1.19.2
    minikube-m03 NotReady 4s v1.19.2
    $ minikube node delete minikube-m02
  • Deleting node minikube-m02 from cluster minikube
  • Deleting "minikube-m02" in kvm2 ...
  • Node minikube-m02 was successfully deleted.
    $ minikube kubectl get nodes
    NAME STATUS ROLES AGE VERSION
    minikube Ready master 42h v1.19.2
    minikube-m03 Ready 34s v1.19.2
    $ minikube node add
  • Adding node m03 to cluster minikube
    E1016 01:34:51.521083 18794 register.go:123] unexpected first step: ""
  • Starting node minikube-m03 in cluster minikube
  • Updating the running kvm2 "minikube-m03" VM ...
    E1016 01:34:53.099524 18794 register.go:123] unexpected first step: ""
  • Preparing Kubernetes v1.19.2 on Docker 19.03.12 ...
    E1016 01:34:54.696000 18794 register.go:123] unexpected first step: ""
  • Verifying Kubernetes components...
  • Successfully added m03 to minikube!
    $ minikube kubectl get nodes
    NAME STATUS ROLES AGE VERSION
    minikube Ready master 42h v1.19.2
    minikube-m03 Ready 61s v1.19.2
    $ minikube node add
  • Adding node m03 to cluster minikube
    E1016 01:35:15.996448 18935 register.go:123] unexpected first step: ""
  • Starting node minikube-m03 in cluster minikube
  • Updating the running kvm2 "minikube-m03" VM ...
    E1016 01:35:17.544056 18935 register.go:123] unexpected first step: ""
  • Preparing Kubernetes v1.19.2 on Docker 19.03.12 ...
    E1016 01:35:18.499857 18935 register.go:123] unexpected first step: ""
  • Verifying Kubernetes components...
  • Successfully added m03 to minikube!
    $ minikube kubectl get nodes
    NAME STATUS ROLES AGE VERSION
    minikube Ready master 42h v1.19.2
    minikube-m03 Ready 82s v1.19.2

Since it is pointing to #7538:
@medyagh - can you figure out which of this belongs to #7538, for example:

  1. missing ROLES on nodes m02 & m03
  2. Deleting m03 and then trying to add m02 does not add m02 but continues adding m03 (some sequence-number concatenation logic seems odd?)
  3. If the delete were correct, why does a node add not succeed for m02? Something is not cleaned up on deleting m02.

Based on these three you could add details under #7538, and then we can declare this a duplicate and work on #7538 to fix it, or otherwise fix these three aspects here and close this as fixed as part of #7538 - your call, but this is definitely a code logic issue. If you are a coder, go ahead, assign it to yourself and fix it, or document your analysis in #7538. Thx pramchan

@tokomeno
Author

tokomeno commented Oct 18, 2020

@tokomeno does deleting and starting it again help?
btw we have a new version of minikube with a static IP for the docker driver

mind trying this?
minikube delete
minikube start

I have tried it many times but it did not help.
I have updated minikube to the latest version but still get the same error, and I can't find any similar issue... still trying to fix it but nothing comes up :(

$ minikube start
πŸ˜„ minikube v1.14.0 on Ubuntu 20.04
✨ Using the docker driver based on existing profile
πŸ‘ Starting control plane node minikube in cluster minikube
πŸ”„ Restarting existing docker container for "minikube" ...
🐳 Preparing Kubernetes v1.19.2 on Docker 19.03.8 ...
πŸ”Ž Verifying Kubernetes components...
🌟 Enabled addons: default-storageclass, storage-provisioner, dashboard
πŸ„ Done! kubectl is now configured to use "minikube" by default

$ minikube dashboard --alsologtostderr
I1018 19:43:29.425876 871455 mustload.go:66] Loading cluster: minikube
I1018 19:43:29.427389 871455 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}}
I1018 19:43:29.488610 871455 host.go:65] Checking if "minikube" exists ...
I1018 19:43:29.488979 871455 api_server.go:146] Checking apiserver status ...
I1018 19:43:29.489062 871455 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.minikube.
I1018 19:43:29.489117 871455 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1018 19:43:29.543867 871455 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32775 SSHKeyPath:/home/tornike/.minikube/machines/minikube/id_rsa Username:docker}
I1018 19:43:29.722647 871455 ssh_runner.go:148] Run: sudo egrep ^[0-9]+:freezer: /proc/1822/cgroup
I1018 19:43:29.733928 871455 api_server.go:162] apiserver freezer: "5:freezer:/docker/a08bdb0974359ff27d38c10eafd8d022299439d00c5edb7d53c54927a3b34d4d/kubepods/burstable/podf7c3d51df5e2ce4e433b64661ac4503c/06c869c124955e32103c4ed54619c63e81b398c196a6b76a8043299ac76bfad6"
I1018 19:43:29.734028 871455 ssh_runner.go:148] Run: sudo cat /sys/fs/cgroup/freezer/docker/a08bdb0974359ff27d38c10eafd8d022299439d00c5edb7d53c54927a3b34d4d/kubepods/burstable/podf7c3d51df5e2ce4e433b64661ac4503c/06c869c124955e32103c4ed54619c63e81b398c196a6b76a8043299ac76bfad6/freezer.state
I1018 19:43:29.743339 871455 api_server.go:184] freezer state: "THAWED"
I1018 19:43:29.743374 871455 api_server.go:221] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I1018 19:43:29.750126 871455 api_server.go:241] https://192.168.49.2:8443/healthz returned 200:
ok
W1018 19:43:29.750540 871455 out.go:145] πŸ€” Verifying dashboard health ...
πŸ€” Verifying dashboard health ...
I1018 19:43:29.769928 871455 service.go:213] Found service: &Service{ObjectMeta:{kubernetes-dashboard kubernetes-dashboard /api/v1/namespaces/kubernetes-dashboard/services/kubernetes-dashboard c2f614ba-13b0-4e0b-ab97-3b303180e013 26037 0 2020-10-18 19:31:24 +0400 +04 map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2020-10-18 19:31:24 +0400 +04 FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 107 117 98 101 99 116 108 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 108 97 115 116 45 97 112 112 108 105 101 100 45 99 111 110 102 105 103 117 114 97 116 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 97 100 100 111 110 109 97 110 97 103 101 114 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 111 100 101 34 58 123 125 44 34 102 58 107 56 115 45 97 112 112 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 105 110 105 107 117 98 101 45 97 100 100 111 110 115 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 111 114 116 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 112 111 114 116 92 34 58 56 48 44 92 34 112 114 111 116 111 99 111 108 92 34 58 92 34 84 67 80 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 112 111 114 116 34 58 123 125 44 34 102 58 112 114 111 116 111 99 111 108 34 58 123 125 44 34 102 58 116 97 114 103 101 116 80 111 114 116 34 58 123 125 125 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 46 34 58 123 125 44 34 102 58 107 56 115 45 97 112 112 34 58 123 125 125 44 34 102 58 115 101 115 115 105 111 110 65 102 102 105 110 105 116 121 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125],}}]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,},},Selector:map[string]string{k8s-app: 
kubernetes-dashboard,},ClusterIP:10.99.187.24,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamily:nil,TopologyKeys:[],},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},},}
W1018 19:43:29.770251 871455 out.go:145] πŸš€ Launching proxy ...
πŸš€ Launching proxy ...
I1018 19:43:29.770378 871455 dashboard.go:146] Executing: /snap/bin/kubectl [/snap/bin/kubectl --context minikube proxy --port=0]
I1018 19:43:29.770664 871455 dashboard.go:151] Waiting for kubectl to output host:port ...
I1018 19:43:29.932132 871455 out.go:109]

W1018 19:43:29.932920 871455 out.go:145] ❌ Exiting due to HOST_KUBECTL_PROXY: readByteWithTimeout: EOF
❌ Exiting due to HOST_KUBECTL_PROXY: readByteWithTimeout: EOF
W1018 19:43:29.933088 871455 out.go:145]

W1018 19:43:29.933252 871455 out.go:145] 😿 If the above advice does not help, please let us know:
😿 If the above advice does not help, please let us know:
W1018 19:43:29.933426 871455 out.go:145] πŸ‘‰ https://github.com/kubernetes/minikube/issues/new/choose
πŸ‘‰ https://github.com/kubernetes/minikube/issues/new/choose
I1018 19:43:29.944875 871455 out.go:109]

@tokomeno
Author

tokomeno commented Oct 18, 2020

My bad, sorry guys - it looks like I was using Kubernetes with microk8s. I removed it and it works fine.
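For anyone hitting the same conflict, a sketch of the removal step, assuming microk8s was installed via snap (the usual route on Ubuntu; the thread does not say how it was installed):

```shell
# Remove the conflicting microk8s (assumption: it was a snap install), then
# rebuild the minikube cluster from scratch. The snap command needs root and
# a live system, so it is left commented:
#   sudo snap remove microk8s
echo "minikube delete && minikube start"
```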

@kdraper69

My bad sorry guys, looks like I was using Kubernetes with microk8s, I have removed it and it works fine.

How did you remove microk8s?

@abdelouahabb

My bad sorry guys, looks like I was using Kubernetes with microk8s, I have removed it and it works fine.

Same here - the issue was with k3s. To remove it, just run /usr/local/bin/k3s-uninstall.sh. After I removed it, minikube dashboard worked!

@lmx1989219

My bad sorry guys, looks like I was using Kubernetes with microk8s, I have removed it and it works fine.

same here, the issue was with k3s , to remove it just make /usr/local/bin/k3s-uninstall.sh, after i removed it, minikube dashboard worked !

Yes, I needed the same. Thanks!

@khaninejad

For Windows users: change the virtualization driver. For me this worked:
minikube start --vm-driver=docker

@ciprian-radu

Worked for me after I ran:

minikube addons enable metrics-server

10 participants