
dashboard: Add Node condition check (DiskPressure and pod status checks) before opening #5815

Open
GarretSidzaka opened this issue Nov 2, 2019 · 50 comments
Labels
  • co/dashboard: dashboard related issues
  • co/none-driver
  • help wanted: Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
  • kind/feature: Categorizes issue or PR as related to a new feature.
  • lifecycle/frozen: Indicates that an issue or PR should not be auto-closed due to staleness.
  • priority/important-longterm: Important over the long term, but may not be staffed and/or may need multiple releases to complete.

Comments

@GarretSidzaka

GarretSidzaka commented Nov 2, 2019

When I run it, it always hangs forever and sometimes throws a 503.

garretsidzaka@$$$$$:~$ sudo minikube dashboard

Verifying dashboard health ...
Launching proxy ...
Verifying proxy health ...

^C
garretsidzaka@$$$$$:~$

garretsidzaka@$$$$$$:/usr/bin$ minikube version
minikube version: v1.4.0
commit: 7969c25
Ubuntu VM 18.04
vm-driver=none

@GarretSidzaka
Author

ping?

@medyagh
Member

medyagh commented Nov 4, 2019

@GarretSidzaka
Thank you for sharing your experience! If you don't mind, could you please provide:

  • The exact command-lines used, so that we may replicate the issue
  • The full output of the command that failed
  • The full output of the "minikube logs" command
  • Which operating system version was used

This will help us isolate the problem further. Thank you!

Additionally, I wonder: do you use a corp network, a VPN, or a proxy?

@medyagh medyagh added triage/needs-information Indicates an issue needs more information in order to work on it. area/networking networking issues labels Nov 4, 2019
@GarretSidzaka
Author

Bullet one:
sudo minikube start
sudo minikube dashboard

Bullet two:
garretsidzaka@$$$$$:~$ sudo minikube dashboard

Verifying dashboard health ...
Launching proxy ...
Verifying proxy health ...

^C
garretsidzaka@$$$$$:~$

Bullet three:
garretsidzaka@cloudstack:/$ sudo minikube logs
*
X Error getting config: stat /home/garretsidzaka/.minikube/profiles/minikube/config.json: no such file or directory
*

Bullet Four:
Ubuntu 18.04.3

The answer to your last question is no.

@medyagh
Member

medyagh commented Nov 6, 2019

@GarretSidzaka would you please share the full output of
sudo minikube start --alsologtostderr -v=8

@medyagh
Member

medyagh commented Nov 6, 2019

@GarretSidzaka I am curious: when you said it hangs forever, did you mean the terminal is stuck at this?

medya@~/workspace/minikube (clean_cron) $ minikube dashboard
🔌  Enabling dashboard ...
🤔  Verifying dashboard health ...
🚀  Launching proxy ...
🤔  Verifying proxy health ...
🎉  Opening http://127.0.0.1:65504/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ in your default browser...

If that is the case, then that is the expected behaviour: minikube will stay there and run a web server so you can access the dashboard in your browser.

@medyagh medyagh changed the title Mini kube won't open dashboard in Docker mode minikube won't open dashboard in Docker mode Nov 6, 2019
@GarretSidzaka
Author

@GarretSidzaka would you please share the full output of
sudo minikube start --alsologtostderr -v=8

Welcome to Ubuntu 18.04.3 LTS (GNU/Linux 4.15.0-66-generic x86_64)

System information as of Wed Nov 6 00:33:39 UTC 2019

System load: 0.0 Processes: 207
Usage of /: 15.0% of 72.83GB Users logged in: 0
Memory usage: 48% IP address for eth0: 66.55.156.94
Swap usage: 0% IP address for docker0: 172.17.0.1

6 packages can be updated.
0 updates are security updates.

Last login: Mon Nov 4 23:29:43 2019 from 71.209.166.96
garretsidzaka@cloudstack:~$ sudo minikube start --alsologtostderr -v=8
[sudo] password for garretsidzaka:
I1106 00:34:13.321761 183139 notify.go:125] Checking for updates...
I1106 00:34:13.502222 183139 lock.go:41] attempting to write to file "/home/garretsidzaka/.minikube/last_update_check" with filemode -rw-r--r--

I1106 00:34:13.503829 183139 start.go:236] hostinfo: {"hostname":"cloudstack","uptime":931694,"bootTime":1572068759,"procs":266,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"18.04","kernelVersion":"4.15.0-66-generic","virtualizationSystem":"","virtualizationRole":"","hostid":"73fa3e87-061f-4111-9c29-9a2074fc4bec"}
I1106 00:34:13.504201 183139 start.go:246] virtualization:
! minikube v1.4.0 on Ubuntu 18.04
I1106 00:34:13.504790 183139 profile.go:66] Saving config to /home/garretsidzaka/.minikube/profiles/minikube/config.json ...
I1106 00:34:13.504883 183139 cache_images.go:295] CacheImage: gcr.io/k8s-minikube/storage-provisioner:v1.8.1 -> /home/garretsidzaka/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1
I1106 00:34:13.504927 183139 cache_images.go:301] /home/garretsidzaka/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1 exists
I1106 00:34:13.504952 183139 cache_images.go:297] CacheImage: gcr.io/k8s-minikube/storage-provisioner:v1.8.1 -> /home/garretsidzaka/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1 completed in 74.699µs
I1106 00:34:13.505046 183139 cache_images.go:82] CacheImage gcr.io/k8s-minikube/storage-provisioner:v1.8.1 -> /home/garretsidzaka/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1 succeeded
I1106 00:34:13.505085 183139 cache_images.go:295] CacheImage: k8s.gcr.io/kube-proxy:v1.16.0 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.16.0
I1106 00:34:13.505125 183139 cache_images.go:301] /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.16.0 exists
I1106 00:34:13.505139 183139 cache_images.go:297] CacheImage: k8s.gcr.io/kube-proxy:v1.16.0 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.16.0 completed in 66.699µs
I1106 00:34:13.505171 183139 cache_images.go:82] CacheImage k8s.gcr.io/kube-proxy:v1.16.0 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.16.0 succeeded
I1106 00:34:13.505210 183139 cache_images.go:295] CacheImage: k8s.gcr.io/kube-scheduler:v1.16.0 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.16.0
I1106 00:34:13.505287 183139 cache_images.go:301] /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.16.0 exists
I1106 00:34:13.505300 183139 cache_images.go:297] CacheImage: k8s.gcr.io/kube-scheduler:v1.16.0 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.16.0 completed in 94.698µs
I1106 00:34:13.505363 183139 cache_images.go:82] CacheImage k8s.gcr.io/kube-scheduler:v1.16.0 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.16.0 succeeded
I1106 00:34:13.505403 183139 cache_images.go:295] CacheImage: k8s.gcr.io/kube-controller-manager:v1.16.0 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.16.0
I1106 00:34:13.505445 183139 cache_images.go:301] /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.16.0 exists
I1106 00:34:13.505460 183139 cache_images.go:297] CacheImage: k8s.gcr.io/kube-controller-manager:v1.16.0 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.16.0 completed in 60.799µs
I1106 00:34:13.505511 183139 cache_images.go:82] CacheImage k8s.gcr.io/kube-controller-manager:v1.16.0 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.16.0 succeeded
I1106 00:34:13.505550 183139 cache_images.go:295] CacheImage: k8s.gcr.io/kube-apiserver:v1.16.0 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.16.0
I1106 00:34:13.505593 183139 cache_images.go:301] /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.16.0 exists
I1106 00:34:13.505724 183139 cache_images.go:297] CacheImage: k8s.gcr.io/kube-apiserver:v1.16.0 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.16.0 completed in 178.797µs
I1106 00:34:13.505797 183139 cache_images.go:82] CacheImage k8s.gcr.io/kube-apiserver:v1.16.0 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.16.0 succeeded
I1106 00:34:13.505751 183139 cache_images.go:295] CacheImage: k8s.gcr.io/etcd:3.3.15-0 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/etcd_3.3.15-0
I1106 00:34:13.505905 183139 cache_images.go:301] /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/etcd_3.3.15-0 exists
I1106 00:34:13.505950 183139 cache_images.go:297] CacheImage: k8s.gcr.io/etcd:3.3.15-0 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/etcd_3.3.15-0 completed in 204.497µs
I1106 00:34:13.505991 183139 cache_images.go:82] CacheImage k8s.gcr.io/etcd:3.3.15-0 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/etcd_3.3.15-0 succeeded
I1106 00:34:13.505796 183139 cache_images.go:295] CacheImage: kubernetesui/dashboard:v2.0.0-beta4 -> /home/garretsidzaka/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-beta4
I1106 00:34:13.506091 183139 cache_images.go:301] /home/garretsidzaka/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-beta4 exists
I1106 00:34:13.506134 183139 cache_images.go:297] CacheImage: kubernetesui/dashboard:v2.0.0-beta4 -> /home/garretsidzaka/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-beta4 completed in 354.195µs
I1106 00:34:13.506175 183139 cache_images.go:82] CacheImage kubernetesui/dashboard:v2.0.0-beta4 -> /home/garretsidzaka/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-beta4 succeeded
I1106 00:34:13.505646 183139 cache_images.go:295] CacheImage: k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.13 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/k8s-dns-kube-dns-amd64_1.14.13
I1106 00:34:13.506260 183139 cache_images.go:301] /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/k8s-dns-kube-dns-amd64_1.14.13 exists
I1106 00:34:13.506301 183139 cache_images.go:297] CacheImage: k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.13 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/k8s-dns-kube-dns-amd64_1.14.13 completed in 654.591µs
I1106 00:34:13.505666 183139 cache_images.go:295] CacheImage: k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.13 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64_1.14.13
I1106 00:34:13.506373 183139 cache_images.go:301] /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64_1.14.13 exists
I1106 00:34:13.506390 183139 cache_images.go:297] CacheImage: k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.13 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64_1.14.13 completed in 729.289µs
I1106 00:34:13.506406 183139 cache_images.go:82] CacheImage k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.13 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64_1.14.13 succeeded
I1106 00:34:13.505715 183139 cache_images.go:295] CacheImage: k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.13 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/k8s-dns-sidecar-amd64_1.14.13
I1106 00:34:13.505775 183139 cache_images.go:295] CacheImage: k8s.gcr.io/kube-addon-manager:v9.0.2 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-addon-manager_v9.0.2
I1106 00:34:13.506562 183139 cache_images.go:301] /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-addon-manager_v9.0.2 exists
I1106 00:34:13.506581 183139 cache_images.go:297] CacheImage: k8s.gcr.io/kube-addon-manager:v9.0.2 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-addon-manager_v9.0.2 completed in 813.489µs
I1106 00:34:13.506597 183139 cache_images.go:82] CacheImage k8s.gcr.io/kube-addon-manager:v9.0.2 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-addon-manager_v9.0.2 succeeded
I1106 00:34:13.505785 183139 cache_images.go:295] CacheImage: k8s.gcr.io/coredns:1.6.2 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/coredns_1.6.2
I1106 00:34:13.506642 183139 cache_images.go:301] /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/coredns_1.6.2 exists
I1106 00:34:13.506656 183139 cache_images.go:297] CacheImage: k8s.gcr.io/coredns:1.6.2 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/coredns_1.6.2 completed in 872.988µs
I1106 00:34:13.506662 183139 cache_images.go:82] CacheImage k8s.gcr.io/coredns:1.6.2 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/coredns_1.6.2 succeeded
I1106 00:34:13.505622 183139 cache_images.go:295] CacheImage: k8s.gcr.io/pause:3.1 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/pause_3.1
I1106 00:34:13.506421 183139 cache_images.go:82] CacheImage k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.13 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/k8s-dns-kube-dns-amd64_1.14.13 succeeded
I1106 00:34:13.506690 183139 cache_images.go:301] /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/pause_3.1 exists
I1106 00:34:13.506719 183139 cache_images.go:297] CacheImage: k8s.gcr.io/pause:3.1 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/pause_3.1 completed in 1.094384ms
I1106 00:34:13.506734 183139 cache_images.go:82] CacheImage k8s.gcr.io/pause:3.1 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/pause_3.1 succeeded
I1106 00:34:13.506449 183139 cache_images.go:301] /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/k8s-dns-sidecar-amd64_1.14.13 exists
I1106 00:34:13.506764 183139 cache_images.go:297] CacheImage: k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.13 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/k8s-dns-sidecar-amd64_1.14.13 completed in 1.054385ms
I1106 00:34:13.506773 183139 cache_images.go:82] CacheImage k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.13 -> /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/k8s-dns-sidecar-amd64_1.14.13 succeeded
I1106 00:34:13.506785 183139 cache_images.go:89] Successfully cached all images.
I1106 00:34:13.518345 183139 lock.go:41] attempting to write to file "/home/garretsidzaka/.minikube/profiles/minikube/config.json" with filemode -rw-------
I1106 00:34:13.518649 183139 cluster.go:93] Machine does not exist... provisioning new machine
I1106 00:34:13.518671 183139 cluster.go:94] Provisioning machine with config: {KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.4.0.iso Memory:2000 CPUs:2 DiskSize:20000 VMDriver:none ContainerRuntime:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false Downloader:{} DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true}

  • Running on localhost (CPUs=4, Memory=8027MB, Disk=74576MB) ...
  • OS release is Ubuntu 18.04.3 LTS
    I1106 00:34:13.532688 183139 profile.go:66] Saving config to /home/garretsidzaka/.minikube/profiles/minikube/config.json ...
    I1106 00:34:13.532744 183139 lock.go:41] attempting to write to file "/home/garretsidzaka/.minikube/profiles/minikube/config.json.tmp098087967" with filemode -rw-------
    I1106 00:34:13.532996 183139 exec_runner.go:40] Run: sudo systemctl start docker
    I1106 00:34:13.548182 183139 exec_runner.go:51] Run with output: docker version --format '{{.Server.Version}}'
  • Preparing Kubernetes v1.16.0 on Docker 18.09.7 ...
    I1106 00:34:14.479840 183139 settings.go:124] acquiring lock: {Name:kubeconfigUpdate Clock:{} Delay:10s Timeout:0s Cancel:}
    I1106 00:34:14.480318 183139 settings.go:132] Updating kubeconfig: /home/garretsidzaka/.kube/config
    I1106 00:34:14.493753 183139 lock.go:41] attempting to write to file "/home/garretsidzaka/.kube/config" with filemode -rw-------
    • kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
      I1106 00:34:14.494246 183139 cache_images.go:95] LoadImages start: [k8s.gcr.io/kube-proxy:v1.16.0 k8s.gcr.io/kube-scheduler:v1.16.0 k8s.gcr.io/kube-controller-manager:v1.16.0 k8s.gcr.io/kube-apiserver:v1.16.0 k8s.gcr.io/pause:3.1 k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.13 k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.13 k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.13 k8s.gcr.io/etcd:3.3.15-0 k8s.gcr.io/coredns:1.6.2 kubernetesui/dashboard:v2.0.0-beta4 k8s.gcr.io/kube-addon-manager:v9.0.2 gcr.io/k8s-minikube/storage-provisioner:v1.8.1]
      I1106 00:34:14.494385 183139 cache_images.go:210] Loading image from cache: /home/garretsidzaka/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1
      I1106 00:34:14.494399 183139 cache_images.go:210] Loading image from cache: /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.16.0
      I1106 00:34:14.494418 183139 cache_images.go:210] Loading image from cache: /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.16.0
      I1106 00:34:14.494455 183139 cache_images.go:210] Loading image from cache: /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/etcd_3.3.15-0
      I1106 00:34:14.494468 183139 vm_assets.go:82] NewFileAsset: /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.16.0 -> /var/lib/minikube/images/kube-scheduler_v1.16.0
      I1106 00:34:14.494485 183139 cache_images.go:210] Loading image from cache: /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/coredns_1.6.2
      I1106 00:34:14.494498 183139 cache_images.go:210] Loading image from cache: /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-addon-manager_v9.0.2
      I1106 00:34:14.494510 183139 vm_assets.go:82] NewFileAsset: /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/coredns_1.6.2 -> /var/lib/minikube/images/coredns_1.6.2
      I1106 00:34:14.494515 183139 vm_assets.go:82] NewFileAsset: /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-addon-manager_v9.0.2 -> /var/lib/minikube/images/kube-addon-manager_v9.0.2
      I1106 00:34:14.494534 183139 vm_assets.go:82] NewFileAsset: /home/garretsidzaka/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1 -> /var/lib/minikube/images/storage-provisioner_v1.8.1
      I1106 00:34:14.494473 183139 cache_images.go:210] Loading image from cache: /home/garretsidzaka/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-beta4
      I1106 00:34:14.494462 183139 cache_images.go:210] Loading image from cache: /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64_1.14.13
      I1106 00:34:14.514574 183139 vm_assets.go:82] NewFileAsset: /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64_1.14.13 -> /var/lib/minikube/images/k8s-dns-dnsmasq-nanny-amd64_1.14.13
      I1106 00:34:14.494487 183139 cache_images.go:210] Loading image from cache: /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/k8s-dns-sidecar-amd64_1.14.13
      I1106 00:34:14.514717 183139 vm_assets.go:82] NewFileAsset: /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/k8s-dns-sidecar-amd64_1.14.13 -> /var/lib/minikube/images/k8s-dns-sidecar-amd64_1.14.13
      I1106 00:34:14.494401 183139 cache_images.go:210] Loading image from cache: /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.16.0
      I1106 00:34:14.514816 183139 vm_assets.go:82] NewFileAsset: /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.16.0 -> /var/lib/minikube/images/kube-apiserver_v1.16.0
      I1106 00:34:14.494433 183139 vm_assets.go:82] NewFileAsset: /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.16.0 -> /var/lib/minikube/images/kube-proxy_v1.16.0
      I1106 00:34:14.494389 183139 cache_images.go:210] Loading image from cache: /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.16.0
      I1106 00:34:14.515105 183139 vm_assets.go:82] NewFileAsset: /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.16.0 -> /var/lib/minikube/images/kube-controller-manager_v1.16.0
      I1106 00:34:14.494388 183139 cache_images.go:210] Loading image from cache: /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/k8s-dns-kube-dns-amd64_1.14.13
      I1106 00:34:14.515270 183139 vm_assets.go:82] NewFileAsset: /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/k8s-dns-kube-dns-amd64_1.14.13 -> /var/lib/minikube/images/k8s-dns-kube-dns-amd64_1.14.13
      I1106 00:34:14.494447 183139 cache_images.go:210] Loading image from cache: /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/pause_3.1
      I1106 00:34:14.515408 183139 vm_assets.go:82] NewFileAsset: /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/pause_3.1 -> /var/lib/minikube/images/pause_3.1
      I1106 00:34:14.494475 183139 vm_assets.go:82] NewFileAsset: /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/etcd_3.3.15-0 -> /var/lib/minikube/images/etcd_3.3.15-0
      I1106 00:34:14.514519 183139 vm_assets.go:82] NewFileAsset: /home/garretsidzaka/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-beta4 -> /var/lib/minikube/images/dashboard_v2.0.0-beta4
      I1106 00:34:14.753211 183139 docker.go:97] Loading image: /var/lib/minikube/images/pause_3.1
      I1106 00:34:14.753251 183139 exec_runner.go:40] Run: docker load -i /var/lib/minikube/images/pause_3.1
      I1106 00:34:16.322700 183139 cache_images.go:236] Successfully loaded image /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/pause_3.1 from cache
      I1106 00:34:16.322739 183139 docker.go:97] Loading image: /var/lib/minikube/images/coredns_1.6.2
      I1106 00:34:16.322786 183139 exec_runner.go:40] Run: docker load -i /var/lib/minikube/images/coredns_1.6.2
      I1106 00:34:16.585207 183139 cache_images.go:236] Successfully loaded image /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/coredns_1.6.2 from cache
      I1106 00:34:16.585250 183139 docker.go:97] Loading image: /var/lib/minikube/images/k8s-dns-kube-dns-amd64_1.14.13
      I1106 00:34:16.585276 183139 exec_runner.go:40] Run: docker load -i /var/lib/minikube/images/k8s-dns-kube-dns-amd64_1.14.13
      I1106 00:34:16.806829 183139 cache_images.go:236] Successfully loaded image /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/k8s-dns-kube-dns-amd64_1.14.13 from cache
      I1106 00:34:16.806896 183139 docker.go:97] Loading image: /var/lib/minikube/images/k8s-dns-dnsmasq-nanny-amd64_1.14.13
      I1106 00:34:16.806915 183139 exec_runner.go:40] Run: docker load -i /var/lib/minikube/images/k8s-dns-dnsmasq-nanny-amd64_1.14.13
      I1106 00:34:17.004999 183139 cache_images.go:236] Successfully loaded image /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64_1.14.13 from cache
      I1106 00:34:17.005077 183139 docker.go:97] Loading image: /var/lib/minikube/images/k8s-dns-sidecar-amd64_1.14.13
      I1106 00:34:17.005095 183139 exec_runner.go:40] Run: docker load -i /var/lib/minikube/images/k8s-dns-sidecar-amd64_1.14.13
      I1106 00:34:17.171361 183139 cache_images.go:236] Successfully loaded image /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/k8s-dns-sidecar-amd64_1.14.13 from cache
      I1106 00:34:17.171401 183139 docker.go:97] Loading image: /var/lib/minikube/images/kube-addon-manager_v9.0.2
      I1106 00:34:17.171425 183139 exec_runner.go:40] Run: docker load -i /var/lib/minikube/images/kube-addon-manager_v9.0.2
      I1106 00:34:17.373411 183139 cache_images.go:236] Successfully loaded image /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-addon-manager_v9.0.2 from cache
      I1106 00:34:17.373454 183139 docker.go:97] Loading image: /var/lib/minikube/images/kube-scheduler_v1.16.0
      I1106 00:34:17.373519 183139 exec_runner.go:40] Run: docker load -i /var/lib/minikube/images/kube-scheduler_v1.16.0
      I1106 00:34:17.538912 183139 cache_images.go:236] Successfully loaded image /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.16.0 from cache
      I1106 00:34:17.538959 183139 docker.go:97] Loading image: /var/lib/minikube/images/storage-provisioner_v1.8.1
      I1106 00:34:17.538972 183139 exec_runner.go:40] Run: docker load -i /var/lib/minikube/images/storage-provisioner_v1.8.1
      I1106 00:34:17.727837 183139 cache_images.go:236] Successfully loaded image /home/garretsidzaka/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1 from cache
      I1106 00:34:17.727878 183139 docker.go:97] Loading image: /var/lib/minikube/images/kube-proxy_v1.16.0
      I1106 00:34:17.727917 183139 exec_runner.go:40] Run: docker load -i /var/lib/minikube/images/kube-proxy_v1.16.0
      I1106 00:34:17.928240 183139 cache_images.go:236] Successfully loaded image /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.16.0 from cache
      I1106 00:34:17.928280 183139 docker.go:97] Loading image: /var/lib/minikube/images/dashboard_v2.0.0-beta4
      I1106 00:34:17.928296 183139 exec_runner.go:40] Run: docker load -i /var/lib/minikube/images/dashboard_v2.0.0-beta4
      I1106 00:34:18.135040 183139 cache_images.go:236] Successfully loaded image /home/garretsidzaka/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-beta4 from cache
      I1106 00:34:18.135083 183139 docker.go:97] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.16.0
      I1106 00:34:18.135098 183139 exec_runner.go:40] Run: docker load -i /var/lib/minikube/images/kube-controller-manager_v1.16.0
      I1106 00:34:18.364544 183139 cache_images.go:236] Successfully loaded image /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.16.0 from cache
      I1106 00:34:18.364583 183139 docker.go:97] Loading image: /var/lib/minikube/images/etcd_3.3.15-0
      I1106 00:34:18.364604 183139 exec_runner.go:40] Run: docker load -i /var/lib/minikube/images/etcd_3.3.15-0
      I1106 00:34:18.962958 183139 cache_images.go:236] Successfully loaded image /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/etcd_3.3.15-0 from cache
      I1106 00:34:18.963000 183139 docker.go:97] Loading image: /var/lib/minikube/images/kube-apiserver_v1.16.0
      I1106 00:34:18.963008 183139 exec_runner.go:40] Run: docker load -i /var/lib/minikube/images/kube-apiserver_v1.16.0
      I1106 00:34:19.223237 183139 cache_images.go:236] Successfully loaded image /home/garretsidzaka/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.16.0 from cache
      I1106 00:34:19.223316 183139 cache_images.go:119] Successfully loaded all cached images.
      I1106 00:34:19.223360 183139 cache_images.go:120] LoadImages end
      I1106 00:34:19.223592 183139 kubeadm.go:610] kubelet v1.16.0 config:
      [Unit]
      Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=cgroupfs --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --container-runtime=docker --fail-swap-on=false --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests --resolv-conf=/run/systemd/resolve/resolv.conf

[Install]
I1106 00:34:19.223625 183139 exec_runner.go:40] Run: pgrep kubelet && sudo systemctl stop kubelet
W1106 00:34:19.257686 183139 kubeadm.go:615] unable to stop kubelet: running command: pgrep kubelet && sudo systemctl stop kubelet: exit status 1
I1106 00:34:19.258065 183139 cache_binaries.go:63] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubeadm
I1106 00:34:19.258089 183139 vm_assets.go:82] NewFileAsset: /home/garretsidzaka/.minikube/cache/v1.16.0/kubeadm -> /var/lib/minikube/binaries/v1.16.0/kubeadm
I1106 00:34:19.258075 183139 cache_binaries.go:63] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubelet
I1106 00:34:19.258211 183139 vm_assets.go:82] NewFileAsset: /home/garretsidzaka/.minikube/cache/v1.16.0/kubelet -> /var/lib/minikube/binaries/v1.16.0/kubelet
I1106 00:34:19.862786 183139 exec_runner.go:40] Run: sudo systemctl daemon-reload && sudo systemctl start kubelet
I1106 00:34:20.089159 183139 certs.go:71] acquiring lock: {Name:setupCerts Clock:{} Delay:15s Timeout:0s Cancel:}
I1106 00:34:20.089310 183139 certs.go:79] Setting up /home/garretsidzaka/.minikube for IP: 66.55.156.94
I1106 00:34:20.089412 183139 crypto.go:69] Generating cert /home/garretsidzaka/.minikube/client.crt with IP's: []
I1106 00:34:20.099224 183139 crypto.go:157] Writing cert to /home/garretsidzaka/.minikube/client.crt ...
I1106 00:34:20.099264 183139 lock.go:41] attempting to write to file "/home/garretsidzaka/.minikube/client.crt" with filemode -rw-r--r--
I1106 00:34:20.099549 183139 crypto.go:165] Writing key to /home/garretsidzaka/.minikube/client.key ...
I1106 00:34:20.099565 183139 lock.go:41] attempting to write to file "/home/garretsidzaka/.minikube/client.key" with filemode -rw-------
I1106 00:34:20.099683 183139 crypto.go:69] Generating cert /home/garretsidzaka/.minikube/apiserver.crt with IP's: [66.55.156.94 10.96.0.1 10.0.0.1]
I1106 00:34:20.107698 183139 crypto.go:157] Writing cert to /home/garretsidzaka/.minikube/apiserver.crt ...
I1106 00:34:20.107730 183139 lock.go:41] attempting to write to file "/home/garretsidzaka/.minikube/apiserver.crt" with filemode -rw-r--r--
I1106 00:34:20.107995 183139 crypto.go:165] Writing key to /home/garretsidzaka/.minikube/apiserver.key ...
I1106 00:34:20.108028 183139 lock.go:41] attempting to write to file "/home/garretsidzaka/.minikube/apiserver.key" with filemode -rw-------
I1106 00:34:20.108174 183139 crypto.go:69] Generating cert /home/garretsidzaka/.minikube/proxy-client.crt with IP's: []
I1106 00:34:20.113461 183139 crypto.go:157] Writing cert to /home/garretsidzaka/.minikube/proxy-client.crt ...
I1106 00:34:20.114504 183139 lock.go:41] attempting to write to file "/home/garretsidzaka/.minikube/proxy-client.crt" with filemode -rw-r--r--
I1106 00:34:20.115320 183139 crypto.go:165] Writing key to /home/garretsidzaka/.minikube/proxy-client.key ...
I1106 00:34:20.115358 183139 lock.go:41] attempting to write to file "/home/garretsidzaka/.minikube/proxy-client.key" with filemode -rw-------
I1106 00:34:20.115979 183139 vm_assets.go:82] NewFileAsset: /home/garretsidzaka/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I1106 00:34:20.116130 183139 vm_assets.go:82] NewFileAsset: /home/garretsidzaka/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I1106 00:34:20.116470 183139 vm_assets.go:82] NewFileAsset: /home/garretsidzaka/.minikube/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I1106 00:34:20.116543 183139 vm_assets.go:82] NewFileAsset: /home/garretsidzaka/.minikube/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I1106 00:34:20.116596 183139 vm_assets.go:82] NewFileAsset: /home/garretsidzaka/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I1106 00:34:20.116649 183139 vm_assets.go:82] NewFileAsset: /home/garretsidzaka/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I1106 00:34:20.116676 183139 vm_assets.go:82] NewFileAsset: /home/garretsidzaka/.minikube/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I1106 00:34:20.116701 183139 vm_assets.go:82] NewFileAsset: /home/garretsidzaka/.minikube/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I1106 00:34:20.125897 183139 vm_assets.go:82] NewFileAsset: /home/garretsidzaka/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I1106 00:34:20.131832 183139 exec_runner.go:40] Run: which openssl
I1106 00:34:20.133923 183139 exec_runner.go:40] Run: sudo test -f '/etc/ssl/certs/minikubeCA.pem'
I1106 00:34:20.142240 183139 exec_runner.go:51] Run with output: openssl x509 -hash -noout -in '/usr/share/ca-certificates/minikubeCA.pem'
I1106 00:34:20.167041 183139 exec_runner.go:40] Run: sudo test -f '/etc/ssl/certs/b5213941.0'

  • Pulling images ...
    I1106 00:34:20.173568 183139 exec_runner.go:40] Run: sudo env PATH=/var/lib/minikube/binaries/v1.16.0:$PATH kubeadm config images pull --config /var/tmp/minikube/kubeadm.yaml
  • Launching Kubernetes ...
    I1106 00:34:25.036103 183139 kubeadm.go:232] StartCluster: {KubernetesVersion:v1.16.0 NodeIP:66.55.156.94 NodePort:8443 NodeName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:true EnableDefaultCNI:false}
    I1106 00:34:25.036210 183139 exec_runner.go:51] Run with output: sudo env PATH=/var/lib/minikube/binaries/v1.16.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap
    I1106 00:34:48.752060 183139 kubeadm.go:273] Configuring cluster permissions ...
    I1106 00:34:48.755735 183139 kapi.go:58] client config for minikube: &rest.Config{Host:"https://66.55.156.94:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/garretsidzaka/.minikube/client.crt", KeyFile:"/home/garretsidzaka/.minikube/client.key", CAFile:"/home/garretsidzaka/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)}, UserAgent:"", Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x159bb40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil)}
    I1106 00:34:48.806431 183139 util.go:67] duration metric: took 48.179105ms to wait for elevateKubeSystemPrivileges.
    I1106 00:34:48.806536 183139 exec_runner.go:51] Run with output: cat /proc/$(pgrep kube-apiserver)/oom_adj
    I1106 00:34:48.822242 183139 kubeadm.go:299] apiserver oom_adj: -16
    I1106 00:34:48.822378 183139 kubeadm.go:234] StartCluster complete in 23.786197888s
  • Configuring local host environment ...

! The 'none' driver provides limited isolation and may reduce system security and reliability.
! For more information, see:

! kubectl and minikube configuration will be stored in /home/garretsidzaka
! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
*

  • sudo mv /home/garretsidzaka/.kube /home/garretsidzaka/.minikube $HOME
  • sudo chown -R $USER $HOME/.kube $HOME/.minikube
  • This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true

  • Waiting for: apiserverI1106 00:34:48.822799 183139 kubeadm.go:454] Waiting for apiserver process ...
    I1106 00:34:48.822811 183139 exec_runner.go:40] Run: sudo pgrep kube-apiserver
    I1106 00:34:48.836640 183139 kubeadm.go:469] Waiting for apiserver to port healthy status ...
    I1106 00:34:48.843098 183139 kubeadm.go:156] https://66.55.156.94:8443/healthz response: &{Status:200 OK StatusCode:200 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Cache-Control:[no-cache, private] Content-Length:[2] Content-Type:[text/plain; charset=utf-8] Date:[Wed, 06 Nov 2019 00:34:48 GMT] X-Content-Type-Options:[nosniff]] Body:0xc00023abc0 ContentLength:2 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003db500 TLS:0xc0000da9a0}
    I1106 00:34:48.843155 183139 kubeadm.go:472] apiserver status: Running, err:
    I1106 00:34:48.843196 183139 kubeadm.go:451] duration metric: took 20.397306ms to wait for apiserver status ...
    I1106 00:34:48.843910 183139 kapi.go:58] client config for minikube: &rest.Config{Host:"https://66.55.156.94:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/garretsidzaka/.minikube/client.crt", KeyFile:"/home/garretsidzaka/.minikube/client.key", CAFile:"/home/garretsidzaka/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)}, UserAgent:"", Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x159bb40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil)}
    proxyI1106 00:34:48.855030 183139 kapi.go:74] Waiting for pod with label "kube-system" in ns "k8s-app=kube-proxy" ...
    I1106 00:34:48.888057 183139 kapi.go:85] Found 0 Pods for label selector k8s-app=kube-proxy
    I1106 00:34:54.893546 183139 kapi.go:85] Found 1 Pods for label selector k8s-app=kube-proxy
    I1106 00:34:54.893719 183139 kapi.go:95] waiting for pod "k8s-app=kube-proxy", current state: Pending: []
    I1106 00:34:55.427204 183139 kapi.go:95] waiting for pod "k8s-app=kube-proxy", current state: Pending: []
    I1106 00:34:55.893749 183139 kapi.go:95] waiting for pod "k8s-app=kube-proxy", current state: Pending: []
    I1106 00:34:56.391819 183139 kapi.go:95] waiting for pod "k8s-app=kube-proxy", current state: Pending: []
    I1106 00:34:56.948005 183139 kapi.go:95] waiting for pod "k8s-app=kube-proxy", current state: Pending: []
    I1106 00:34:57.435927 183139 kapi.go:95] waiting for pod "k8s-app=kube-proxy", current state: Pending: []
    I1106 00:34:57.891800 183139 kapi.go:95] waiting for pod "k8s-app=kube-proxy", current state: Pending: []
    I1106 00:34:58.396266 183139 kapi.go:107] duration metric: took 9.540858918s to wait for k8s-app=kube-proxy ...
    etcdI1106 00:34:58.396347 183139 kapi.go:74] Waiting for pod with label "kube-system" in ns "component=etcd" ...
    I1106 00:34:58.413918 183139 kapi.go:85] Found 0 Pods for label selector component=etcd
    I1106 00:36:02.416249 183139 kapi.go:85] Found 1 Pods for label selector component=etcd
    I1106 00:36:02.416274 183139 kapi.go:95] waiting for pod "component=etcd", current state: Pending: []
    I1106 00:36:02.918124 183139 kapi.go:95] waiting for pod "component=etcd", current state: Pending: []
    I1106 00:36:03.416366 183139 kapi.go:95] waiting for pod "component=etcd", current state: Pending: []
    I1106 00:36:03.917137 183139 kapi.go:95] waiting for pod "component=etcd", current state: Pending: []
    I1106 00:36:04.417727 183139 kapi.go:95] waiting for pod "component=etcd", current state: Pending: []
    I1106 00:36:04.917438 183139 kapi.go:95] waiting for pod "component=etcd", current state: Pending: []
    I1106 00:36:05.416793 183139 kapi.go:95] waiting for pod "component=etcd", current state: Pending: []
    I1106 00:36:05.916535 183139 kapi.go:95] waiting for pod "component=etcd", current state: Pending: []
    I1106 00:36:06.416567 183139 kapi.go:95] waiting for pod "component=etcd", current state: Pending: []
    I1106 00:36:06.916571 183139 kapi.go:95] waiting for pod "component=etcd", current state: Pending: []
    I1106 00:36:07.418177 183139 kapi.go:95] waiting for pod "component=etcd", current state: Pending: []
    I1106 00:36:07.919132 183139 kapi.go:95] waiting for pod "component=etcd", current state: Pending: []
    I1106 00:36:08.416704 183139 kapi.go:95] waiting for pod "component=etcd", current state: Pending: []
    I1106 00:36:08.916335 183139 kapi.go:95] waiting for pod "component=etcd", current state: Pending: []
    I1106 00:36:09.416439 183139 kapi.go:95] waiting for pod "component=etcd", current state: Pending: []
    I1106 00:36:09.917663 183139 kapi.go:95] waiting for pod "component=etcd", current state: Pending: []
    I1106 00:36:10.417931 183139 kapi.go:107] duration metric: took 1m12.021584246s to wait for component=etcd ...
    schedulerI1106 00:36:10.418024 183139 kapi.go:74] Waiting for pod with label "kube-system" in ns "component=kube-scheduler" ...
    I1106 00:36:10.424973 183139 kapi.go:85] Found 1 Pods for label selector component=kube-scheduler
    I1106 00:36:10.425006 183139 kapi.go:107] duration metric: took 6.9829ms to wait for component=kube-scheduler ...
    controllerI1106 00:36:10.425077 183139 kapi.go:74] Waiting for pod with label "kube-system" in ns "component=kube-controller-manager" ...
    I1106 00:36:10.430103 183139 kapi.go:85] Found 1 Pods for label selector component=kube-controller-manager
    I1106 00:36:10.430133 183139 kapi.go:107] duration metric: took 5.055327ms to wait for component=kube-controller-manager ...
    dnsI1106 00:36:10.430164 183139 kapi.go:74] Waiting for pod with label "kube-system" in ns "k8s-app=kube-dns" ...
    I1106 00:36:10.433305 183139 kapi.go:85] Found 2 Pods for label selector k8s-app=kube-dns
    I1106 00:36:10.433334 183139 kapi.go:107] duration metric: took 3.168055ms to wait for k8s-app=kube-dns ...

  • Done! kubectl is now configured to use "minikube"
    garretsidzaka@cloudstack:~$

@GarretSidzaka
Author

GarretSidzaka commented Nov 6, 2019

@GarretSidzaka I am curious: when you said it hangs forever, did you mean the terminal is stuck at this?

medya@~/workspace/minikube (clean_cron) $ minikube dashboard
🔌  Enabling dashboard ...
🤔  Verifying dashboard health ...
🚀  Launching proxy ...
🤔  Verifying proxy health ...
🎉  Opening http://127.0.0.1:65504/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ in your default browser...

If that is the case, then that is the expected behaviour: minikube will stay there and run a web server so you can access the dashboard in your browser.

I cannot load a local browser on a production headless VM; there is no X server.
Hello, the terminal is only getting this far now.

garretsidzaka@cloudstack:~$ sudo minikube dashboard

@medyagh
Member

medyagh commented Nov 6, 2019

I cannot load a local browser on a production headless VM; there is no X server.
Hello, the terminal is only getting this far now.

The point of the dashboard is actually a UI experience for Kubernetes; if you don't have a browser, you might not actually need the dashboard.

Do you mind sharing the output of curl for the dashboard URL, run in a separate terminal?

By the way, do you happen to use a VPN or proxies?

@tstromberg tstromberg changed the title minikube won't open dashboard in Docker mode sudo minikube dashboard: hangs at "Verifying proxy health..." Nov 6, 2019
@tstromberg tstromberg added the kind/bug Categorizes issue or PR as related to a bug. label Nov 6, 2019
@tstromberg
Contributor

The output of this command would be helpful for us to help with debugging:

sudo minikube dashboard --alsologtostderr -v=1

It's worth noting that with sudo, the dashboard command will only output a URL rather than attempting to open a browser.

@tstromberg tstromberg added co/dashboard dashboard related issues and removed area/networking networking issues labels Nov 6, 2019
@GarretSidzaka
Author

I cannot load a local browser on a production headless VM; there is no X server.
Hello, the terminal is only getting this far now.

The point of the dashboard is actually a UI experience for Kubernetes; if you don't have a browser, you might not actually need the dashboard.

Do you mind sharing the output of curl for the dashboard URL, run in a separate terminal?

By the way, do you happen to use a VPN or proxies?

No unusual network. This is a front-end bridged VM, production style. This network port has a static IP that is IANA-assigned, not NAT. There is no proxy or VPN. And yes, it's very nice to have this kind of research VM.

@GarretSidzaka
Author

The output of this command would be helpful for us to help with debugging:

sudo minikube dashboard --alsologtostderr -v=1

It's worth noting that with sudo, the dashboard command will only output a URL rather than attempting to open a browser.

Yes, and when I click such a link, after obviously replacing 127.0.0.1 with the actual IP address, it gives a 503.

Attached is the log you requested:
teraterm.log

@GarretSidzaka
Author

Ping :3

@priyawadhwa

Hey @GarretSidzaka -- looks like there are a couple of related issues (#4352 and #4749).

I see you already commented on #4352, and I'm guessing none of those solutions fixed your issue?

#4749 suggests increasing memory/cpu allocation to minikube. Perhaps you could give that a try? Please let us know the results of any of these experiments!

@sebinsua

The output of this command would be helpful for us to help with debugging:

sudo minikube dashboard --alsologtostderr -v=1

It's worth noting that with sudo, the dashboard command will only output a URL rather than attempting to open a browser.

@tstromberg

sudo minikube dashboard --alsologtostderr -v=1

I1113 18:16:48.105064   30332 none.go:257] checking for running kubelet ...
I1113 18:16:48.105097   30332 exec_runner.go:42] (ExecRunner) Run:  systemctl is-active --quiet service kubelet
🤔  Verifying dashboard health ...
I1113 18:16:48.136357   30332 service.go:236] Found service: &Service{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:kubernetes-dashboard,GenerateName:,Namespace:kubernetes-dashboard,SelfLink:/api/v1/namespaces/kubernetes-dashboard/services/kubernetes-dashboard,UID:50ffb80e-1e61-41a3-8ee6-15fec08c9d0c,ResourceVersion:384,Generation:0,CreationTimestamp:2019-11-13 18:08:36 +0000 GMT,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,k8s-app: kubernetes-dashboard,kubernetes.io/minikube-addons: dashboard,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ServiceSpec{Ports:[{ TCP 80 {0 9090 } 0}],Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.109.8.235,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[],},},}
🚀  Launching proxy ...
I1113 18:16:48.136574   30332 dashboard.go:167] Executing: /usr/bin/kubectl [/usr/bin/kubectl --context minikube proxy --port=0]
I1113 18:16:48.136855   30332 dashboard.go:172] Waiting for kubectl to output host:port ...
I1113 18:16:48.294744   30332 dashboard.go:190] proxy stdout: Starting to serve on 127.0.0.1:43299
🤔  Verifying proxy health ...
I1113 18:16:48.304599   30332 dashboard.go:227] http://127.0.0.1:43299/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Wed, 13 Nov 2019 18:16:48 GMT]] Body:0xc00035d840 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004aa800 TLS:<nil>}
I1113 18:16:49.412722   30332 dashboard.go:227] http://127.0.0.1:43299/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Wed, 13 Nov 2019 18:16:49 GMT]] Body:0xc000297100 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00012ed00 TLS:<nil>}
I1113 18:16:51.579128   30332 dashboard.go:227] http://127.0.0.1:43299/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Wed, 13 Nov 2019 18:16:51 GMT]] Body:0xc0002971c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00012ee00 TLS:<nil>}
I1113 18:16:54.225172   30332 dashboard.go:227] http://127.0.0.1:43299/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Wed, 13 Nov 2019 18:16:54 GMT]] Body:0xc0003cf940 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000486300 TLS:<nil>}
I1113 18:16:57.412075   30332 dashboard.go:227] http://127.0.0.1:43299/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Wed, 13 Nov 2019 18:16:57 GMT]] Body:0xc00035d9c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004aa900 TLS:<nil>}
I1113 18:17:02.101728   30332 dashboard.go:227] http://127.0.0.1:43299/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Wed, 13 Nov 2019 18:17:02 GMT]] Body:0xc0003cfa80 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004aaa00 TLS:<nil>}
I1113 18:17:11.123045   30332 dashboard.go:227] http://127.0.0.1:43299/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:

And if I hit the URL which is giving 503, I get:

{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
    
  },
  "status": "Failure",
  "message": "no endpoints available for service \"http:kubernetes-dashboard:\"",
  "reason": "ServiceUnavailable",
  "code": 503
}

@tstromberg
Contributor

Thanks for the update @sebinsua

Based on what you've shared, I now suspect that part of the issue may be that we are attempting to check the URL before checking if the pod is actually running. Why the dashboard service isn't healthy, though, is something we still need to investigate. Do you mind helping us root cause this?

Once you see the dashboard command hanging at "Verifying proxy health ...", can you get the output of the following commands and share it with us?

  • kubectl get po -n kubernetes-dashboard --show-labels
  • kubectl describe po -l k8s-app=kubernetes-dashboard -n kubernetes-dashboard
  • kubectl logs -l k8s-app=kubernetes-dashboard -n kubernetes-dashboard

Depending on what you share, I believe that part of the solution may be to insert a new health check that blocks until the pod is in 'Running' state, by calling client.CoreV1().Pods(ns).List() and checking that pod.Status.Phase == core.PodRunning before checking for the port here:
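For illustration only, here is a minimal, untested sketch of what such a check could look like with a recent client-go. The package and function names, the variable names, the timeout, and the polling interval are assumptions for this example and not minikube's actual code; the label selector matches the one used in the kubectl commands above.

package dashboard

import (
	"context"
	"fmt"
	"time"

	core "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForDashboardRunning blocks until a pod matching the dashboard label
// reports Phase == Running, or the timeout elapses. Only after this would
// the caller start probing the proxy URL for a 200 response.
func waitForDashboardRunning(client kubernetes.Interface, ns string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := client.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{
			LabelSelector: "k8s-app=kubernetes-dashboard",
		})
		if err == nil {
			for _, p := range pods.Items {
				if p.Status.Phase == core.PodRunning {
					return nil
				}
			}
		}
		// Poll until the pod has been scheduled and started.
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("kubernetes-dashboard pod in %q did not reach Running within %v", ns, timeout)
}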


@GarretSidzaka
Author

GarretSidzaka commented Nov 14, 2019

output.txt

These results were taken after the bug was replicated, in a separate SSH terminal. At the same time, the proxy message was hanging in the other SSH window.

This eventually appeared in the main window:
X http://127.0.0.1:42337/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ is not accessible: Temporary Error: unexpected response code: 503
garretsidzaka@cloudstack:~$

@tstromberg
Contributor

tstromberg commented Nov 14, 2019 via email

@GarretSidzaka
Author

Interesting. Do you mind adding the output of 'kubectl describe node' as well? Thanks!

On Wed, Nov 13, 2019, 6:19 PM GarretSidzaka wrote: output.txt https://github.com/kubernetes/minikube/files/3844296/output.txt

sudo kubectl describe node
[sudo] password for garretsidzaka:
Name: minikube
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=minikube
kubernetes.io/os=linux
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 06 Nov 2019 00:34:43 +0000
Taints:
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message


MemoryPressure False Thu, 14 Nov 2019 03:36:59 +0000 Wed, 06 Nov 2019 00:34:43 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 14 Nov 2019 03:36:59 +0000 Wed, 06 Nov 2019 00:34:43 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 14 Nov 2019 03:36:59 +0000 Wed, 06 Nov 2019 00:34:43 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Thu, 14 Nov 2019 03:36:59 +0000 Wed, 06 Nov 2019 00:34:43 +0000 KubeletReady kubelet is posting ready status. AppArmor enabled
Addresses:
InternalIP: 66.55.156.94
Hostname: minikube
Capacity:
cpu: 4
ephemeral-storage: 76366628Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 8220652Ki
pods: 110
Allocatable:
cpu: 4
ephemeral-storage: 70379484249
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 8118252Ki
pods: 110
System Info:
Machine ID: b2bbbdb30f0c427595b8a91758ac298c
System UUID: 73FA3E87-061F-4111-9C29-9A2074FC4BEC
Boot ID: 369c5c7e-06d5-46e9-87f5-b597ebadce65
Kernel Version: 4.15.0-66-generic
OS Image: Ubuntu 18.04.3 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://18.9.7
Kubelet Version: v1.16.0
Kube-Proxy Version: v1.16.0
Non-terminated Pods: (11 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE


kube-system coredns-5644d7b6d9-hpm6t 100m (2%) 0 (0%) 70Mi (0%) 170Mi (2%) 8d
kube-system coredns-5644d7b6d9-m2rpm 100m (2%) 0 (0%) 70Mi (0%) 170Mi (2%) 8d
kube-system etcd-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8d
kube-system kube-addon-manager-minikube 5m (0%) 0 (0%) 50Mi (0%) 0 (0%) 8d
kube-system kube-apiserver-minikube 250m (6%) 0 (0%) 0 (0%) 0 (0%) 8d
kube-system kube-controller-manager-minikube 200m (5%) 0 (0%) 0 (0%) 0 (0%) 8d
kube-system kube-proxy-xcw6z 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8d
kube-system kube-scheduler-minikube 100m (2%) 0 (0%) 0 (0%) 0 (0%) 8d
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8d
kubernetes-dashboard dashboard-metrics-scraper-76585494d8-m2sn9 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8d
kubernetes-dashboard kubernetes-dashboard-57f4cb4545-vkwpj 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8d
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits


cpu 755m (18%) 0 (0%)
memory 190Mi (2%) 340Mi (4%)
ephemeral-storage 0 (0%) 0 (0%)
Events:
garretsidzaka@cloudstack:~$

@GarretSidzaka
Author

hi 😍

@cxl123156

Has this issue been solved? I got the same problem.

@GarretSidzaka
Copy link
Author

thoughts on this issue?

@GarretSidzaka
Copy link
Author

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 6, 2020
@fejta-bot
Copy link

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 4, 2021
@fejta-bot
Copy link

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Feb 3, 2021
@medyagh medyagh removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Feb 10, 2021
@fejta-bot
Copy link

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 11, 2021
@fejta-bot
Copy link

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jun 10, 2021
@sharifelgamal sharifelgamal added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. labels Jun 23, 2021
@sharifelgamal sharifelgamal changed the title dashboard: Add Node condition check (DiskPressure and pod status checks) before openning dashboard: Add Node condition check (DiskPressure and pod status checks) before opening Jun 23, 2021
@sharifelgamal
Copy link
Collaborator

This is still an issue we'd like to resolve. Help wanted!
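
As a manual version of the pre-check this issue asks for, the node's DiskPressure condition and the dashboard pods' status can be inspected with plain kubectl before running minikube dashboard. A sketch (it assumes the default kubernetes-dashboard namespace):

# DiskPressure should report False on a healthy node
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="DiskPressure")].status}{"\n"}{end}'

# the dashboard and metrics-scraper pods should be Running before "Verifying proxy health ..." can succeed
kubectl -n kubernetes-dashboard get pods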

@EricoCartmanez
Copy link

EricoCartmanez commented Jul 6, 2021

In my case, canceling (Ctrl+C) and executing the same command (minikube dashboard) in a new terminal worked.
(I know this is not a solution, but give it a try.)

@Igetin
Copy link

Igetin commented Jul 9, 2021

Had the same problem. Tried turning off my VPN, didn’t help. Tried minikube start and minikube stop, same problem. Tried restarting the whole machine; it didn’t seem to help at first, but after trying again about 5–10 minutes later, it worked and there were no longer 503 errors in the log.

@billsmithatg
Copy link

In my case, canceling (Ctrl+C) and executing the same command (minikube dashboard) in a new terminal worked.
(I know this is not a solution, but give it a try.)

That worked for me.

@jpshackelford
Copy link

I had this same experience on macOS Big Sur 11.5.2 with Docker Desktop 3.3.3. In the same terminal session I installed minikube using the default curl command copied from https://minikube.sigs.k8s.io/docs/start/:

$ curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-amd64
$ sudo install minikube-darwin-amd64 /usr/local/bin/minikube

I then ran:

$ minikube start

I then ran:

$ minikube dashboard

Then the process appeared to hang at

Verifying proxy health ...

Then ctrl-c, open a new terminal:

$ minikube dashboard

And browser window opens as expected with dashboard showing.
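
For what it's worth, minikube dashboard also accepts a --url flag that prints the proxy URL instead of opening a browser. It runs the same health checks, so it won't avoid the hang itself, but it makes the second-terminal workaround easier:

minikube dashboard --url
# once it gets past "Verifying proxy health ...", copy the printed URL into a browser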

@YunEr-Wang
Copy link

I ran into the same problem.

🔌  Enabling dashboard ...
    ▪ Using image kubernetesui/dashboard:v2.1.0
    ▪ Using image kubernetesui/metrics-scraper:v1.0.4
🤔  Verifying dashboard health ...
🚀  Launching proxy ...
🤔  Verifying proxy health ...

Then it gets stuck. Even if I stop it and run it again from another terminal, it doesn't work.

@nuzurie
Copy link

nuzurie commented Dec 26, 2021

Ping

@patcon
Copy link

patcon commented Jan 4, 2022

Nearly ditto what @jpshackelford said in #5815 (comment)

If I run eval $(minikube docker-env) in the new terminal window, it will also hang at Verifying proxy health ..., but it seems to succeed otherwise (see the sketch after this comment).

I installed v1.24.0 on Darwin 11.6 via homebrew on Docker 20.10.8, then

minikube start --kubernetes-version=v1.22.3
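
If the DOCKER_* variables exported by eval $(minikube docker-env) are what trips up the proxy check, one thing to try (just a sketch; --unset is a documented flag of minikube docker-env) is clearing them in that terminal before retrying:

# undo what `eval $(minikube docker-env)` exported, then retry
eval $(minikube docker-env --unset)
minikube dashboard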

@sglickman
Copy link

Also experiencing this. Running minikube dashboard results in the following output:

🔌  Enabling dashboard ...
    ▪ Using image kubernetesui/dashboard:v2.3.1
    ▪ Using image kubernetesui/metrics-scraper:v1.0.7
🤔  Verifying dashboard health ...
🚀  Launching proxy ...
🤔  Verifying proxy health ...

It hangs for a while, and then produces this final line of output:

❌  Exiting due to SVC_URL_TIMEOUT: http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ is not accessible: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL

If I run minikube dashboard again it launches almost immediately.

@VergeDX
Copy link

VergeDX commented Mar 4, 2022

I got Exiting due to SVC_URL_TIMEOUT: http://127.0.0.1:35673/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ is not accessible: Temporary Error: unexpected response code: 503 as the last line, and running minikube dashboard again still gets stuck there.

@sergeygalaxy
Copy link

sergeygalaxy commented Jul 2, 2022

Any solution please?

Update:

My workaround

  1. minikube ssh
  2. docker pull kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3

I have minikube version v1.26.0. If you need to figure out exactly which dashboard image tag you need, check minikube logs and find which image it struggles to pull. In my case the problem seems to have been a pull timeout at the kubelet level, which prevented the image pull from completing and caused the exception. Once the image was pulled into the local repository, the issue was resolved and minikube dashboard works now.
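
For anyone hitting the same ErrImagePull situation, a rough version of that workaround (the grep pattern and the image digest below are just taken from the reports in this thread; yours may differ, and this assumes the Docker runtime inside the node):

# find the exact image the kubelet fails to pull
minikube logs | grep -i "failed to pull image"

# pre-pull it inside the node so the dashboard pods can start
minikube ssh
docker pull kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3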

@git-zjx
Copy link

git-zjx commented Oct 14, 2022

Any solution please?

Oct 14 02:29:26 minikube kubelet[2553]: E1014 02:29:26.691624    2553 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"rpc error: code = Unknown desc = Error response from daemon: Get \\\"https://registry-1.docker.io/v2/\\\": dial tcp: lookup registry-1.docker.io on 10.222.76.40:53: read udp 192.168.49.2:57302->10.222.76.40:53: i/o timeout\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-57d8d5b8b8-plhm9" podUID=ad405577-b393-4963-a7fa-5c8c0c1c244b
Oct 14 02:29:36 minikube kubelet[2553]: E1014 02:29:36.713157    2553 remote_image.go:216] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://registry-1.docker.io/v2/\": dial tcp: lookup registry-1.docker.io on 10.222.76.40:53: read udp 192.168.49.2:55291->10.222.76.40:53: i/o timeout" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Oct 14 02:29:36 minikube kubelet[2553]: E1014 02:29:36.713227    2553 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://registry-1.docker.io/v2/\": dial tcp: lookup registry-1.docker.io on 10.222.76.40:53: read udp 192.168.49.2:55291->10.222.76.40:53: i/o timeout" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Oct 14 02:29:36 minikube kubelet[2553]: E1014 02:29:36.713373    2553 kuberuntime_manager.go:919] container &Container{Name:kubernetes-dashboard,Image:docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Command:[],Args:[--namespace=kubernetes-dashboard --enable-skip-login --disable-settings-authorizer],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:9090,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-volume,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-gkkzs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 9090 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:30,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:*2001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod kubernetes-dashboard-6f75b5c656-m8nxv_kubernetes-dashboard(76ff4f3a-5f3b-4bcd-8bda-7deecde3b7ed): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://registry-1.docker.io/v2/": dial tcp: lookup registry-1.docker.io on 10.222.76.40:53: read udp 192.168.49.2:55291->10.222.76.40:53: i/o timeout
Oct 14 02:29:36 minikube kubelet[2553]: E1014 02:29:36.713412    2553 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ErrImagePull: \"rpc error: code = Unknown desc = Error response from daemon: Get \\\"https://registry-1.docker.io/v2/\\\": dial tcp: lookup registry-1.docker.io on 10.222.76.40:53: read udp 192.168.49.2:55291->10.222.76.40:53: i/o timeout\"" pod="kubernetes-dashboard/kubernetes-dashboard-6f75b5c656-m8nxv" podUID=76ff4f3a-5f3b-4bcd-8bda-7deecde3b7ed
Oct 14 02:29:38 minikube kubelet[2553]: E1014 02:29:38.668006    2553 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-57d8d5b8b8-plhm9" podUID=ad405577-b393-4963-a7fa-5c8c0c1c244b

@git-zjx
Copy link

git-zjx commented Oct 14, 2022

(quoting the same kubelet ErrImagePull / DNS timeout log lines from my previous comment)

Share my solution:

minikube ssh
sudo vi /etc/resolv.conf
nameserver 10.222.76.40
nameserver 114.114.114.114  #add this
options ndots:0
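
After adding the extra nameserver, it is worth confirming from inside the node that the registry hostname now resolves (a quick check; it assumes nslookup or curl is available in the minikube node image):

minikube ssh
nslookup registry-1.docker.io
# or: curl -sI https://registry-1.docker.io/v2/   (any HTTP response, even 401, means DNS and connectivity are fine)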

@fufu930
Copy link

fufu930 commented Mar 2, 2023

Running this command in a PowerShell with elevated privileges worked.
CMD did not work!
