
kubeadm init error running command: Process exited with status 1 (make error debuggable!) #2493

Closed
cbenien opened this issue Jan 31, 2018 · 47 comments
Labels
co/kubeadm, co/virtualbox, kind/bug

Comments

@cbenien

cbenien commented Jan 31, 2018

Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT

Please provide the following details:

Environment: Windows 10 64bit v1709, VirtualBox 5.2.6

Minikube version (use minikube version): v0.25.0

  • OS (e.g. from /etc/os-release): Windows
  • VM Driver virtualbox
  • ISO version (e.g. cat ~/.minikube/machines/minikube/config.json | grep -i ISO or minikube ssh cat /etc/VERSION): v0.25.1
  • Install tools:
  • Others:

What happened:

I'm running the following command:
minikube start --cpus=2 --memory=4096 --kubernetes-version=v1.9.2 --bootstrapper=kubeadm

And get this error:

Starting local Kubernetes v1.9.2 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
E0131 15:08:47.591370    1704 start.go:276] Error starting cluster:  kubeadm init error running command: sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --skip-preflight-checks: Process exited with status 1

What you expected to happen:
A running minikube cluster

How to reproduce it (as minimally and precisely as possible):
minikube start --cpus=2 --memory=4096 --kubernetes-version=v1.9.2 --bootstrapper=kubeadm

Output of minikube logs (if applicable): logs are empty

Anything else we need to know:
This was already reported under #2018, but the issue was closed. The problem is still present though (at least on Windows, I did a quick check on Ubuntu 16.04 and it worked fine).
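
Since minikube logs comes back empty, one thing that might surface more detail (assuming this minikube build accepts the usual glog flags) is rerunning with verbose logging to stderr:

# rerun with verbose output so the failing kubeadm step is printed to the console
minikube start --cpus=2 --memory=4096 --kubernetes-version=v1.9.2 --bootstrapper=kubeadm --v=7 --logtostderr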

@F21

F21 commented Jan 31, 2018

Also failing on Windows 10 1709 64-bit and VirtualBox 5.2.6 here, but for a different reason:

$ minikube start --kubernetes-version=v1.9.2 --bootstrapper=kubeadm
Starting local Kubernetes v1.9.2 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
E0201 10:01:27.044252    9948 start.go:281] Error restarting cluster:  restarting kube-proxy: waiting for kube-proxy to be up for configmap update: timed out waiting for the condition
================================================================================
An error has occurred. Would you like to opt in to sending anonymized crash
information to minikube to help prevent future errors?
To opt out of these messages, run the command:
        minikube config set WantReportErrorPrompt false
================================================================================
Please enter your response [Y/n]:
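
If the apiserver is still reachable at this point, a quick sanity check is whether kube-proxy ever came up at all (a sketch; the label selector assumes kubeadm's default kube-proxy labels):

# list system pods and pull kube-proxy logs
kubectl -n kube-system get pods
kubectl -n kube-system logs -l k8s-app=kube-proxy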

@r2d4 r2d4 added drivers/virtualbox/windows co/kubeadm Issues relating to kubeadm kind/bug Categorizes issue or PR as related to a bug. labels Mar 5, 2018
@asbjornu

I just experienced this on macOS 10.12.6, minikube v0.25.0, kubectl version:

Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.4", GitCommit:"9befc2b8928a9426501d3bf62f72849d5cbcd5a3", GitTreeState:"clean", BuildDate:"2017-11-20T05:28:34Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"darwin/amd64"}

@asbjornu

Seems related to #2131.

@rohan47

rohan47 commented Mar 28, 2018

Faced this issue on Arch Linux
kernel: 4.15.9-1-ARCH
minikube version: v0.25.2

$ minikube start --memory=3000 -b kubeadm --kubernetes-version
Starting local Kubernetes v1.9.1 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
E0328 21:54:51.600728    7358 start.go:276] Error starting cluster:  kubeadm init error running command: sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --skip-preflight-checks: Process exited with status 1
================================================================================
An error has occurred. Would you like to opt in to sending anonymized crash
information to minikube to help prevent future errors?
To opt out of these messages, run the command:
        minikube config set WantReportErrorPrompt false
================================================================================
Please enter your response [Y/n]:

@pawankr

pawankr commented Apr 12, 2018

Same issue on Windows 10

Starting cluster components...
E0328 21:54:51.600728    7358 start.go:276] Error starting cluster:  kubeadm init error running command: sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --skip-preflight-checks: Process exited with status 1
================================================================================
An error has occurred. Would you like to opt in to sending anonymized crash
information to minikube to help prevent future errors?
To opt out of these messages, run the command:
        minikube config set WantReportErrorPrompt false
================================================================================
Please enter your response [Y/n]:

After rerunning, I now got this:

Starting cluster components...
E0412 18:04:43.797557   34012 start.go:281] Error restarting cluster:  restarting kube-proxy: waiting for kube-proxy to be up for configmap update: timed out waiting for the condition
================================================================================
An error has occurred. Would you like to opt in to sending anonymized crash
information to minikube to help prevent future errors?
To opt out of these messages, run the command:
        minikube config set WantReportErrorPrompt false
================================================================================
Please enter your response [Y/n]:
y

@asifmammadov

Hi. I am also having the same issue when trying to install an all-in-one-node Kubernetes cluster with minikube. It seems to be an issue when starting the cluster. Below are the details:
OS: Windows 10
VM Driver: Oracle Virtualbox
Minikube version: v0.26.0
Install tools:
Others:

Starting cluster components...
E0415 19:48:14.362282   13916 start.go:276] Error starting cluster:  kubeadm init error sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap  running command: : running command: sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap
 output: [init] Using Kubernetes version: v1.10.0
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
[certificates] Using the existing ca certificate and key.
[WARNING Swap]: running with swap on is not supported. Please disable swap
Flag --admission-control has been deprecated, Use --enable-admission-plugins or --disable-admission-plugins instead. Will be removed in a future version.
[certificates] Using the existing apiserver certificate and key.
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [minikube] and IPs [192.168.99.100]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/var/lib/localkube/certs/"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp 127.0.0.1:10255: getsockopt: connection refused.

Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
        - Either there is no internet connection, or imagePullPolicy is set to "Never",
          so the kubelet cannot pull or find the following control plane images:
                - k8s.gcr.io/kube-apiserver-amd64:v1.10.0
                - k8s.gcr.io/kube-controller-manager-amd64:v1.10.0
                - k8s.gcr.io/kube-scheduler-amd64:v1.10.0
                - k8s.gcr.io/etcd-amd64:3.1.12 (only if no external etcd endpoints are configured)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'
: running command: sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap
.: Process exited with status 1

I am not sure if it is related, but running the kubectl version command returns:

Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:55:54Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"windows/amd64"}
Unable to connect to the server: Service Unavailable
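
kubeadm's own hint above is probably the fastest path here; both commands can be run from the host through minikube ssh (assuming the VM is still up) and the output attached:

# inspect the kubelet service and its journal inside the VM
minikube ssh "systemctl status kubelet"
minikube ssh "sudo journalctl -xeu kubelet"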

@jurezz

jurezz commented Apr 16, 2018

Hi, getting exactly the same issue as described by @asifmammadov
Using the following configuration:
OS: Windows 10
VM Driver: Oracle Virtualbox v5.2.8
Minikube version: v0.26.0

@chbussler

chbussler commented Apr 16, 2018

Getting the same error on
OS: Windows 7
VM Driver: Oracle Virtualbox v5.2.8
Minikube version: v0.26.0

Just found a related issue that solved my problem: #2696

@landpy

landpy commented Apr 17, 2018

Getting the same issue too; the logs are as follows:
OS: OS X 10.12.3
Minikube: 0.26.0
VM: VirtualBox

minikube start --registry-mirror=https://registry.docker-cn.com
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
E0417 18:23:34.065553 12866 start.go:281] Error restarting cluster: restarting kube-proxy: waiting for kube-proxy to be up for configmap update: timed out waiting for the condition
E0417 18:23:40.334590 12866 util.go:151] Error uploading error message: : Post https://clouderrorreporting.googleapis.com/v1beta1/projects/k8s-minikube/events:report?key=AIzaSyACUwzG0dEPcl-eOgpDKnyKoUFgHdfoFuA: read tcp 192.168.3.2:55296->216.58.200.234:443: read: connection reset by peer

@Jimmer-Ball

Me too, same kubelet-unhealthy issue:

kubeadm: v1.10.0
minikube: v0.26.0
VM: VirtualBox 5.2.6
O/S: Windows 10 Pro 1709 m16299.371

[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
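
The check the installer runs is exactly the curl shown in the log, so it can be reproduced by hand inside the VM to see whether the kubelet ever starts listening:

# probe the kubelet read-only healthz port from inside the VM
minikube ssh "curl -sSL http://localhost:10255/healthz"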

@3miliomc

I faced the same issue with minikube on my Windows 10 laptop. Running:

minikube delete

then

minikube start

solved my issue

@Jimmer-Ball

Reverting to version 0.25.2 from 0.26 is a workaround for me on Windows.
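
For anyone who wants to pin the older release, something like this should work (the asset name follows minikube's usual release naming; verify it against the v0.25.2 release page):

# download the v0.25.2 Windows binary and put it on PATH in place of the v0.26 one
curl -Lo minikube.exe https://github.com/kubernetes/minikube/releases/download/v0.25.2/minikube-windows-amd64.exe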

@jurezz

jurezz commented Apr 22, 2018

Yes, it works when downgrading minikube to 0.25.2; hope the issues will be fixed in 0.26 soon.

@asifmammadov

I tried minikube delete and start and had the same problem. However, downgrading to version 0.25.2 works, thanks!

@linsheng9731

@asifmammadov Thanks, it works for me!

@songpr

songpr commented Apr 25, 2018

Same issue
kubeadm: v1.10.0
minikube: v0.26.1
VM: hyperv
O/S: Windows 10 Pro

Workaround: using minikube 0.25.2 works!

@luizanao

luizanao commented May 2, 2018

Got same issue on:
minikube version: v0.26.1
Arch linux - 4.15.14-1-ARCH

You should be explicit about your --vm-driver; in my case:

 minikube start --vm-driver=virtualbox

works fine!
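
To avoid passing the flag on every start, the driver can also be persisted in minikube's config (assuming this release supports the vm-driver config key):

# make virtualbox the default driver for future minikube start runs
minikube config set vm-driver virtualbox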

@lnfnunes

lnfnunes commented Jun 7, 2018

I followed the approach suggested by @3miliomc and it worked! 👍

  1. minikube delete
  2. minikube start

O/S: Windows 10 Pro
VM: hyper-v
Minikube: v0.27.0
Kubeadm: v1.10.0
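
If delete/start alone doesn't do it, a fuller reset (as others in this thread have tried) also wipes minikube's cached state; note that this removes all profiles and cached ISOs:

# full reset: delete the VM, wipe cached state, then recreate the cluster
minikube delete
rm -rf ~/.minikube
minikube start --bootstrapper=kubeadm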

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 5, 2018
@tstromberg tstromberg changed the title kubeadm bootstrapper fails on Windows / VirtualBox kubeadm init error running command: Process exited with status 1 (make error debuggable!) Sep 19, 2018
@tstromberg
Contributor

This seems to be a general case of kubeadm failures, for which we don't yet provide any additional helpful error messages. If someone sees this again, please attach the output from "minikube ssh" running the following:

sudo journalctl -u kubelet
sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --skip-preflight-checks

as well as:

cat /var/lib/kubeadm.yaml

Thanks!
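
For reference, all three can be captured from the host non-interactively and saved for attaching here, e.g.:

# run the debug commands via minikube ssh and save the output
minikube ssh "sudo journalctl -u kubelet" > kubelet.log
minikube ssh "sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --skip-preflight-checks" > kubeadm-init.log 2>&1
minikube ssh "cat /var/lib/kubeadm.yaml" > kubeadm.yaml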

@tstromberg tstromberg removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 19, 2018
@ahopkins

@gbraad

> this might have been a restart of an existing instance, but at the time when moving from localkube to kubeadm

True. I have tried it many different ways: Start. Ctrl+C. Force shutdown with VirtualBox.

> if possible, delete the instance you used and try a clean deployment.

Yes ... I have done this. Unfortunately, that is how I got into this mess. It had been working, but for some reason after about an hour it would no longer respond to requests, so I would have to restart. I thought minikube delete and starting fresh would fix it ... but it just made it so minikube start does not work at all.

It does work with --bootstrapper=localkube. BUT ... besides being less than ideal since localkube will be deprecated, this is not workable because kubectl then hangs and does not respond.

I even went so far as to remove the minikube binary and the ~/.minikube directory and install from scratch. Same problem.

@tstromberg

The command was simply minikube start. I have VirtualBox 5.1.24 running on Ubuntu 17.10. minikube version: v0.28.2

╰─$ minikube start
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
E0921 12:32:29.702897    5220 start.go:305] Error restarting cluster:  restarting kube-proxy: waiting for kube-proxy to be up for configmap update: timed out waiting for the condition
================================================================================
An error has occurred. Would you like to opt in to sending anonymized crash
information to minikube to help prevent future errors?
To opt out of these messages, run the command:
	minikube config set WantReportErrorPrompt false
================================================================================
Please enter your response [Y/n]: 
y

At the Starting cluster components... line ... it hangs for about 10 minutes before the error message pops up.

╭─adam@thebrewery ~  
╰─$ minikube logs
-- Logs begin at Fri 2018-09-21 09:22:09 UTC, end at Fri 2018-09-21 09:33:06 UTC. --
Sep 21 09:22:24 minikube systemd[1]: Started kubelet: The Kubernetes Node Agent.
Sep 21 09:22:24 minikube kubelet[2723]: Flag --authorization-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 21 09:22:24 minikube kubelet[2723]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 21 09:22:24 minikube kubelet[2723]: Flag --cluster-dns has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 21 09:22:24 minikube kubelet[2723]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 21 09:22:24 minikube kubelet[2723]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 21 09:22:24 minikube kubelet[2723]: Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 21 09:22:24 minikube kubelet[2723]: Flag --allow-privileged has been deprecated, will be removed in a future version
Sep 21 09:22:24 minikube kubelet[2723]: Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 21 09:22:24 minikube kubelet[2723]: Flag --cadvisor-port has been deprecated, The default will change to 0 (disabled) in 1.12, and the cadvisor port will be removed entirely in 1.13
Sep 21 09:22:24 minikube kubelet[2723]: I0921 09:22:24.687893    2723 feature_gate.go:226] feature gates: &{{} map[]}
Sep 21 09:22:24 minikube kubelet[2723]: F0921 09:22:24.688324    2723 server.go:218] unable to load client CA file /var/lib/localkube/certs/ca.crt: open /var/lib/localkube/certs/ca.crt: no such file or directory
Sep 21 09:22:24 minikube systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
Sep 21 09:22:24 minikube systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 21 09:22:34 minikube systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
Sep 21 09:22:34 minikube systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 21 09:22:34 minikube systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Sep 21 09:22:34 minikube systemd[1]: Started kubelet: The Kubernetes Node Agent.
Sep 21 09:22:34 minikube kubelet[2799]: Flag --authorization-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 21 09:22:34 minikube kubelet[2799]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 21 09:22:34 minikube kubelet[2799]: Flag --cluster-dns has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 21 09:22:34 minikube kubelet[2799]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 21 09:22:34 minikube kubelet[2799]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 21 09:22:34 minikube kubelet[2799]: Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 21 09:22:34 minikube kubelet[2799]: Flag --allow-privileged has been deprecated, will be removed in a future version
Sep 21 09:22:34 minikube kubelet[2799]: Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 21 09:22:34 minikube kubelet[2799]: Flag --cadvisor-port has been deprecated, The default will change to 0 (disabled) in 1.12, and the cadvisor port will be removed entirely in 1.13
Sep 21 09:22:34 minikube kubelet[2799]: I0921 09:22:34.798901    2799 feature_gate.go:226] feature gates: &{{} map[]}
Sep 21 09:22:34 minikube kubelet[2799]: W0921 09:22:34.806425    2799 hostport_manager.go:68] The binary conntrack is not installed, this can cause failures in network connection cleanup.
Sep 21 09:22:34 minikube kubelet[2799]: I0921 09:22:34.806452    2799 server.go:376] Version: v1.10.0
Sep 21 09:22:34 minikube kubelet[2799]: I0921 09:22:34.806473    2799 feature_gate.go:226] feature gates: &{{} map[]}
Sep 21 09:22:34 minikube kubelet[2799]: I0921 09:22:34.806526    2799 plugins.go:89] No cloud provider specified.
Sep 21 09:22:34 minikube kubelet[2799]: I0921 09:22:34.826238    2799 server.go:613] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
Sep 21 09:22:34 minikube kubelet[2799]: I0921 09:22:34.826456    2799 container_manager_linux.go:242] container manager verified user specified cgroup-root exists: /
Sep 21 09:22:34 minikube kubelet[2799]: I0921 09:22:34.826471    2799 container_manager_linux.go:247] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>}]} ExperimentalQOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true}
Sep 21 09:22:34 minikube kubelet[2799]: I0921 09:22:34.826556    2799 container_manager_linux.go:266] Creating device plugin manager: true
Sep 21 09:22:34 minikube kubelet[2799]: I0921 09:22:34.826586    2799 state_mem.go:36] [cpumanager] initializing new in-memory state store
Sep 21 09:22:34 minikube kubelet[2799]: I0921 09:22:34.826782    2799 state_mem.go:87] [cpumanager] updated default cpuset: ""
Sep 21 09:22:34 minikube kubelet[2799]: I0921 09:22:34.826792    2799 state_mem.go:95] [cpumanager] updated cpuset assignments: "map[]"
Sep 21 09:22:34 minikube kubelet[2799]: I0921 09:22:34.826850    2799 kubelet.go:272] Adding pod path: /etc/kubernetes/manifests
Sep 21 09:22:34 minikube kubelet[2799]: I0921 09:22:34.826863    2799 kubelet.go:297] Watching apiserver
Sep 21 09:22:34 minikube kubelet[2799]: E0921 09:22:34.833885    2799 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dminikube&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: getsockopt: connection refused
Sep 21 09:22:34 minikube kubelet[2799]: E0921 09:22:34.834022    2799 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: getsockopt: connection refused
Sep 21 09:22:34 minikube kubelet[2799]: E0921 09:22:34.834133    2799 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: getsockopt: connection refused
Sep 21 09:22:34 minikube kubelet[2799]: W0921 09:22:34.840723    2799 kubelet_network.go:139] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
Sep 21 09:22:34 minikube kubelet[2799]: I0921 09:22:34.840763    2799 kubelet.go:556] Hairpin mode set to "hairpin-veth"
Sep 21 09:22:34 minikube kubelet[2799]: I0921 09:22:34.845323    2799 client.go:75] Connecting to docker on unix:///var/run/docker.sock
Sep 21 09:22:34 minikube kubelet[2799]: I0921 09:22:34.845353    2799 client.go:104] Start docker client with request timeout=2m0s
Sep 21 09:22:34 minikube kubelet[2799]: W0921 09:22:34.852034    2799 hostport_manager.go:68] The binary conntrack is not installed, this can cause failures in network connection cleanup.
Sep 21 09:22:34 minikube kubelet[2799]: I0921 09:22:34.863751    2799 docker_service.go:244] Docker cri networking managed by kubernetes.io/no-op
Sep 21 09:22:34 minikube kubelet[2799]: I0921 09:22:34.870266    2799 docker_service.go:249] Docker Info: &{ID:QEDW:DQB5:MWND:CNHW:MFHK:GFVI:6RP7:ODNI:B2UL:WLHE:2ARX:GENQ Containers:13 ContainersRunning:12 ContainersPaused:0 ContainersStopped:1 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:22 OomKillDisable:true NGoroutines:35 SystemTime:2018-09-21T09:22:34.865689731Z LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.15.0 OperatingSystem:Buildroot 2018.05 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc4201fa230 NCPU:2 MemTotal:2087550976 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:minikube Labels:[provider=virtualbox] ExperimentalBuild:false ServerVersion:17.12.1-ce ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:docker-runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9b55aab90508bd389d7654c4baf173a981477d55 Expected:9b55aab90508bd389d7654c4baf173a981477d55} RuncCommit:{ID:9f9c96235cc97674e935002fc3d78361b696a69e Expected:9f9c96235cc97674e935002fc3d78361b696a69e} InitCommit:{ID:N/A Expected:} SecurityOptions:[name=seccomp,profile=default]}
Sep 21 09:22:34 minikube kubelet[2799]: I0921 09:22:34.871436    2799 docker_service.go:262] Setting cgroupDriver to cgroupfs
Sep 21 09:22:34 minikube kubelet[2799]: I0921 09:22:34.887952    2799 remote_runtime.go:43] Connecting to runtime service unix:///var/run/dockershim.sock
Sep 21 09:22:34 minikube kubelet[2799]: I0921 09:22:34.888764    2799 kuberuntime_manager.go:186] Container runtime docker initialized, version: 17.12.1-ce, apiVersion: 1.35.0
Sep 21 09:22:34 minikube kubelet[2799]: W0921 09:22:34.888881    2799 probe.go:215] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 21 09:22:34 minikube kubelet[2799]: I0921 09:22:34.889004    2799 csi_plugin.go:61] kubernetes.io/csi: plugin initializing...
Sep 21 09:22:34 minikube kubelet[2799]: E0921 09:22:34.890635    2799 kubelet.go:1277] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data for container /
Sep 21 09:22:34 minikube kubelet[2799]: I0921 09:22:34.890944    2799 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
Sep 21 09:22:34 minikube kubelet[2799]: I0921 09:22:34.890969    2799 status_manager.go:140] Starting to sync pod status with apiserver
Sep 21 09:22:34 minikube kubelet[2799]: I0921 09:22:34.890995    2799 kubelet.go:1777] Starting kubelet main sync loop.
Sep 21 09:22:34 minikube kubelet[2799]: I0921 09:22:34.891006    2799 kubelet.go:1794] skipping pod synchronization - [container runtime is down PLEG is not healthy: pleg was last seen active 2562047h47m16.854775807s ago; threshold is 3m0s]
Sep 21 09:22:34 minikube kubelet[2799]: I0921 09:22:34.891078    2799 server.go:129] Starting to listen on 0.0.0.0:10250
Sep 21 09:22:34 minikube kubelet[2799]: I0921 09:22:34.891408    2799 server.go:299] Adding debug handlers to kubelet server.
Sep 21 09:22:34 minikube kubelet[2799]: I0921 09:22:34.892066    2799 volume_manager.go:247] Starting Kubelet Volume Manager
Sep 21 09:22:34 minikube kubelet[2799]: I0921 09:22:34.898308    2799 server.go:944] Started kubelet
Sep 21 09:22:34 minikube kubelet[2799]: I0921 09:22:34.898379    2799 desired_state_of_world_populator.go:129] Desired state populator starts to run
Sep 21 09:22:34 minikube kubelet[2799]: E0921 09:22:34.898661    2799 event.go:209] Unable to write event: 'Post https://localhost:8443/api/v1/namespaces/default/events: dial tcp 127.0.0.1:8443: getsockopt: connection refused' (may retry after sleeping)
Sep 21 09:22:34 minikube kubelet[2799]: I0921 09:22:34.991296    2799 kubelet.go:1794] skipping pod synchronization - [container runtime is down]
Sep 21 09:22:34 minikube kubelet[2799]: I0921 09:22:34.992383    2799 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Sep 21 09:22:34 minikube kubelet[2799]: I0921 09:22:34.993801    2799 kubelet_node_status.go:82] Attempting to register node minikube
Sep 21 09:22:34 minikube kubelet[2799]: E0921 09:22:34.994041    2799 kubelet_node_status.go:106] Unable to register node "minikube" with API server: Post https://localhost:8443/api/v1/nodes: dial tcp 127.0.0.1:8443: getsockopt: connection refused
Sep 21 09:22:35 minikube kubelet[2799]: I0921 09:22:35.191574    2799 kubelet.go:1794] skipping pod synchronization - [container runtime is down]
Sep 21 09:22:35 minikube kubelet[2799]: I0921 09:22:35.194266    2799 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Sep 21 09:22:35 minikube kubelet[2799]: I0921 09:22:35.195920    2799 kubelet_node_status.go:82] Attempting to register node minikube
Sep 21 09:22:35 minikube kubelet[2799]: E0921 09:22:35.196170    2799 kubelet_node_status.go:106] Unable to register node "minikube" with API server: Post https://localhost:8443/api/v1/nodes: dial tcp 127.0.0.1:8443: getsockopt: connection refused
Sep 21 09:22:35 minikube kubelet[2799]: I0921 09:22:35.591911    2799 kubelet.go:1794] skipping pod synchronization - [container runtime is down]
Sep 21 09:22:35 minikube kubelet[2799]: I0921 09:22:35.596611    2799 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Sep 21 09:22:35 minikube kubelet[2799]: I0921 09:22:35.600030    2799 kubelet_node_status.go:82] Attempting to register node minikube
Sep 21 09:22:35 minikube kubelet[2799]: E0921 09:22:35.600461    2799 kubelet_node_status.go:106] Unable to register node "minikube" with API server: Post https://localhost:8443/api/v1/nodes: dial tcp 127.0.0.1:8443: getsockopt: connection refused
Sep 21 09:22:35 minikube kubelet[2799]: E0921 09:22:35.835255    2799 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dminikube&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: getsockopt: connection refused
Sep 21 09:22:35 minikube kubelet[2799]: E0921 09:22:35.840974    2799 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: getsockopt: connection refused
Sep 21 09:22:35 minikube kubelet[2799]: E0921 09:22:35.845077    2799 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: getsockopt: connection refused
Sep 21 09:22:36 minikube kubelet[2799]: I0921 09:22:36.392072    2799 kubelet.go:1794] skipping pod synchronization - [container runtime is down]
Sep 21 09:22:36 minikube kubelet[2799]: I0921 09:22:36.400801    2799 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Sep 21 09:22:36 minikube kubelet[2799]: I0921 09:22:36.402706    2799 kubelet_node_status.go:82] Attempting to register node minikube
Sep 21 09:22:36 minikube kubelet[2799]: E0921 09:22:36.402928    2799 kubelet_node_status.go:106] Unable to register node "minikube" with API server: Post https://localhost:8443/api/v1/nodes: dial tcp 127.0.0.1:8443: getsockopt: connection refused
Sep 21 09:22:36 minikube kubelet[2799]: E0921 09:22:36.835784    2799 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dminikube&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: getsockopt: connection refused
Sep 21 09:22:36 minikube kubelet[2799]: E0921 09:22:36.841510    2799 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: getsockopt: connection refused
Sep 21 09:22:36 minikube kubelet[2799]: E0921 09:22:36.845567    2799 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: getsockopt: connection refused
Sep 21 09:22:36 minikube kubelet[2799]: I0921 09:22:36.984038    2799 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Sep 21 09:22:36 minikube kubelet[2799]: I0921 09:22:36.985580    2799 cpu_manager.go:155] [cpumanager] starting with none policy
Sep 21 09:22:36 minikube kubelet[2799]: I0921 09:22:36.985688    2799 cpu_manager.go:156] [cpumanager] reconciling every 10s
Sep 21 09:22:36 minikube kubelet[2799]: I0921 09:22:36.985724    2799 policy_none.go:42] [cpumanager] none policy: Start
Sep 21 09:22:36 minikube kubelet[2799]: Starting Device Plugin manager
Sep 21 09:22:36 minikube kubelet[2799]: E0921 09:22:36.998801    2799 eviction_manager.go:246] eviction manager: failed to get get summary stats: failed to get node info: node "minikube" not found
Sep 21 09:22:37 minikube kubelet[2799]: E0921 09:22:37.837958    2799 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dminikube&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: getsockopt: connection refused
Sep 21 09:22:37 minikube kubelet[2799]: E0921 09:22:37.841986    2799 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: getsockopt: connection refused
Sep 21 09:22:37 minikube kubelet[2799]: E0921 09:22:37.845934    2799 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: getsockopt: connection refused
Sep 21 09:22:37 minikube kubelet[2799]: I0921 09:22:37.992656    2799 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Sep 21 09:22:38 minikube kubelet[2799]: I0921 09:22:38.003287    2799 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Sep 21 09:22:38 minikube kubelet[2799]: I0921 09:22:38.006348    2799 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Sep 21 09:22:38 minikube kubelet[2799]: I0921 09:22:38.007200    2799 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/b8f756ea85895bda697065768b18a93a-etcd-data") pod "etcd-minikube" (UID: "b8f756ea85895bda697065768b18a93a")
Sep 21 09:22:38 minikube kubelet[2799]: I0921 09:22:38.007310    2799 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/b8f756ea85895bda697065768b18a93a-etcd-certs") pod "etcd-minikube" (UID: "b8f756ea85895bda697065768b18a93a")
Sep 21 09:22:38 minikube kubelet[2799]: I0921 09:22:38.007466    2799 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Sep 21 09:22:38 minikube kubelet[2799]: I0921 09:22:38.010171    2799 kubelet_node_status.go:82] Attempting to register node minikube
Sep 21 09:22:38 minikube kubelet[2799]: E0921 09:22:38.010615    2799 kubelet_node_status.go:106] Unable to register node "minikube" with API server: Post https://localhost:8443/api/v1/nodes: dial tcp 127.0.0.1:8443: getsockopt: connection refused
Sep 21 09:22:38 minikube kubelet[2799]: W0921 09:22:38.017384    2799 status_manager.go:461] Failed to get status for pod "etcd-minikube_kube-system(b8f756ea85895bda697065768b18a93a)": Get https://localhost:8443/api/v1/namespaces/kube-system/pods/etcd-minikube: dial tcp 127.0.0.1:8443: getsockopt: connection refused
Sep 21 09:22:38 minikube kubelet[2799]: I0921 09:22:38.025061    2799 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Sep 21 09:22:38 minikube kubelet[2799]: I0921 09:22:38.025960    2799 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Sep 21 09:22:38 minikube kubelet[2799]: W0921 09:22:38.030681    2799 status_manager.go:461] Failed to get status for pod "kube-apiserver-minikube_kube-system(f310a64a00b8dfb921ddaee9f1867742)": Get https://localhost:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-minikube: dial tcp 127.0.0.1:8443: getsockopt: connection refused
Sep 21 09:22:38 minikube kubelet[2799]: I0921 09:22:38.031143    2799 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Sep 21 09:22:38 minikube kubelet[2799]: I0921 09:22:38.031843    2799 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Sep 21 09:22:38 minikube kubelet[2799]: I0921 09:22:38.036440    2799 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Sep 21 09:22:38 minikube kubelet[2799]: I0921 09:22:38.037259    2799 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Sep 21 09:22:38 minikube kubelet[2799]: W0921 09:22:38.045039    2799 status_manager.go:461] Failed to get status for pod "kube-controller-manager-minikube_kube-system(190578c14f9dbb22cf9d402d93b045c3)": Get https://localhost:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-minikube: dial tcp 127.0.0.1:8443: getsockopt: connection refused
Sep 21 09:22:38 minikube kubelet[2799]: W0921 09:22:38.051273    2799 status_manager.go:461] Failed to get status for pod "kube-scheduler-minikube_kube-system(31cf0ccbee286239d451edb6fb511513)": Get https://localhost:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-minikube: dial tcp 127.0.0.1:8443: getsockopt: connection refused
Sep 21 09:22:38 minikube kubelet[2799]: W0921 09:22:38.053028    2799 pod_container_deletor.go:77] Container "ff6a3f4cfc33a53e1b7272a145d47602d7b8ec95cd357c58b112c2aa5c7f03ad" not found in pod's containers
Sep 21 09:22:38 minikube kubelet[2799]: W0921 09:22:38.053057    2799 pod_container_deletor.go:77] Container "68921dfde66a2490cce602258270a3f326b9244a1da86e8c3bfb804427e12aea" not found in pod's containers
Sep 21 09:22:38 minikube kubelet[2799]: I0921 09:22:38.053093    2799 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Sep 21 09:22:38 minikube kubelet[2799]: I0921 09:22:38.053712    2799 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Sep 21 09:22:38 minikube kubelet[2799]: I0921 09:22:38.054831    2799 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Sep 21 09:22:38 minikube kubelet[2799]: W0921 09:22:38.057108    2799 status_manager.go:461] Failed to get status for pod "kube-addon-manager-minikube_kube-system(3afaf06535cc3b85be93c31632b765da)": Get https://localhost:8443/api/v1/namespaces/kube-system/pods/kube-addon-manager-minikube: dial tcp 127.0.0.1:8443: getsockopt: connection refused
Sep 21 09:22:38 minikube kubelet[2799]: W0921 09:22:38.061222    2799 pod_container_deletor.go:77] Container "32f2dc81f99d6b354f2ef65743e64eb4b0ceca0323410d155dc24b486a81a427" not found in pod's containers
Sep 21 09:22:38 minikube kubelet[2799]: W0921 09:22:38.061318    2799 pod_container_deletor.go:77] Container "750d82e2b7b5ad35832a0a17ce3b211f070d61f986188889c616667383612bec" not found in pod's containers
Sep 21 09:22:38 minikube kubelet[2799]: I0921 09:22:38.061382    2799 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Sep 21 09:22:38 minikube kubelet[2799]: I0921 09:22:38.062775    2799 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Sep 21 09:22:38 minikube kubelet[2799]: W0921 09:22:38.063594    2799 pod_container_deletor.go:77] Container "9d39e2134b26d8bda5a1a272ba6da707fdbf9d51c06bad73149743c10ebf47bf" not found in pod's containers
Sep 21 09:22:38 minikube kubelet[2799]: W0921 09:22:38.063618    2799 pod_container_deletor.go:77] Container "dec383721ca8ba10ad895ea757be43662e44bcff1a1b5b3684d818284ba4eb15" not found in pod's containers
Sep 21 09:22:38 minikube kubelet[2799]: W0921 09:22:38.063666    2799 pod_container_deletor.go:77] Container "c83b4bae67eb3aa901f737f4fcfd77ab10b1ac99dc792e28b3cd3071e33bebe0" not found in pod's containers
Sep 21 09:22:38 minikube kubelet[2799]: W0921 09:22:38.063674    2799 pod_container_deletor.go:77] Container "d8f43c85cf9b5aacea12546e771795a39dec43f899ddca684727648b9eeed6e1" not found in pod's containers
Sep 21 09:22:38 minikube kubelet[2799]: I0921 09:22:38.107435    2799 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/f310a64a00b8dfb921ddaee9f1867742-k8s-certs") pod "kube-apiserver-minikube" (UID: "f310a64a00b8dfb921ddaee9f1867742")
Sep 21 09:22:38 minikube kubelet[2799]: I0921 09:22:38.107474    2799 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/f310a64a00b8dfb921ddaee9f1867742-ca-certs") pod "kube-apiserver-minikube" (UID: "f310a64a00b8dfb921ddaee9f1867742")
Sep 21 09:22:38 minikube kubelet[2799]: I0921 09:22:38.107491    2799 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/190578c14f9dbb22cf9d402d93b045c3-ca-certs") pod "kube-controller-manager-minikube" (UID: "190578c14f9dbb22cf9d402d93b045c3")
Sep 21 09:22:38 minikube kubelet[2799]: I0921 09:22:38.107507    2799 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/3afaf06535cc3b85be93c31632b765da-kubeconfig") pod "kube-addon-manager-minikube" (UID: "3afaf06535cc3b85be93c31632b765da")
Sep 21 09:22:38 minikube kubelet[2799]: I0921 09:22:38.107534    2799 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/190578c14f9dbb22cf9d402d93b045c3-kubeconfig") pod "kube-controller-manager-minikube" (UID: "190578c14f9dbb22cf9d402d93b045c3")
Sep 21 09:22:38 minikube kubelet[2799]: I0921 09:22:38.107550    2799 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/31cf0ccbee286239d451edb6fb511513-kubeconfig") pod "kube-scheduler-minikube" (UID: "31cf0ccbee286239d451edb6fb511513")
Sep 21 09:22:38 minikube kubelet[2799]: I0921 09:22:38.107566    2799 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "addons" (UniqueName: "kubernetes.io/host-path/3afaf06535cc3b85be93c31632b765da-addons") pod "kube-addon-manager-minikube" (UID: "3afaf06535cc3b85be93c31632b765da")
Sep 21 09:22:38 minikube kubelet[2799]: I0921 09:22:38.107582    2799 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/190578c14f9dbb22cf9d402d93b045c3-k8s-certs") pod "kube-controller-manager-minikube" (UID: "190578c14f9dbb22cf9d402d93b045c3")
Sep 21 09:22:38 minikube kubelet[2799]: I0921 09:22:38.579507    2799 kuberuntime_manager.go:757] checking backoff for container "kube-scheduler" in pod "kube-scheduler-minikube_kube-system(31cf0ccbee286239d451edb6fb511513)"
Sep 21 09:22:38 minikube kubelet[2799]: I0921 09:22:38.679718    2799 kuberuntime_manager.go:757] checking backoff for container "kube-addon-manager" in pod "kube-addon-manager-minikube_kube-system(3afaf06535cc3b85be93c31632b765da)"
Sep 21 09:22:38 minikube kubelet[2799]: E0921 09:22:38.838757    2799 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dminikube&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: getsockopt: connection refused
Sep 21 09:22:38 minikube kubelet[2799]: E0921 09:22:38.842620    2799 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: getsockopt: connection refused
Sep 21 09:22:38 minikube kubelet[2799]: E0921 09:22:38.846237    2799 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: getsockopt: connection refused
Sep 21 09:22:39 minikube kubelet[2799]: I0921 09:22:39.361563    2799 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Sep 21 09:22:39 minikube kubelet[2799]: W0921 09:22:39.364639    2799 status_manager.go:461] Failed to get status for pod "kube-scheduler-minikube_kube-system(31cf0ccbee286239d451edb6fb511513)": Get https://localhost:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-minikube: dial tcp 127.0.0.1:8443: getsockopt: connection refused
Sep 21 09:22:39 minikube kubelet[2799]: I0921 09:22:39.369002    2799 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Sep 21 09:22:39 minikube kubelet[2799]: W0921 09:22:39.392358    2799 status_manager.go:461] Failed to get status for pod "kube-apiserver-minikube_kube-system(f310a64a00b8dfb921ddaee9f1867742)": Get https://localhost:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-minikube: dial tcp 127.0.0.1:8443: getsockopt: connection refused
Sep 21 09:22:39 minikube kubelet[2799]: I0921 09:22:39.395724    2799 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Sep 21 09:22:39 minikube kubelet[2799]: W0921 09:22:39.398303    2799 status_manager.go:461] Failed to get status for pod "etcd-minikube_kube-system(b8f756ea85895bda697065768b18a93a)": Get https://localhost:8443/api/v1/namespaces/kube-system/pods/etcd-minikube: dial tcp 127.0.0.1:8443: getsockopt: connection refused
Sep 21 09:22:39 minikube kubelet[2799]: I0921 09:22:39.400653    2799 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Sep 21 09:22:39 minikube kubelet[2799]: W0921 09:22:39.401953    2799 status_manager.go:461] Failed to get status for pod "kube-controller-manager-minikube_kube-system(190578c14f9dbb22cf9d402d93b045c3)": Get https://localhost:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-minikube: dial tcp 127.0.0.1:8443: getsockopt: connection refused
Sep 21 09:22:39 minikube kubelet[2799]: E0921 09:22:39.839260    2799 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dminikube&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: getsockopt: connection refused
Sep 21 09:22:39 minikube kubelet[2799]: E0921 09:22:39.842957    2799 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: getsockopt: connection refused
Sep 21 09:22:39 minikube kubelet[2799]: E0921 09:22:39.846610    2799 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: getsockopt: connection refused
Sep 21 09:22:40 minikube kubelet[2799]: I0921 09:22:40.412598    2799 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Sep 21 09:22:40 minikube kubelet[2799]: I0921 09:22:40.414422    2799 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Sep 21 09:22:40 minikube kubelet[2799]: I0921 09:22:40.416987    2799 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Sep 21 09:22:40 minikube kubelet[2799]: I0921 09:22:40.417403    2799 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Sep 21 09:22:40 minikube kubelet[2799]: I0921 09:22:40.417571    2799 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Sep 21 09:22:41 minikube kubelet[2799]: I0921 09:22:41.211268    2799 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Sep 21 09:22:41 minikube kubelet[2799]: I0921 09:22:41.212499    2799 kubelet_node_status.go:82] Attempting to register node minikube
Sep 21 09:22:41 minikube kubelet[2799]: I0921 09:22:41.417063    2799 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Sep 21 09:22:43 minikube kubelet[2799]: W0921 09:22:43.903361    2799 kubelet.go:1602] Deleting mirror pod "etcd-minikube_kube-system(975c313f-bc57-11e8-9b63-080027824c94)" because it is outdated
Sep 21 09:22:43 minikube kubelet[2799]: W0921 09:22:43.904647    2799 kubelet.go:1602] Deleting mirror pod "kube-apiserver-minikube_kube-system(9677af06-bc57-11e8-9b63-080027824c94)" because it is outdated
Sep 21 09:22:43 minikube kubelet[2799]: W0921 09:22:43.904853    2799 kubelet.go:1602] Deleting mirror pod "kube-controller-manager-minikube_kube-system(9677e29f-bc57-11e8-9b63-080027824c94)" because it is outdated
Sep 21 09:22:44 minikube kubelet[2799]: I0921 09:22:44.044004    2799 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/f599affd-bc4d-11e8-8617-080027824c94-tmp") pod "storage-provisioner" (UID: "f599affd-bc4d-11e8-8617-080027824c94")
Sep 21 09:22:44 minikube kubelet[2799]: I0921 09:22:44.044044    2799 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-xjkss" (UniqueName: "kubernetes.io/secret/f59107f4-bc4d-11e8-8617-080027824c94-default-token-xjkss") pod "kubernetes-dashboard-5498ccf677-g4hqx" (UID: "f59107f4-bc4d-11e8-8617-080027824c94")
Sep 21 09:22:44 minikube kubelet[2799]: I0921 09:22:44.044063    2799 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-cd9hj" (UniqueName: "kubernetes.io/secret/f599affd-bc4d-11e8-8617-080027824c94-storage-provisioner-token-cd9hj") pod "storage-provisioner" (UID: "f599affd-bc4d-11e8-8617-080027824c94")
Sep 21 09:22:44 minikube kubelet[2799]: I0921 09:22:44.044074    2799 reconciler.go:154] Reconciler: start to sync state
Sep 21 09:22:48 minikube kubelet[2799]: I0921 09:22:48.021406    2799 kubelet_node_status.go:127] Node minikube was previously registered
Sep 21 09:22:48 minikube kubelet[2799]: I0921 09:22:48.022505    2799 kubelet_node_status.go:85] Successfully registered node minikube
Sep 21 09:23:44 minikube kubelet[2799]: I0921 09:23:44.384997    2799 kuberuntime_manager.go:513] Container {Name:storage-provisioner Image:gcr.io/k8s-minikube/storage-provisioner:v1.8.1 Command:[/storage-provisioner] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:tmp ReadOnly:false MountPath:/tmp SubPath: MountPropagation:<nil>} {Name:storage-provisioner-token-cd9hj ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Sep 21 09:23:44 minikube kubelet[2799]: I0921 09:23:44.385098    2799 kuberuntime_manager.go:757] checking backoff for container "storage-provisioner" in pod "storage-provisioner_kube-system(f599affd-bc4d-11e8-8617-080027824c94)"
Sep 21 09:24:15 minikube kubelet[2799]: I0921 09:24:15.719950    2799 kuberuntime_manager.go:513] Container {Name:storage-provisioner Image:gcr.io/k8s-minikube/storage-provisioner:v1.8.1 Command:[/storage-provisioner] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:tmp ReadOnly:false MountPath:/tmp SubPath: MountPropagation:<nil>} {Name:storage-provisioner-token-cd9hj ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Sep 21 09:24:15 minikube kubelet[2799]: I0921 09:24:15.721176    2799 kuberuntime_manager.go:757] checking backoff for container "storage-provisioner" in pod "storage-provisioner_kube-system(f599affd-bc4d-11e8-8617-080027824c94)"
Sep 21 09:24:15 minikube kubelet[2799]: I0921 09:24:15.721301    2799 kuberuntime_manager.go:767] Back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f599affd-bc4d-11e8-8617-080027824c94)
Sep 21 09:24:15 minikube kubelet[2799]: E0921 09:24:15.721384    2799 pod_workers.go:186] Error syncing pod f599affd-bc4d-11e8-8617-080027824c94 ("storage-provisioner_kube-system(f599affd-bc4d-11e8-8617-080027824c94)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f599affd-bc4d-11e8-8617-080027824c94)"
Sep 21 09:24:30 minikube kubelet[2799]: I0921 09:24:30.191949    2799 kuberuntime_manager.go:513] Container {Name:storage-provisioner Image:gcr.io/k8s-minikube/storage-provisioner:v1.8.1 Command:[/storage-provisioner] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:tmp ReadOnly:false MountPath:/tmp SubPath: MountPropagation:<nil>} {Name:storage-provisioner-token-cd9hj ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Sep 21 09:24:30 minikube kubelet[2799]: I0921 09:24:30.192118    2799 kuberuntime_manager.go:757] checking backoff for container "storage-provisioner" in pod "storage-provisioner_kube-system(f599affd-bc4d-11e8-8617-080027824c94)"
Sep 21 09:24:32 minikube kubelet[2799]: I0921 09:24:32.921788    2799 kuberuntime_manager.go:513] Container {Name:kubernetes-dashboard Image:k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.1 Command:[] Args:[] WorkingDir: Ports:[{Name: HostPort:0 ContainerPort:9090 Protocol:TCP HostIP:}] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:default-token-xjkss ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:9090,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:30,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Sep 21 09:24:32 minikube kubelet[2799]: I0921 09:24:32.921945    2799 kuberuntime_manager.go:757] checking backoff for container "kubernetes-dashboard" in pod "kubernetes-dashboard-5498ccf677-g4hqx_kube-system(f59107f4-bc4d-11e8-8617-080027824c94)"
Sep 21 09:25:01 minikube kubelet[2799]: I0921 09:25:01.197378    2799 kuberuntime_manager.go:513] Container {Name:storage-provisioner Image:gcr.io/k8s-minikube/storage-provisioner:v1.8.1 Command:[/storage-provisioner] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:tmp ReadOnly:false MountPath:/tmp SubPath: MountPropagation:<nil>} {Name:storage-provisioner-token-cd9hj ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Sep 21 09:25:01 minikube kubelet[2799]: I0921 09:25:01.197550    2799 kuberuntime_manager.go:757] checking backoff for container "storage-provisioner" in pod "storage-provisioner_kube-system(f599affd-bc4d-11e8-8617-080027824c94)"
Sep 21 09:25:01 minikube kubelet[2799]: I0921 09:25:01.197678    2799 kuberuntime_manager.go:767] Back-off 20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f599affd-bc4d-11e8-8617-080027824c94)
Sep 21 09:25:01 minikube kubelet[2799]: E0921 09:25:01.197717    2799 pod_workers.go:186] Error syncing pod f599affd-bc4d-11e8-8617-080027824c94 ("storage-provisioner_kube-system(f599affd-bc4d-11e8-8617-080027824c94)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f599affd-bc4d-11e8-8617-080027824c94)"
Sep 21 09:25:04 minikube kubelet[2799]: I0921 09:25:04.222087    2799 kuberuntime_manager.go:513] Container {Name:kubernetes-dashboard Image:k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.1 Command:[] Args:[] WorkingDir: Ports:[{Name: HostPort:0 ContainerPort:9090 Protocol:TCP HostIP:}] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:default-token-xjkss ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:9090,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:30,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Sep 21 09:25:04 minikube kubelet[2799]: I0921 09:25:04.222208    2799 kuberuntime_manager.go:757] checking backoff for container "kubernetes-dashboard" in pod "kubernetes-dashboard-5498ccf677-g4hqx_kube-system(f59107f4-bc4d-11e8-8617-080027824c94)"
Sep 21 09:25:04 minikube kubelet[2799]: I0921 09:25:04.222301    2799 kuberuntime_manager.go:767] Back-off 10s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-5498ccf677-g4hqx_kube-system(f59107f4-bc4d-11e8-8617-080027824c94)
Sep 21 09:25:04 minikube kubelet[2799]: E0921 09:25:04.222327    2799 pod_workers.go:186] Error syncing pod f59107f4-bc4d-11e8-8617-080027824c94 ("kubernetes-dashboard-5498ccf677-g4hqx_kube-system(f59107f4-bc4d-11e8-8617-080027824c94)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 10s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-5498ccf677-g4hqx_kube-system(f59107f4-bc4d-11e8-8617-080027824c94)"
Sep 21 09:25:05 minikube kubelet[2799]: I0921 09:25:05.618625    2799 kuberuntime_manager.go:513] Container {Name:kubernetes-dashboard Image:k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.1 Command:[] Args:[] WorkingDir: Ports:[{Name: HostPort:0 ContainerPort:9090 Protocol:TCP HostIP:}] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:default-token-xjkss ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:9090,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:30,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Sep 21 09:25:05 minikube kubelet[2799]: I0921 09:25:05.619360    2799 kuberuntime_manager.go:757] checking backoff for container "kubernetes-dashboard" in pod "kubernetes-dashboard-5498ccf677-g4hqx_kube-system(f59107f4-bc4d-11e8-8617-080027824c94)"
Sep 21 09:25:05 minikube kubelet[2799]: I0921 09:25:05.619515    2799 kuberuntime_manager.go:767] Back-off 10s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-5498ccf677-g4hqx_kube-system(f59107f4-bc4d-11e8-8617-080027824c94)
Sep 21 09:25:05 minikube kubelet[2799]: E0921 09:25:05.619554    2799 pod_workers.go:186] Error syncing pod f59107f4-bc4d-11e8-8617-080027824c94 ("kubernetes-dashboard-5498ccf677-g4hqx_kube-system(f59107f4-bc4d-11e8-8617-080027824c94)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 10s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-5498ccf677-g4hqx_kube-system(f59107f4-bc4d-11e8-8617-080027824c94)"
Sep 21 09:25:12 minikube kubelet[2799]: I0921 09:25:12.192519    2799 kuberuntime_manager.go:513] Container {Name:storage-provisioner Image:gcr.io/k8s-minikube/storage-provisioner:v1.8.1 Command:[/storage-provisioner] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:tmp ReadOnly:false MountPath:/tmp SubPath: MountPropagation:<nil>} {Name:storage-provisioner-token-cd9hj ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Sep 21 09:25:12 minikube kubelet[2799]: I0921 09:25:12.192710    2799 kuberuntime_manager.go:757] checking backoff for container "storage-provisioner" in pod "storage-provisioner_kube-system(f599affd-bc4d-11e8-8617-080027824c94)"
Sep 21 09:25:12 minikube kubelet[2799]: I0921 09:25:12.192815    2799 kuberuntime_manager.go:767] Back-off 20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f599affd-bc4d-11e8-8617-080027824c94)
Sep 21 09:25:12 minikube kubelet[2799]: E0921 09:25:12.192847    2799 pod_workers.go:186] Error syncing pod f599affd-bc4d-11e8-8617-080027824c94 ("storage-provisioner_kube-system(f599affd-bc4d-11e8-8617-080027824c94)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f599affd-bc4d-11e8-8617-080027824c94)"
Sep 21 09:25:18 minikube kubelet[2799]: I0921 09:25:18.195201    2799 kuberuntime_manager.go:513] Container {Name:kubernetes-dashboard Image:k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.1 Command:[] Args:[] WorkingDir: Ports:[{Name: HostPort:0 ContainerPort:9090 Protocol:TCP HostIP:}] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:default-token-xjkss ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:9090,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:30,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Sep 21 09:25:18 minikube kubelet[2799]: I0921 09:25:18.195399    2799 kuberuntime_manager.go:757] checking backoff for container "kubernetes-dashboard" in pod "kubernetes-dashboard-5498ccf677-g4hqx_kube-system(f59107f4-bc4d-11e8-8617-080027824c94)"
Sep 21 09:25:23 minikube kubelet[2799]: I0921 09:25:23.192324    2799 kuberuntime_manager.go:513] Container {Name:storage-provisioner Image:gcr.io/k8s-minikube/storage-provisioner:v1.8.1 Command:[/storage-provisioner] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:tmp ReadOnly:false MountPath:/tmp SubPath: MountPropagation:<nil>} {Name:storage-provisioner-token-cd9hj ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Sep 21 09:25:23 minikube kubelet[2799]: I0921 09:25:23.192599    2799 kuberuntime_manager.go:757] checking backoff for container "storage-provisioner" in pod "storage-provisioner_kube-system(f599affd-bc4d-11e8-8617-080027824c94)"
Sep 21 09:25:49 minikube kubelet[2799]: I0921 09:25:49.702451    2799 kuberuntime_manager.go:513] Container {Name:kubernetes-dashboard Image:k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.1 Command:[] Args:[] WorkingDir: Ports:[{Name: HostPort:0 ContainerPort:9090 Protocol:TCP HostIP:}] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:default-token-xjkss ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:9090,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:30,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Sep 21 09:25:49 minikube kubelet[2799]: I0921 09:25:49.702630    2799 kuberuntime_manager.go:757] checking backoff for container "kubernetes-dashboard" in pod "kubernetes-dashboard-5498ccf677-g4hqx_kube-system(f59107f4-bc4d-11e8-8617-080027824c94)"
Sep 21 09:25:49 minikube kubelet[2799]: I0921 09:25:49.702755    2799 kuberuntime_manager.go:767] Back-off 20s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-5498ccf677-g4hqx_kube-system(f59107f4-bc4d-11e8-8617-080027824c94)
Sep 21 09:25:49 minikube kubelet[2799]: E0921 09:25:49.702795    2799 pod_workers.go:186] Error syncing pod f59107f4-bc4d-11e8-8617-080027824c94 ("kubernetes-dashboard-5498ccf677-g4hqx_kube-system(f59107f4-bc4d-11e8-8617-080027824c94)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 20s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-5498ccf677-g4hqx_kube-system(f59107f4-bc4d-11e8-8617-080027824c94)"
Sep 21 09:25:54 minikube kubelet[2799]: I0921 09:25:54.776233    2799 kuberuntime_manager.go:513] Container {Name:storage-provisioner Image:gcr.io/k8s-minikube/storage-provisioner:v1.8.1 Command:[/storage-provisioner] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:tmp ReadOnly:false MountPath:/tmp SubPath: MountPropagation:<nil>} {Name:storage-provisioner-token-cd9hj ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Sep 21 09:25:54 minikube kubelet[2799]: I0921 09:25:54.776404    2799 kuberuntime_manager.go:757] checking backoff for container "storage-provisioner" in pod "storage-provisioner_kube-system(f599affd-bc4d-11e8-8617-080027824c94)"
Sep 21 09:25:54 minikube kubelet[2799]: I0921 09:25:54.776509    2799 kuberuntime_manager.go:767] Back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f599affd-bc4d-11e8-8617-080027824c94)
Sep 21 09:25:54 minikube kubelet[2799]: E0921 09:25:54.776548    2799 pod_workers.go:186] Error syncing pod f599affd-bc4d-11e8-8617-080027824c94 ("storage-provisioner_kube-system(f599affd-bc4d-11e8-8617-080027824c94)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f599affd-bc4d-11e8-8617-080027824c94)"
Sep 21 09:25:55 minikube kubelet[2799]: I0921 09:25:55.618761    2799 kuberuntime_manager.go:513] Container {Name:kubernetes-dashboard Image:k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.1 Command:[] Args:[] WorkingDir: Ports:[{Name: HostPort:0 ContainerPort:9090 Protocol:TCP HostIP:}] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:default-token-xjkss ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:9090,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:30,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Sep 21 09:25:55 minikube kubelet[2799]: I0921 09:25:55.621130    2799 kuberuntime_manager.go:757] checking backoff for container "kubernetes-dashboard" in pod "kubernetes-dashboard-5498ccf677-g4hqx_kube-system(f59107f4-bc4d-11e8-8617-080027824c94)"
Sep 21 09:25:55 minikube kubelet[2799]: I0921 09:25:55.621449    2799 kuberuntime_manager.go:767] Back-off 20s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-5498ccf677-g4hqx_kube-system(f59107f4-bc4d-11e8-8617-080027824c94)
Sep 21 09:25:55 minikube kubelet[2799]: E0921 09:25:55.621596    2799 pod_workers.go:186] Error syncing pod f59107f4-bc4d-11e8-8617-080027824c94 ("kubernetes-dashboard-5498ccf677-g4hqx_kube-system(f59107f4-bc4d-11e8-8617-080027824c94)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 20s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-5498ccf677-g4hqx_kube-system(f59107f4-bc4d-11e8-8617-080027824c94)"
Sep 21 09:26:07 minikube kubelet[2799]: I0921 09:26:07.194117    2799 kuberuntime_manager.go:513] Container {Name:kubernetes-dashboard Image:k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.1 Command:[] Args:[] WorkingDir: Ports:[{Name: HostPort:0 ContainerPort:9090 Protocol:TCP HostIP:}] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:default-token-xjkss ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:9090,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:30,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Sep 21 09:26:07 minikube kubelet[2799]: I0921 09:26:07.197483    2799 kuberuntime_manager.go:757] checking backoff for container "kubernetes-dashboard" in pod "kubernetes-dashboard-5498ccf677-g4hqx_kube-system(f59107f4-bc4d-11e8-8617-080027824c94)"
Sep 21 09:26:07 minikube kubelet[2799]: I0921 09:26:07.197828    2799 kuberuntime_manager.go:767] Back-off 20s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-5498ccf677-g4hqx_kube-system(f59107f4-bc4d-11e8-8617-080027824c94)
Sep 21 09:26:07 minikube kubelet[2799]: E0921 09:26:07.197947    2799 pod_workers.go:186] Error syncing pod f59107f4-bc4d-11e8-8617-080027824c94 ("kubernetes-dashboard-5498ccf677-g4hqx_kube-system(f59107f4-bc4d-11e8-8617-080027824c94)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 20s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-5498ccf677-g4hqx_kube-system(f59107f4-bc4d-11e8-8617-080027824c94)"
Sep 21 09:26:08 minikube kubelet[2799]: I0921 09:26:08.193029    2799 kuberuntime_manager.go:513] Container {Name:storage-provisioner Image:gcr.io/k8s-minikube/storage-provisioner:v1.8.1 Command:[/storage-provisioner] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:tmp ReadOnly:false MountPath:/tmp SubPath: MountPropagation:<nil>} {Name:storage-provisioner-token-cd9hj ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Sep 21 09:26:08 minikube kubelet[2799]: I0921 09:26:08.195239    2799 kuberuntime_manager.go:757] checking backoff for container "storage-provisioner" in pod "storage-provisioner_kube-system(f599affd-bc4d-11e8-8617-080027824c94)"
Sep 21 09:26:08 minikube kubelet[2799]: I0921 09:26:08.196171    2799 kuberuntime_manager.go:767] Back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f599affd-bc4d-11e8-8617-080027824c94)
Sep 21 09:26:08 minikube kubelet[2799]: E0921 09:26:08.196587    2799 pod_workers.go:186] Error syncing pod f599affd-bc4d-11e8-8617-080027824c94 ("storage-provisioner_kube-system(f599affd-bc4d-11e8-8617-080027824c94)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f599affd-bc4d-11e8-8617-080027824c94)"
Sep 21 09:26:20 minikube kubelet[2799]: I0921 09:26:20.192016    2799 kuberuntime_manager.go:513] Container {Name:kubernetes-dashboard Image:k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.1 Command:[] Args:[] WorkingDir: Ports:[{Name: HostPort:0 ContainerPort:9090 Protocol:TCP HostIP:}] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:default-token-xjkss ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:9090,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:30,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Sep 21 09:26:20 minikube kubelet[2799]: I0921 09:26:20.193262    2799 kuberuntime_manager.go:757] checking backoff for container "kubernetes-dashboard" in pod "kubernetes-dashboard-5498ccf677-g4hqx_kube-system(f59107f4-bc4d-11e8-8617-080027824c94)"
Sep 21 09:26:23 minikube kubelet[2799]: I0921 09:26:23.194292    2799 kuberuntime_manager.go:513] Container {Name:storage-provisioner Image:gcr.io/k8s-minikube/storage-provisioner:v1.8.1 Command:[/storage-provisioner] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:tmp ReadOnly:false MountPath:/tmp SubPath: MountPropagation:<nil>} {Name:storage-provisioner-token-cd9hj ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Sep 21 09:26:23 minikube kubelet[2799]: I0921 09:26:23.195490    2799 kuberuntime_manager.go:757] checking backoff for container "storage-provisioner" in pod "storage-provisioner_kube-system(f599affd-bc4d-11e8-8617-080027824c94)"
Sep 21 09:26:23 minikube kubelet[2799]: I0921 09:26:23.195597    2799 kuberuntime_manager.go:767] Back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f599affd-bc4d-11e8-8617-080027824c94)
Sep 21 09:26:23 minikube kubelet[2799]: E0921 09:26:23.195674    2799 pod_workers.go:186] Error syncing pod f599affd-bc4d-11e8-8617-080027824c94 ("storage-provisioner_kube-system(f599affd-bc4d-11e8-8617-080027824c94)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f599affd-bc4d-11e8-8617-080027824c94)"
Sep 21 09:26:36 minikube kubelet[2799]: I0921 09:26:36.192098    2799 kuberuntime_manager.go:513] Container {Name:storage-provisioner Image:gcr.io/k8s-minikube/storage-provisioner:v1.8.1 Command:[/storage-provisioner] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:tmp ReadOnly:false MountPath:/tmp SubPath: MountPropagation:<nil>} {Name:storage-provisioner-token-cd9hj ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Sep 21 09:26:36 minikube kubelet[2799]: I0921 09:26:36.192190    2799 kuberuntime_manager.go:757] checking backoff for container "storage-provisioner" in pod "storage-provisioner_kube-system(f599affd-bc4d-11e8-8617-080027824c94)"
Sep 21 09:26:51 minikube kubelet[2799]: I0921 09:26:51.351045    2799 kuberuntime_manager.go:513] Container {Name:kubernetes-dashboard Image:k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.1 Command:[] Args:[] WorkingDir: Ports:[{Name: HostPort:0 ContainerPort:9090 Protocol:TCP HostIP:}] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:default-token-xjkss ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:9090,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:30,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Sep 21 09:26:51 minikube kubelet[2799]: I0921 09:26:51.351240    2799 kuberuntime_manager.go:757] checking backoff for container "kubernetes-dashboard" in pod "kubernetes-dashboard-5498ccf677-g4hqx_kube-system(f59107f4-bc4d-11e8-8617-080027824c94)"
Sep 21 09:26:51 minikube kubelet[2799]: I0921 09:26:51.351392    2799 kuberuntime_manager.go:767] Back-off 40s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-5498ccf677-g4hqx_kube-system(f59107f4-bc4d-11e8-8617-080027824c94)
Sep 21 09:26:51 minikube kubelet[2799]: E0921 09:26:51.351432    2799 pod_workers.go:186] Error syncing pod f59107f4-bc4d-11e8-8617-080027824c94 ("kubernetes-dashboard-5498ccf677-g4hqx_kube-system(f59107f4-bc4d-11e8-8617-080027824c94)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 40s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-5498ccf677-g4hqx_kube-system(f59107f4-bc4d-11e8-8617-080027824c94)"
Sep 21 09:26:55 minikube kubelet[2799]: I0921 09:26:55.619376    2799 kuberuntime_manager.go:513] Container {Name:kubernetes-dashboard Image:k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.1 Command:[] Args:[] WorkingDir: Ports:[{Name: HostPort:0 ContainerPort:9090 Protocol:TCP HostIP:}] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:default-token-xjkss ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:9090,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:30,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Sep 21 09:26:55 minikube kubelet[2799]: I0921 09:26:55.620601    2799 kuberuntime_manager.go:757] checking backoff for container "kubernetes-dashboard" in pod "kubernetes-dashboard-5498ccf677-g4hqx_kube-system(f59107f4-bc4d-11e8-8617-080027824c94)"
Sep 21 09:26:55 minikube kubelet[2799]: I0921 09:26:55.620698    2799 kuberuntime_manager.go:767] Back-off 40s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-5498ccf677-g4hqx_kube-system(f59107f4-bc4d-11e8-8617-080027824c94)
Sep 21 09:26:55 minikube kubelet[2799]: E0921 09:26:55.620749    2799 pod_workers.go:186] Error syncing pod f59107f4-bc4d-11e8-8617-080027824c94 ("kubernetes-dashboard-5498ccf677-g4hqx_kube-system(f59107f4-bc4d-11e8-8617-080027824c94)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 40s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-5498ccf677-g4hqx_kube-system(f59107f4-bc4d-11e8-8617-080027824c94)"
Sep 21 09:27:07 minikube kubelet[2799]: I0921 09:27:07.578228    2799 kuberuntime_manager.go:513] Container {Name:storage-provisioner Image:gcr.io/k8s-minikube/storage-provisioner:v1.8.1 Command:[/storage-provisioner] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:tmp ReadOnly:false MountPath:/tmp SubPath: MountPropagation:<nil>} {Name:storage-provisioner-token-cd9hj ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Sep 21 09:27:07 minikube kubelet[2799]: I0921 09:27:07.578406    2799 kuberuntime_manager.go:757] checking backoff for container "storage-provisioner" in pod "storage-provisioner_kube-system(f599affd-bc4d-11e8-8617-080027824c94)"
Sep 21 09:27:07 minikube kubelet[2799]: I0921 09:27:07.578510    2799 kuberuntime_manager.go:767] Back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f599affd-bc4d-11e8-8617-080027824c94)
Sep 21 09:27:07 minikube kubelet[2799]: E0921 09:27:07.578556    2799 pod_workers.go:186] Error syncing pod f599affd-bc4d-11e8-8617-080027824c94 ("storage-provisioner_kube-system(f599affd-bc4d-11e8-8617-080027824c94)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f599affd-bc4d-11e8-8617-080027824c94)"
Sep 21 09:27:08 minikube kubelet[2799]: I0921 09:27:08.192173    2799 kuberuntime_manager.go:513] Container {Name:kubernetes-dashboard Image:k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.1 Command:[] Args:[] WorkingDir: Ports:[{Name: HostPort:0 ContainerPort:9090 Protocol:TCP HostIP:}] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:default-token-xjkss ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:9090,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:30,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Sep 21 09:27:08 minikube kubelet[2799]: I0921 09:27:08.193351    2799 kuberuntime_manager.go:757] checking backoff for container "kubernetes-dashboard" in pod "kubernetes-dashboard-5498ccf677-g4hqx_kube-system(f59107f4-bc4d-11e8-8617-080027824c94)"
Sep 21 09:27:08 minikube kubelet[2799]: I0921 09:27:08.193489    2799 kuberuntime_manager.go:767] Back-off 40s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-5498ccf677-g4hqx_kube-system(f59107f4-bc4d-11e8-8617-080027824c94)
Sep 21 09:27:08 minikube kubelet[2799]: E0921 09:27:08.193527    2799 pod_workers.go:186] Error syncing pod f59107f4-bc4d-11e8-8617-080027824c94 ("kubernetes-dashboard-5498ccf677-g4hqx_kube-system(f59107f4-bc4d-11e8-8617-080027824c94)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 40s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-5498ccf677-g4hqx_kube-system(f59107f4-bc4d-11e8-8617-080027824c94)"
Sep 21 09:27:19 minikube kubelet[2799]: I0921 09:27:19.194879    2799 kuberuntime_manager.go:513] Container {Name:kubernetes-dashboard Image:k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.1 Command:[] Args:[] WorkingDir: Ports:[{Name: HostPort:0 ContainerPort:9090 Protocol:TCP HostIP:}] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:default-token-xjkss ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:9090,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:30,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Sep 21 09:27:19 minikube kubelet[2799]: I0921 09:27:19.196048    2799 kuberuntime_manager.go:757] checking backoff for container "kubernetes-dashboard" in pod "kubernetes-dashboard-5498ccf677-g4hqx_kube-system(f59107f4-bc4d-11e8-8617-080027824c94)"
Sep 21 09:27:19 minikube kubelet[2799]: I0921 09:27:19.196307    2799 kuberuntime_manager.go:767] Back-off 40s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-5498ccf677-g4hqx_kube-system(f59107f4-bc4d-11e8-8617-080027824c94)
Sep 21 09:27:19 minikube kubelet[2799]: E0921 09:27:19.196365    2799 pod_workers.go:186] Error syncing pod f59107f4-bc4d-11e8-8617-080027824c94 ("kubernetes-dashboard-5498ccf677-g4hqx_kube-system(f59107f4-bc4d-11e8-8617-080027824c94)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 40s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-5498ccf677-g4hqx_kube-system(f59107f4-bc4d-11e8-8617-080027824c94)"
Sep 21 09:27:21 minikube kubelet[2799]: I0921 09:27:21.200659    2799 kuberuntime_manager.go:513] Container {Name:storage-provisioner Image:gcr.io/k8s-minikube/storage-provisioner:v1.8.1 Command:[/storage-provisioner] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:tmp ReadOnly:false MountPath:/tmp SubPath: MountPropagation:<nil>} {Name:storage-provisioner-token-cd9hj ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Sep 21 09:27:21 minikube kubelet[2799]: I0921 09:27:21.200787    2799 kuberuntime_manager.go:757] checking backoff for container "storage-provisioner" in pod "storage-provisioner_kube-system(f599affd-bc4d-11e8-8617-080027824c94)"
Sep 21 09:27:21 minikube kubelet[2799]: I0921 09:27:21.200863    2799 kuberuntime_manager.go:767] Back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f599affd-bc4d-11e8-8617-080027824c94)
Sep 21 09:27:21 minikube kubelet[2799]: E0921 09:27:21.200888    2799 pod_workers.go:186] Error syncing pod f599affd-bc4d-11e8-8617-080027824c94 ("storage-provisioner_kube-system(f599affd-bc4d-11e8-8617-080027824c94)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f599affd-bc4d-11e8-8617-080027824c94)"
Sep 21 09:27:30 minikube kubelet[2799]: I0921 09:27:30.192181    2799 kuberuntime_manager.go:513] Container {Name:kubernetes-dashboard Image:k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.1 Command:[] Args:[] WorkingDir: Ports:[{Name: HostPort:0 ContainerPort:9090 Protocol:TCP HostIP:}] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:default-token-xjkss ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:9090,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:30,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Sep 21 09:27:30 minikube kubelet[2799]: I0921 09:27:30.192316    2799 kuberuntime_manager.go:757] checking backoff for container "kubernetes-dashboard" in pod "kubernetes-dashboard-5498ccf677-g4hqx_kube-system(f59107f4-bc4d-11e8-8617-080027824c94)"
Sep 21 09:27:30 minikube kubelet[2799]: I0921 09:27:30.192420    2799 kuberuntime_manager.go:767] Back-off 40s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-5498ccf677-g4hqx_kube-system(f59107f4-bc4d-11e8-8617-080027824c94)
Sep 21 09:27:30 minikube kubelet[2799]: E0921 09:27:30.192444    2799 pod_workers.go:186] Error syncing pod f59107f4-bc4d-11e8-8617-080027824c94 ("kubernetes-dashboard-5498ccf677-g4hqx_kube-system(f59107f4-bc4d-11e8-8617-080027824c94)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 40s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-5498ccf677-g4hqx_kube-system(f59107f4-bc4d-11e8-8617-080027824c94)"
Sep 21 09:27:35 minikube kubelet[2799]: I0921 09:27:35.199016    2799 kuberuntime_manager.go:513] Container {Name:storage-provisioner Image:gcr.io/k8s-minikube/storage-provisioner:v1.8.1 Command:[/storage-provisioner] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:tmp ReadOnly:false MountPath:/tmp SubPath: MountPropagation:<nil>} {Name:storage-provisioner-token-cd9hj ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Sep 21 09:27:35 minikube kubelet[2799]: I0921 09:27:35.199207    2799 kuberuntime_manager.go:757] checking backoff for container "storage-provisioner" in pod "storage-provisioner_kube-system(f599affd-bc4d-11e8-8617-080027824c94)"
Sep 21 09:27:35 minikube kubelet[2799]: I0921 09:27:35.199326    2799 kuberuntime_manager.go:767] Back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f599affd-bc4d-11e8-8617-080027824c94)
Sep 21 09:27:35 minikube kubelet[2799]: E0921 09:27:35.199364    2799 pod_workers.go:186] Error syncing pod f599affd-bc4d-11e8-8617-080027824c94 ("storage-provisioner_kube-system(f599affd-bc4d-11e8-8617-080027824c94)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f599affd-bc4d-11e8-8617-080027824c94)"
Sep 21 09:27:42 minikube kubelet[2799]: I0921 09:27:42.192378    2799 kuberuntime_manager.go:513] Container {Name:kubernetes-dashboard Image:k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.1 Command:[] Args:[] WorkingDir: Ports:[{Name: HostPort:0 ContainerPort:9090 Protocol:TCP HostIP:}] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:default-token-xjkss ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:9090,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:30,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Sep 21 09:27:42 minikube kubelet[2799]: I0921 09:27:42.193595    2799 kuberuntime_manager.go:757] checking backoff for container "kubernetes-dashboard" in pod "kubernetes-dashboard-5498ccf677-g4hqx_kube-system(f59107f4-bc4d-11e8-8617-080027824c94)"
Sep 21 09:27:51 minikube kubelet[2799]: I0921 09:27:51.196861    2799 kuberuntime_manager.go:513] Container {Name:storage-provisioner Image:gcr.io/k8s-minikube/storage-provisioner:v1.8.1 Command:[/storage-provisioner] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:tmp ReadOnly:false MountPath:/tmp SubPath: MountPropagation:<nil>} {Name:storage-provisioner-token-cd9hj ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Sep 21 09:27:51 minikube kubelet[2799]: I0921 09:27:51.197884    2799 kuberuntime_manager.go:757] checking backoff for container "storage-provisioner" in pod "storage-provisioner_kube-system(f599affd-bc4d-11e8-8617-080027824c94)"
Sep 21 09:27:51 minikube kubelet[2799]: I0921 09:27:51.197937    2799 kuberuntime_manager.go:767] Back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f599affd-bc4d-11e8-8617-080027824c94)
Sep 21 09:27:51 minikube kubelet[2799]: E0921 09:27:51.197957    2799 pod_workers.go:186] Error syncing pod f599affd-bc4d-11e8-8617-080027824c94 ("storage-provisioner_kube-system(f599affd-bc4d-11e8-8617-080027824c94)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f599affd-bc4d-11e8-8617-080027824c94)"
Sep 21 09:28:03 minikube kubelet[2799]: I0921 09:28:03.192693    2799 kuberuntime_manager.go:513] Container {Name:storage-provisioner Image:gcr.io/k8s-minikube/storage-provisioner:v1.8.1 Command:[/storage-provisioner] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:tmp ReadOnly:false MountPath:/tmp SubPath: MountPropagation:<nil>} {Name:storage-provisioner-token-cd9hj ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Sep 21 09:28:03 minikube kubelet[2799]: I0921 09:28:03.192773    2799 kuberuntime_manager.go:757] checking backoff for container "storage-provisioner" in pod "storage-provisioner_kube-system(f599affd-bc4d-11e8-8617-080027824c94)"
Sep 21 09:28:03 minikube kubelet[2799]: I0921 09:28:03.192819    2799 kuberuntime_manager.go:767] Back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f599affd-bc4d-11e8-8617-080027824c94)
Sep 21 09:28:03 minikube kubelet[2799]: E0921 09:28:03.192837    2799 pod_workers.go:186] Error syncing pod f599affd-bc4d-11e8-8617-080027824c94 ("storage-provisioner_kube-system(f599affd-bc4d-11e8-8617-080027824c94)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f599affd-bc4d-11e8-8617-080027824c94)"
Sep 21 09:28:13 minikube kubelet[2799]: I0921 09:28:13.234623    2799 kuberuntime_manager.go:513] Container {Name:kubernetes-dashboard Image:k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.1 Command:[] Args:[] WorkingDir: Ports:[{Name: HostPort:0 ContainerPort:9090 Protocol:TCP HostIP:}] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:default-token-xjkss ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:9090,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:30,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Sep 21 09:28:13 minikube kubelet[2799]: I0921 09:28:13.234881    2799 kuberuntime_manager.go:757] checking backoff for container "kubernetes-dashboard" in pod "kubernetes-dashboard-5498ccf677-g4hqx_kube-system(f59107f4-bc4d-11e8-8617-080027824c94)"
Sep 21 09:28:13 minikube kubelet[2799]: I0921 09:28:13.235018    2799 kuberuntime_manager.go:767] Back-off 1m20s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-5498ccf677-g4hqx_kube-system(f59107f4-bc4d-11e8-8617-080027824c94)
Sep 21 09:28:13 minikube kubelet[2799]: E0921 09:28:13.235056    2799 pod_workers.go:186] Error syncing pod f59107f4-bc4d-11e8-8617-080027824c94 ("kubernetes-dashboard-5498ccf677-g4hqx_kube-system(f59107f4-bc4d-11e8-8617-080027824c94)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-5498ccf677-g4hqx_kube-system(f59107f4-bc4d-11e8-8617-080027824c94)"
Sep 21 09:28:15 minikube kubelet[2799]: I0921 09:28:15.622096    2799 kuberuntime_manager.go:513] Container {Name:kubernetes-dashboard Image:k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.1 Command:[] Args:[] WorkingDir: Ports:[{Name: HostPort:0 ContainerPort:9090 Protocol:TCP HostIP:}] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:default-token-xjkss ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:9090,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:30,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Sep 21 09:28:15 minikube kubelet[2799]: I0921 09:28:15.623159    2799 kuberuntime_manager.go:757] checking backoff for container "kubernetes-dashboard" in pod "kubernetes-dashboard-5498ccf677-g4hqx_kube-system(f59107f4-bc4d-11e8-8617-080027824c94)"
Sep 21 09:28:15 minikube kubelet[2799]: I0921 09:28:15.623255    2799 kuberuntime_manager.go:767] Back-off 1m20s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-5498ccf677-g4hqx_kube-system(f59107f4-bc4d-11e8-8617-080027824c94)
Sep 21 09:28:15 minikube kubelet[2799]: E0921 09:28:15.623278    2799 pod_workers.go:186] Error syncing pod f59107f4-bc4d-11e8-8617-080027824c94 ("kubernetes-dashboard-5498ccf677-g4hqx_kube-system(f59107f4-bc4d-11e8-8617-080027824c94)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-5498ccf677-g4hqx_kube-system(f59107f4-bc4d-11e8-8617-080027824c94)"
Sep 21 09:28:17 minikube kubelet[2799]: I0921 09:28:17.192704    2799 kuberuntime_manager.go:513] Container {Name:storage-provisioner Image:gcr.io/k8s-minikube/storage-provisioner:v1.8.1 Command:[/storage-provisioner] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:tmp ReadOnly:false MountPath:/tmp SubPath: MountPropagation:<nil>} {Name:storage-provisioner-token-cd9hj ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Sep 21 09:28:17 minikube kubelet[2799]: I0921 09:28:17.193358    2799 kuberuntime_manager.go:757] checking backoff for container "storage-provisioner" in pod "storage-provisioner_kube-system(f599affd-bc4d-11e8-8617-080027824c94)"
Sep 21 09:28:17 minikube kubelet[2799]: I0921 09:28:17.193466    2799 kuberuntime_manager.go:767] Back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f599affd-bc4d-11e8-8617-080027824c94)
Sep 21 09:28:17 minikube kubelet[2799]: E0921 09:28:17.193505    2799 pod_workers.go:186] Error syncing pod f599affd-bc4d-11e8-8617-080027824c94 ("storage-provisioner_kube-system(f599affd-bc4d-11e8-8617-080027824c94)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f599affd-bc4d-11e8-8617-080027824c94)"
Sep 21 09:28:27 through 09:32:34: the same cycle repeats for both containers every 10-15 seconds (identical "is dead, but RestartPolicy says that we should restart it", "checking backoff", "Back-off ... restarting failed container", and "Error syncing pod ... with CrashLoopBackOff" entries, with the same container specs as above; the repeated lines are elided here). The back-off window grows from 1m20s to 2m40s (storage-provisioner at 09:29:03, kubernetes-dashboard at 09:30:04) and reaches the 5m0s cap for storage-provisioner at 09:32:25.
Sep 21 09:32:34 minikube kubelet[2799]: I0921 09:32:34.192502    2799 kuberuntime_manager.go:757] checking backoff for container "kubernetes-dashboard" in pod "kubernetes-dashboard-5498ccf677-g4hqx_kube-system(f59107f4-bc4d-11e8-8617-080027824c94)"
Sep 21 09:32:34 minikube kubelet[2799]: I0921 09:32:34.192666    2799 kuberuntime_manager.go:767] Back-off 2m40s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-5498ccf677-g4hqx_kube-system(f59107f4-bc4d-11e8-8617-080027824c94)
Sep 21 09:32:34 minikube kubelet[2799]: E0921 09:32:34.192705    2799 pod_workers.go:186] Error syncing pod f59107f4-bc4d-11e8-8617-080027824c94 ("kubernetes-dashboard-5498ccf677-g4hqx_kube-system(f59107f4-bc4d-11e8-8617-080027824c94)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-5498ccf677-g4hqx_kube-system(f59107f4-bc4d-11e8-8617-080027824c94)"
Sep 21 09:32:39 minikube kubelet[2799]: I0921 09:32:39.192355    2799 kuberuntime_manager.go:513] Container {Name:storage-provisioner Image:gcr.io/k8s-minikube/storage-provisioner:v1.8.1 Command:[/storage-provisioner] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:tmp ReadOnly:false MountPath:/tmp SubPath: MountPropagation:<nil>} {Name:storage-provisioner-token-cd9hj ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Sep 21 09:32:39 minikube kubelet[2799]: I0921 09:32:39.192521    2799 kuberuntime_manager.go:757] checking backoff for container "storage-provisioner" in pod "storage-provisioner_kube-system(f599affd-bc4d-11e8-8617-080027824c94)"
Sep 21 09:32:39 minikube kubelet[2799]: I0921 09:32:39.192571    2799 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f599affd-bc4d-11e8-8617-080027824c94)
Sep 21 09:32:39 minikube kubelet[2799]: E0921 09:32:39.192589    2799 pod_workers.go:186] Error syncing pod f599affd-bc4d-11e8-8617-080027824c94 ("storage-provisioner_kube-system(f599affd-bc4d-11e8-8617-080027824c94)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f599affd-bc4d-11e8-8617-080027824c94)"
Sep 21 09:32:47 minikube kubelet[2799]: I0921 09:32:47.192496    2799 kuberuntime_manager.go:513] Container {Name:kubernetes-dashboard Image:k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.1 Command:[] Args:[] WorkingDir: Ports:[{Name: HostPort:0 ContainerPort:9090 Protocol:TCP HostIP:}] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:default-token-xjkss ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:9090,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:30,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Sep 21 09:32:47 minikube kubelet[2799]: I0921 09:32:47.193464    2799 kuberuntime_manager.go:757] checking backoff for container "kubernetes-dashboard" in pod "kubernetes-dashboard-5498ccf677-g4hqx_kube-system(f59107f4-bc4d-11e8-8617-080027824c94)"
Sep 21 09:32:54 minikube kubelet[2799]: I0921 09:32:54.195401    2799 kuberuntime_manager.go:513] Container {Name:storage-provisioner Image:gcr.io/k8s-minikube/storage-provisioner:v1.8.1 Command:[/storage-provisioner] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:tmp ReadOnly:false MountPath:/tmp SubPath: MountPropagation:<nil>} {Name:storage-provisioner-token-cd9hj ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Sep 21 09:32:54 minikube kubelet[2799]: I0921 09:32:54.196335    2799 kuberuntime_manager.go:757] checking backoff for container "storage-provisioner" in pod "storage-provisioner_kube-system(f599affd-bc4d-11e8-8617-080027824c94)"
Sep 21 09:32:54 minikube kubelet[2799]: I0921 09:32:54.196404    2799 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f599affd-bc4d-11e8-8617-080027824c94)
Sep 21 09:32:54 minikube kubelet[2799]: E0921 09:32:54.196422    2799 pod_workers.go:186] Error syncing pod f599affd-bc4d-11e8-8617-080027824c94 ("storage-provisioner_kube-system(f599affd-bc4d-11e8-8617-080027824c94)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f599affd-bc4d-11e8-8617-080027824c94)"
Sep 21 09:33:06 minikube kubelet[2799]: I0921 09:33:06.197908    2799 kuberuntime_manager.go:513] Container {Name:storage-provisioner Image:gcr.io/k8s-minikube/storage-provisioner:v1.8.1 Command:[/storage-provisioner] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:tmp ReadOnly:false MountPath:/tmp SubPath: MountPropagation:<nil>} {Name:storage-provisioner-token-cd9hj ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Sep 21 09:33:06 minikube kubelet[2799]: I0921 09:33:06.198049    2799 kuberuntime_manager.go:757] checking backoff for container "storage-provisioner" in pod "storage-provisioner_kube-system(f599affd-bc4d-11e8-8617-080027824c94)"
Sep 21 09:33:06 minikube kubelet[2799]: I0921 09:33:06.198147    2799 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f599affd-bc4d-11e8-8617-080027824c94)
Sep 21 09:33:06 minikube kubelet[2799]: E0921 09:33:06.198186    2799 pod_workers.go:186] Error syncing pod f599affd-bc4d-11e8-8617-080027824c94 ("storage-provisioner_kube-system(f599affd-bc4d-11e8-8617-080027824c94)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f599affd-bc4d-11e8-8617-080027824c94)"

@gbraad
Contributor

gbraad commented Sep 21, 2018

I even went so far as to remove the minikube binaries and the ~/.minikube directory and install from scratch. Same problem.

This confirms that this is a real problem and not something caused by stale state left behind.
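
For anyone else attempting the same full reset, a minimal sketch, assuming the default profile and that ~/.kube/config holds no contexts for other clusters:

# Tear down the VM and the cluster state minikube manages
minikube delete

# Remove cached ISOs, machine configs, and certificates
rm -rf ~/.minikube

# Remove the kubectl config minikube wrote (skip this if it
# also holds contexts for other clusters)
rm -rf ~/.kube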

@gqcn

gqcn commented Sep 28, 2018

Sadly, I have the same issue and don't know how to get past this...

@ahopkins

@John-cn Sadly, I have not progressed any further 😖

@gavinB-orange

gavinB-orange commented Sep 28, 2018

Same issue on Ubuntu 18.04.1 LTS with the kvm2 driver.

0.25.2 works fine; 0.29 does not.

@kkimdev

kkimdev commented Sep 29, 2018

Windows 10 WSL. Confirmed 0.25.2 works but 0.29 doesn't on my machine.

@eschultz-magix

Unfortunately, same issue here on Windows 10 with the VirtualBox driver and Minikube 0.30.0.

@raghur

raghur commented Oct 17, 2018

Chiming in again, this time for v0.30:

C:\kube\client\bin>minikube -p 0.30 start
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
E1017 13:03:53.393272    5160 start.go:302] Error restarting cluster:  restarting kube-proxy: waiting for kube-proxy to be up for configmap update: timed out waiting for the condition

Env: Windows 8.1 x64, VirtualBox 5.2.12.

Only 0.25.2 works. At this stage I'm ready to give up on Minikube and have stopped recommending it to colleagues :(
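
For anyone hitting the kube-proxy timeout specifically, a few commands may narrow down where it is stuck. This is only a sketch; it assumes kubectl is already pointed at the minikube context, and the kube-proxy pod name is a placeholder to be read off the pod listing:

# Did kube-proxy ever get scheduled, and what state is it in?
kubectl --context=minikube -n kube-system get pods

# Substitute the actual kube-proxy pod name printed above
kubectl --context=minikube -n kube-system logs kube-proxy-xxxxx

# Bootstrap logs collected by minikube itself
minikube logs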

@cmbernard333

cmbernard333 commented Oct 22, 2018

I am also seeing this issue.

Ubuntu 18.10 64bit
kernel: 4.18
Minikube 0.29

christian@robo-beaver ~ $ minikube start --vm-driver=kvm2 --cpus=2 --memory=4096 --disk-size=45GB --extra-config=apiserver.Authorization.Mode=RBAC
There is a newer version of minikube available (v0.30.0).  Download it here:
https://github.com/kubernetes/minikube/releases/tag/v0.30.0

To disable this notification, run the following:
minikube config set WantUpdateNotification false
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Downloading Minikube ISO
 170.18 MB / 170.18 MB [============================================] 100.00% 0s
Getting VM IP address...
Moving files into cluster...
Downloading kubeadm v1.10.0
Downloading kubelet v1.10.0
Finished Downloading kubelet v1.10.0
Finished Downloading kubeadm v1.10.0
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
E1022 12:13:27.600279    9609 start.go:297] Error starting cluster:  kubeadm init error 
sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data-minikube --ignore-preflight-errors=Port-10250 --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI  &&
sudo /usr/bin/kubeadm alpha phase addon kube-dns
 running command: : running command: 
sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data-minikube --ignore-preflight-errors=Port-10250 --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI  &&
sudo /usr/bin/kubeadm alpha phase addon kube-dns

.: Process exited with status 1
================================================================================
An error has occurred. Would you like to opt in to sending anonymized crash
information to minikube to help prevent future errors?
To opt out of these messages, run the command:
	minikube config set WantReportErrorPrompt false
================================================================================

@squillace

Same here: Ubuntu 18.04, minikube 0.30, kvm2. I'm going to try reverting to 0.25.0 and see what happens. No Kube For You!

@emansom

emansom commented Oct 30, 2018

Also experiencing this issue with Minikube 0.30, KVM2.

@lycheng

lycheng commented Nov 8, 2018

Ubuntu 18.04, minikube version: v0.30.0 with kvm2 has the same problem.

@ahopkins

ahopkins commented Nov 8, 2018

Why is this still an issue? It renders minikube useless on what is arguably an important platform. Is it not worth someone's time to fix?

I'm curious... is anyone looking into this? All the recent messages seem to be further reports of it not working. I would help out myself, but I lack the knowledge to tackle this.

@herrberk

herrberk commented Nov 8, 2018

This seems to be network/firewall related to me. At work, when I'm connected to the secure company network, I get the exact same error as above no matter how many times I try; if I switch to the insecure public network, or try at home, I do not get the error and everything succeeds. For reference, I am using minikube v0.30.0 with Kubernetes v1.12.0 on Windows 10 64-bit v1709 with VirtualBox 5.2.8. Here is my configuration to start minikube:

Make sure the .kube and .minikube folders are deleted and the VM is also removed before running this command (a cleanup sketch follows the output below):

minikube start --extra-config=apiserver.service-node-port-range=90-32000 \
    --kubernetes-version=v1.12.0 \
    --cpus 4 --memory 8192 --disk-size=50g
Starting local Kubernetes v1.12.0 cluster...
Starting VM...
Downloading Minikube ISO
 170.78 MB / 170.78 MB [============================================] 100.00% 0s
Getting VM IP address...
Moving files into cluster...
Downloading kubelet v1.12.0
Downloading kubeadm v1.12.0
Finished Downloading kubeadm v1.12.0
Finished Downloading kubelet v1.12.0
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Loading cached images from config file.
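
For reference, a sketch of that cleanup step in Windows cmd syntax (assumes default profile locations and, again, that the kubectl config holds nothing for other clusters):

rem Tear down the VM and cluster state
minikube delete

rem Remove minikube's cache and the kubectl config it wrote
rmdir /s /q %USERPROFILE%\.minikube
rmdir /s /q %USERPROFILE%\.kube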

@danifv

danifv commented Nov 12, 2018

Similarly to @herrberk, everything works fine from public networks; I only get the error behind a proxy.
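
If a proxy is indeed the culprit, one option worth trying is passing the proxy settings through to the Docker daemon inside the VM so image pulls can succeed. A minimal sketch: proxy.example.com:3128 is a placeholder for your proxy, and 192.168.99.0/24 is VirtualBox's default host-only subnet (adjust for other drivers):

minikube start \
  --docker-env HTTP_PROXY=http://proxy.example.com:3128 \
  --docker-env HTTPS_PROXY=http://proxy.example.com:3128 \
  --docker-env NO_PROXY=localhost,127.0.0.1,192.168.99.0/24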

@srivatss

I have the exact same problem as @herrberk and the exact same configuration

@shresthsuman

Same error as @herrberk.
My versions: minikube v0.30.0 with Kubernetes 1.10.0.
Command: minikube start --vm-driver="hyperv" --memory=1024 --hyperv-virtual-switch="My New Switch"

If you find a solution, please post it here.

@aponce-incomm

I was having the very same issue, running a Mint guest VM (VMware) on a Win10 host.
That VM had 2 GB of RAM and 2 CPUs.
In Mint, I ran journalctl -f while minikube was starting and noticed an error about VirtualBox not having enough memory.
I then added 4 more GB of RAM and 2 more cores, and that seemed to solve the issue. It turned out to be a lack of hardware resources.
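
A related check can be made from inside the minikube VM rather than on the host. A sketch, assuming the kubeadm bootstrapper, where the kubelet runs as a systemd unit inside the VM:

# Watch the kubelet unit while the cluster is starting
minikube ssh "sudo journalctl -u kubelet -f"

# Check memory inside the VM; very little free memory points
# to an undersized --memory setting
minikube ssh "free -m"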

@shresthsuman

> I was having the very same issue, running a Mint guest VM (VMware) on a Win10 host.
> That VM had 2 GB of RAM and 2 CPUs.
> In Mint, I ran journalctl -f while minikube was starting and noticed an error about VirtualBox not having enough memory.
> I then added 4 more GB of RAM and 2 more cores, and that seemed to solve the issue. It turned out to be a lack of hardware resources.

Hi, thanks for your reply. I tried changing my command to "minikube start --vm-driver="hyperv" --memory=8192 --cpus=4 --hyperv-virtual-switch="My New Switch" --v=7 --alsologtostderr --kubernetes-version=v1.12.0"

But I am still stuck at the same step (screenshot omitted).

Also, from the error log:
Setting up kubeconfig...
I1206 11:10:11.920239 14812 config.go:125] Using kubeconfig: C:\Users\shsu\.kube\config
Starting cluster components...
I1206 11:10:11.922237 14812 ssh_runner.go:80] Run with output:
sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data-minikube --ignore-preflight-errors=Port-10250 --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI &&
sudo /usr/bin/kubeadm alpha phase addon coredns

I am on my company network.
My versions - minikube v0.30.0 with kubernetes version 1.12.0

@aponce-incomm

@shresthsuman I'm running kubernetes in Linux Mint. I wasn't referring to the minikube hardware, but to the host hardware, Mint in my case. The thing is that I have Windows and didn't want to install a clean Mint on a hard drive, so I created a virtual machine using VMware instead. It's the hardware of that VM I was referring to.

I'm using the virtualbox driver in a Linux environment.

@viane

viane commented Dec 10, 2018

Same error on:

OS: OSX 10.14.1 Mojave
VM Driver: Virtualbox
Minikube version: v0.30.0

CMD:

sudo minikube delete && sudo minikube start \
    --extra-config=apiserver.Authorization.Mode=RBAC \
    --kubernetes-version=v1.13.0 \
    --bootstrapper=kubeadm \
    --apiserver-ips 127.0.0.1 \
    --apiserver-name localhost \
    --logtostderr \
    --vm-driver=virtualbox

Log:

E1209 20:37:14.620167   97351 start.go:297] Error starting cluster:  kubeadm init error
sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data-minikube --ignore-preflight-errors=Port-10250 --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI  &&
sudo /usr/bin/kubeadm alpha phase addon coredns
 running command: : running command:
sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data-minikube --ignore-preflight-errors=Port-10250 --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI  &&
sudo /usr/bin/kubeadm alpha phase addon coredns

.: Process exited with status 1

@srinivasrk

srinivasrk commented Dec 12, 2018

Similar error

OS: Ubuntu 18.04

$ sudo minikube start --extra-config kubelet.EnableCustomMetrics=true
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
E1212 19:32:37.780929    3657 start.go:297] Error starting cluster:  kubeadm init error
sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data-minikube --ignore-preflight-errors=Port-10250 --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI  &&
sudo /usr/bin/kubeadm alpha phase addon kube-dns
 running command: : running command:
sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data-minikube --ignore-preflight-errors=Port-10250 --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI  &&
sudo /usr/bin/kubeadm alpha phase addon kube-dns

.: Process exited with status 1
================================================================================
An error has occurred. Would you like to opt in to sending anonymized crash
information to minikube to help prevent future errors?
To opt out of these messages, run the command:
        minikube config set WantReportErrorPrompt false
================================================================================
Please enter your response [Y/n]: n

@tyranron

Same issue here (macOS Sierra 10.12.6):

+ minikube start --bootstrapper=kubeadm --kubernetes-version=v1.13.2 --vm-driver=virtualbox --disk-size=10g
Starting local Kubernetes v1.13.2 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Downloading kubelet v1.13.2
Downloading kubeadm v1.13.2
Finished Downloading kubeadm v1.13.2
Finished Downloading kubelet v1.13.2
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
E0115 11:10:08.488050   32769 start.go:297] Error starting cluster:  kubeadm init error 
sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data-minikube --ignore-preflight-errors=Port-10250 --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI  &&
sudo /usr/bin/kubeadm alpha phase addon kube-dns
 running command: : running command: 
sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data-minikube --ignore-preflight-errors=Port-10250 --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI  &&
sudo /usr/bin/kubeadm alpha phase addon kube-dns

.: Process exited with status 1

@tstromberg tstromberg removed the top5 label Jan 22, 2019
@tstromberg
Contributor

With minikube v0.33, kubeadm errors are now presented directly on the console, removing the mystery and resolving this bug. If you run into any further kubeadm issues, please open a new bug. Thanks!
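
For anyone still on an older release, the underlying error can usually be surfaced by raising minikube's log verbosity (these are the standard glog flags that several traces above already use); the exact failing kubeadm command is also printed in the error, so it can be re-run by hand via minikube ssh to see kubeadm's own output:

# Stream minikube's debug logs while the cluster starts
minikube start --alsologtostderr -v=8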
