stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0415 08:57:45.471017 23012 exec_runner.go:49] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.13:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap"
I0415 09:01:49.882697 23012 exec_runner.go:78] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.13:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap": (4m4.41134862s)
W0415 09:01:49.884224 23012 out.go:146] 💢 initialization failed, will try again: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.13:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap": exit status 1
stdout:
[init] Using Kubernetes version: v1.18.13
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [host-11-1-1-131 localhost] and IPs [11.1.1.131 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [host-11-1-1-131 localhost] and IPs [11.1.1.131 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
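
The commands kubeadm suggests above can be run as-is on the minikube host. A minimal triage sketch, assuming a systemd host with the Docker runtime (CONTAINERID is a placeholder, exactly as in the log):

# Is the kubelet alive, and why did it last exit?
systemctl status kubelet
sudo journalctl -xeu kubelet | tail -n 100

# List all Kubernetes containers, including exited ones
docker ps -a | grep kube | grep -v pause

# Inspect a failing container's logs (substitute the real ID)
docker logs CONTAINERID
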
stderr:
W0415 08:57:45.523765 23193 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING FileExisting-ebtables]: ebtables not found in system path
[WARNING FileExisting-ethtool]: ethtool not found in system path
[WARNING FileExisting-socat]: socat not found in system path
W0415 08:57:49.344925 23193 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0415 08:57:49.345970 23193 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
💢 initialization failed, will try again: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.13:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap": exit status 1
stdout:
[init] Using Kubernetes version: v1.18.13
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [host-11-1-1-131 localhost] and IPs [11.1.1.131 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [host-11-1-1-131 localhost] and IPs [11.1.1.131 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0415 08:57:45.523765 23193 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING FileExisting-ebtables]: ebtables not found in system path
[WARNING FileExisting-ethtool]: ethtool not found in system path
[WARNING FileExisting-socat]: socat not found in system path
W0415 08:57:49.344925 23193 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0415 08:57:49.345970 23193 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0415 09:01:53.176100 23012 exec_runner.go:49] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.13:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap"
I0415 09:05:55.890345 23012 exec_runner.go:78] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.13:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap": (4m2.714133155s)
I0415 09:05:55.890664 23012 kubeadm.go:326] StartCluster complete in 8m10.589439357s
I0415 09:05:55.891053 23012 exec_runner.go:49] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0415 09:05:55.957360 23012 logs.go:206] 1 containers: [eb97b0f35907]
I0415 09:05:55.957499 23012 exec_runner.go:49] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0415 09:05:56.018273 23012 logs.go:206] 1 containers: [fdaa9dd61930]
I0415 09:05:56.018397 23012 exec_runner.go:49] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0415 09:05:56.080144 23012 logs.go:206] 0 containers: []
W0415 09:05:56.080186 23012 logs.go:208] No container was found matching "coredns"
I0415 09:05:56.080326 23012 exec_runner.go:49] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0415 09:05:56.136665 23012 logs.go:206] 1 containers: [103c95ab9d83]
I0415 09:05:56.136763 23012 exec_runner.go:49] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0415 09:05:56.181203 23012 logs.go:206] 0 containers: []
W0415 09:05:56.181249 23012 logs.go:208] No container was found matching "kube-proxy"
I0415 09:05:56.181326 23012 exec_runner.go:49] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0415 09:05:56.227708 23012 logs.go:206] 0 containers: []
W0415 09:05:56.227740 23012 logs.go:208] No container was found matching "kubernetes-dashboard"
I0415 09:05:56.227797 23012 exec_runner.go:49] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0415 09:05:56.285284 23012 logs.go:206] 0 containers: []
W0415 09:05:56.285319 23012 logs.go:208] No container was found matching "storage-provisioner"
I0415 09:05:56.285372 23012 exec_runner.go:49] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0415 09:05:56.341446 23012 logs.go:206] 1 containers: [3a93b14948dd]
I0415 09:05:56.341518 23012 logs.go:120] Gathering logs for dmesg ...
I0415 09:05:56.341548 23012 exec_runner.go:49] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0415 09:05:56.352432 23012 logs.go:120] Gathering logs for describe nodes ...
I0415 09:05:56.352486 23012 exec_runner.go:49] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.13/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0415 09:05:56.601780 23012 logs.go:120] Gathering logs for kube-apiserver [eb97b0f35907] ...
I0415 09:05:56.601885 23012 exec_runner.go:49] Run: /bin/bash -c "docker logs --tail 400 eb97b0f35907"
I0415 09:05:56.664727 23012 logs.go:120] Gathering logs for etcd [fdaa9dd61930] ...
I0415 09:05:56.664767 23012 exec_runner.go:49] Run: /bin/bash -c "docker logs --tail 400 fdaa9dd61930"
I0415 09:05:56.714681 23012 logs.go:120] Gathering logs for kube-scheduler [103c95ab9d83] ...
I0415 09:05:56.714741 23012 exec_runner.go:49] Run: /bin/bash -c "docker logs --tail 400 103c95ab9d83"
I0415 09:05:56.778335 23012 logs.go:120] Gathering logs for Docker ...
I0415 09:05:56.778381 23012 exec_runner.go:49] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0415 09:05:56.861958 23012 logs.go:120] Gathering logs for kubelet ...
I0415 09:05:56.862001 23012 exec_runner.go:49] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0415 09:05:56.902703 23012 logs.go:135] Found kubelet problem: Apr 15 08:58:19 host-11-1-1-131 kubelet[23794]: E0415 08:58:19.957725 23794 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
W0415 09:05:56.902977 23012 logs.go:135] Found kubelet problem: Apr 15 08:58:20 host-11-1-1-131 kubelet[23794]: E0415 08:58:20.968952 23794 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
W0415 09:05:56.903446 23012 logs.go:135] Found kubelet problem: Apr 15 08:58:48 host-11-1-1-131 kubelet[23794]: E0415 08:58:48.220450 23794 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
W0415 09:05:56.903689 23012 logs.go:135] Found kubelet problem: Apr 15 08:58:50 host-11-1-1-131 kubelet[23794]: E0415 08:58:50.775939 23794 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
W0415 09:05:56.903932 23012 logs.go:135] Found kubelet problem: Apr 15 08:59:01 host-11-1-1-131 kubelet[23794]: E0415 08:59:01.701709 23794 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
W0415 09:05:56.904386 23012 logs.go:135] Found kubelet problem: Apr 15 08:59:28 host-11-1-1-131 kubelet[23794]: E0415 08:59:28.499973 23794 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
W0415 09:05:56.904630 23012 logs.go:135] Found kubelet problem: Apr 15 08:59:30 host-11-1-1-131 kubelet[23794]: E0415 08:59:30.778693 23794 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
W0415 09:05:56.904880 23012 logs.go:135] Found kubelet problem: Apr 15 08:59:45 host-11-1-1-131 kubelet[23794]: E0415 08:59:45.701931 23794 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
W0415 09:05:56.905120 23012 logs.go:135] Found kubelet problem: Apr 15 08:59:59 host-11-1-1-131 kubelet[23794]: E0415 08:59:59.701496 23794 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
W0415 09:05:56.905858 23012 logs.go:135] Found kubelet problem: Apr 15 09:00:23 host-11-1-1-131 kubelet[23794]: E0415 09:00:23.878684 23794 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
W0415 09:05:56.906093 23012 logs.go:135] Found kubelet problem: Apr 15 09:00:30 host-11-1-1-131 kubelet[23794]: E0415 09:00:30.777131 23794 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
W0415 09:05:56.906342 23012 logs.go:135] Found kubelet problem: Apr 15 09:00:41 host-11-1-1-131 kubelet[23794]: E0415 09:00:41.701455 23794 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
W0415 09:05:56.906600 23012 logs.go:135] Found kubelet problem: Apr 15 09:00:53 host-11-1-1-131 kubelet[23794]: E0415 09:00:53.701338 23794 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
W0415 09:05:56.906840 23012 logs.go:135] Found kubelet problem: Apr 15 09:01:05 host-11-1-1-131 kubelet[23794]: E0415 09:01:05.702558 23794 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
W0415 09:05:56.907077 23012 logs.go:135] Found kubelet problem: Apr 15 09:01:18 host-11-1-1-131 kubelet[23794]: E0415 09:01:18.702656 23794 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
W0415 09:05:56.907323 23012 logs.go:135] Found kubelet problem: Apr 15 09:01:33 host-11-1-1-131 kubelet[23794]: E0415 09:01:33.702241 23794 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
W0415 09:05:56.926111 23012 logs.go:135] Found kubelet problem: Apr 15 09:02:23 host-11-1-1-131 kubelet[27411]: E0415 09:02:23.730374 27411 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
W0415 09:05:56.926360 23012 logs.go:135] Found kubelet problem: Apr 15 09:02:25 host-11-1-1-131 kubelet[27411]: E0415 09:02:25.073294 27411 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
W0415 09:05:56.926777 23012 logs.go:135] Found kubelet problem: Apr 15 09:02:49 host-11-1-1-131 kubelet[27411]: E0415 09:02:49.929526 27411 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
W0415 09:05:56.927004 23012 logs.go:135] Found kubelet problem: Apr 15 09:02:55 host-11-1-1-131 kubelet[27411]: E0415 09:02:55.073399 27411 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
W0415 09:05:56.927243 23012 logs.go:135] Found kubelet problem: Apr 15 09:03:07 host-11-1-1-131 kubelet[27411]: E0415 09:03:07.469672 27411 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
W0415 09:05:56.927655 23012 logs.go:135] Found kubelet problem: Apr 15 09:03:34 host-11-1-1-131 kubelet[27411]: E0415 09:03:34.230118 27411 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
W0415 09:05:56.927887 23012 logs.go:135] Found kubelet problem: Apr 15 09:03:35 host-11-1-1-131 kubelet[27411]: E0415 09:03:35.244415 27411 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
W0415 09:05:56.928123 23012 logs.go:135] Found kubelet problem: Apr 15 09:03:46 host-11-1-1-131 kubelet[27411]: E0415 09:03:46.468153 27411 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
W0415 09:05:56.928353 23012 logs.go:135] Found kubelet problem: Apr 15 09:03:59 host-11-1-1-131 kubelet[27411]: E0415 09:03:59.469525 27411 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
W0415 09:05:56.928581 23012 logs.go:135] Found kubelet problem: Apr 15 09:04:10 host-11-1-1-131 kubelet[27411]: E0415 09:04:10.468197 27411 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
W0415 09:05:56.929420 23012 logs.go:135] Found kubelet problem: Apr 15 09:04:34 host-11-1-1-131 kubelet[27411]: E0415 09:04:34.636909 27411 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
W0415 09:05:56.929645 23012 logs.go:135] Found kubelet problem: Apr 15 09:04:35 host-11-1-1-131 kubelet[27411]: E0415 09:04:35.650545 27411 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
W0415 09:05:56.929871 23012 logs.go:135] Found kubelet problem: Apr 15 09:04:49 host-11-1-1-131 kubelet[27411]: E0415 09:04:49.468657 27411 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
W0415 09:05:56.930098 23012 logs.go:135] Found kubelet problem: Apr 15 09:05:02 host-11-1-1-131 kubelet[27411]: E0415 09:05:02.468905 27411 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
W0415 09:05:56.930339 23012 logs.go:135] Found kubelet problem: Apr 15 09:05:16 host-11-1-1-131 kubelet[27411]: E0415 09:05:16.468480 27411 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
W0415 09:05:56.930564 23012 logs.go:135] Found kubelet problem: Apr 15 09:05:31 host-11-1-1-131 kubelet[27411]: E0415 09:05:31.468463 27411 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
W0415 09:05:56.930816 23012 logs.go:135] Found kubelet problem: Apr 15 09:05:46 host-11-1-1-131 kubelet[27411]: E0415 09:05:46.468566 27411 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
I0415 09:05:56.930834 23012 logs.go:120] Gathering logs for kube-controller-manager [3a93b14948dd] ...
I0415 09:05:56.930856 23012 exec_runner.go:49] Run: /bin/bash -c "docker logs --tail 400 3a93b14948dd"
I0415 09:05:56.995067 23012 logs.go:120] Gathering logs for container status ...
I0415 09:05:56.995174 23012 exec_runner.go:49] Run: /bin/bash -c "sudo which crictl || echo crictl ps -a || sudo docker ps -a"
W0415 09:05:57.081367 23012 out.go:258] Error starting cluster: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.13:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap": exit status 1
stdout:
[init] Using Kubernetes version: v1.18.13
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0415 09:01:53.248306 27089 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING FileExisting-ebtables]: ebtables not found in system path
[WARNING FileExisting-ethtool]: ethtool not found in system path
[WARNING FileExisting-socat]: socat not found in system path
W0415 09:01:55.354627 27089 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0415 09:01:55.355654 27089 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0415 09:05:57.081561 23012 out.go:146]
W0415 09:05:57.081853 23012 out.go:146] 💣 Error starting cluster: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.13:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap": exit status 1
stdout:
[init] Using Kubernetes version: v1.18.13
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0415 09:01:53.248306 27089 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING FileExisting-ebtables]: ebtables not found in system path
[WARNING FileExisting-ethtool]: ethtool not found in system path
[WARNING FileExisting-socat]: socat not found in system path
W0415 09:01:55.354627 27089 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0415 09:01:55.355654 27089 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
💣 Error starting cluster: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.13:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap": exit status 1
stdout:
[init] Using Kubernetes version: v1.18.13
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0415 09:01:53.248306 27089 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING FileExisting-ebtables]: ebtables not found in system path
[WARNING FileExisting-ethtool]: ethtool not found in system path
[WARNING FileExisting-socat]: socat not found in system path
W0415 09:01:55.354627 27089 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0415 09:01:55.355654 27089 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0415 09:05:57.082015 23012 out.go:146]
W0415 09:05:57.082057 23012 out.go:146] 😿 minikube is exiting due to an error. If the above message is not useful, open an issue:
😿 minikube is exiting due to an error. If the above message is not useful, open an issue:
W0415 09:05:57.082095 23012 out.go:146] 👉 https://github.com/kubernetes/minikube/issues/new/choose
👉 https://github.com/kubernetes/minikube/issues/new/choose
I0415 09:05:57.102944 23012 out.go:110] ❌ Problems detected in kubelet:
❌ Problems detected in kubelet:
I0415 09:05:57.104614 23012 out.go:110] Apr 15 08:58:19 host-11-1-1-131 kubelet[23794]: E0415 08:58:19.957725 23794 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
Apr 15 08:58:19 host-11-1-1-131 kubelet[23794]: E0415 08:58:19.957725 23794 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
I0415 09:05:57.106793 23012 out.go:110] Apr 15 08:58:20 host-11-1-1-131 kubelet[23794]: E0415 08:58:20.968952 23794 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
Apr 15 08:58:20 host-11-1-1-131 kubelet[23794]: E0415 08:58:20.968952 23794 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
I0415 09:05:57.108314 23012 out.go:110] Apr 15 08:58:48 host-11-1-1-131 kubelet[23794]: E0415 08:58:48.220450 23794 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
Apr 15 08:58:48 host-11-1-1-131 kubelet[23794]: E0415 08:58:48.220450 23794 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
I0415 09:05:57.112633 23012 out.go:110]
W0415 09:05:57.112984 23012 out.go:146] ❌ Exiting due to K8S_KUBELET_NOT_RUNNING: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.13:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap": exit status 1
stdout:
[init] Using Kubernetes version: v1.18.13
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0415 09:01:53.248306 27089 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING FileExisting-ebtables]: ebtables not found in system path
[WARNING FileExisting-ethtool]: ethtool not found in system path
[WARNING FileExisting-socat]: socat not found in system path
W0415 09:01:55.354627 27089 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0415 09:01:55.355654 27089 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
❌ Exiting due to K8S_KUBELET_NOT_RUNNING: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.13:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap": exit status 1
stdout:
[init] Using Kubernetes version: v1.18.13
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0415 09:01:53.248306 27089 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING FileExisting-ebtables]: ebtables not found in system path
[WARNING FileExisting-ethtool]: ethtool not found in system path
[WARNING FileExisting-socat]: socat not found in system path
W0415 09:01:55.354627 27089 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0415 09:01:55.355654 27089 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0415 09:05:57.113923 23012 out.go:146] 💡 Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
💡 Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W0415 09:05:57.114057 23012 out.go:146] 🍿 Related issue: #4172
🍿 Related issue: #4172
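Note that the command line in this report already passes --extra-config=kubelet.cgroup-driver=systemd, so the more likely culprit is a mismatch with the cgroup driver the Docker daemon itself uses (minikube queries it with `docker info --format {{.CgroupDriver}}` earlier in this log). A minimal check-and-fix sketch, assuming Docker is the runtime; the daemon.json write below overwrites any existing file, so merge by hand if you already have one:

# Kubelet is being started with --cgroup-driver=systemd; Docker must agree.
docker info --format '{{.CgroupDriver}}'

# If this prints "cgroupfs", one standard remedy (from the Kubernetes
# container-runtime setup docs) is to switch Docker to the systemd
# driver, restart it, and rebuild the cluster from scratch:
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl restart docker
sudo minikube delete
sudo minikube start --vm-driver=none --kubernetes-version v1.18.13 --extra-config=kubelet.cgroup-driver=systemd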
==> container status <==
sudo: crictl: command not found
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
fce37d85888b a0f70a7cf739 "kube-controller-man…" 45 seconds ago Exited (255) 32 seconds ago k8s_kube-controller-manager_kube-controller-manager-host-11-1-1-131_kube-system_fee973aa24e6d51c26e210ab99143c53_192
eb97b0f35907 8836b0d760bf "kube-apiserver --ad…" 17 hours ago Up 17 hours k8s_kube-apiserver_kube-apiserver-host-11-1-1-131_kube-system_36fcc5100cf08c6511a396460b1517df_0
fdaa9dd61930 303ce5db0e90 "etcd --advertise-cl…" 17 hours ago Up 17 hours k8s_etcd_etcd-host-11-1-1-131_kube-system_13c9eb656f9d7ef837d20a1548070b92_0
103c95ab9d83 ef5be715de1b "kube-scheduler --au…" 17 hours ago Up 17 hours k8s_kube-scheduler_kube-scheduler-host-11-1-1-131_kube-system_b5039a93231442166cf93bb19d0a590b_0
124b198685e3 k8s.gcr.io/pause:3.2 "/pause" 17 hours ago Up 17 hours k8s_POD_kube-controller-manager-host-11-1-1-131_kube-system_fee973aa24e6d51c26e210ab99143c53_0
954559d8dcb5 k8s.gcr.io/pause:3.2 "/pause" 17 hours ago Up 17 hours k8s_POD_kube-apiserver-host-11-1-1-131_kube-system_36fcc5100cf08c6511a396460b1517df_0
713058bd0863 k8s.gcr.io/pause:3.2 "/pause" 17 hours ago Up 17 hours k8s_POD_etcd-host-11-1-1-131_kube-system_13c9eb656f9d7ef837d20a1548070b92_0
46ab68bfbe2e k8s.gcr.io/pause:3.2 "/pause" 17 hours ago Up 17 hours k8s_POD_kube-scheduler-host-11-1-1-131_kube-system_b5039a93231442166cf93bb19d0a590b_0
66eeb122b45b bbn-fn-ams-docker-local.artifactory-blr1.int.net.nokia.com/snmpmanager:9.7.07-436653 "/bin/sh -c /bin/sta…" 2 weeks ago Exited (137) 13 days ago amsapp
f09993d6f14c mariadb "docker-entrypoint.s…" 2 weeks ago Exited (137) 13 days ago amsdb
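Since crictl is not installed here (note the "sudo: crictl: command not found" line above), the Docker CLI is the fallback, exactly as the kubeadm error text suggests. The listing already points at the failing piece: kube-controller-manager is the only control-plane container in Exited (255), and the _192 suffix on its name suggests the kubelet has already restarted it many times. A short sketch for pulling its logs, using the container ID from the table above:

# List Kubernetes containers, excluding the pause sandboxes:
docker ps -a | grep kube | grep -v pause

# Inspect the repeatedly-exiting controller-manager; its last lines
# normally name the fatal error behind the crash loop:
docker logs --tail 50 fce37d85888b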
Hi @nvanhon, we haven't heard back from you, do you still have this issue?
There isn't enough information in this issue to make it actionable, and enough time has passed that it is likely difficult to reproduce.
I will close this issue for now but feel free to reopen when you feel ready to provide more information.
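For anyone landing on this thread with the same symptoms: the preflight output above also warns repeatedly that ebtables, ethtool and socat are missing from the system path. On Ubuntu 18.04 (the platform in this report) the package names match the binary names, so a quick fix before retrying is likely just:

sudo apt-get update
sudo apt-get install -y ebtables ethtool socat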
Steps to reproduce the issue:
Full output of failed command:
sudo minikube start --vm-driver=none --extra-config=kubelet.serialize-image-pulls=false --extra-config=kubelet.cgroup-driver=systemd --kubernetes-version v1.18.13 --v=5 --alsologtostderr
Full output of minikube start command used, if not already included:
root@host-11-1-1-131:/etc/kubernetes# sudo minikube start --vm-driver=none --extra-config=kubelet.serialize-image-pulls=false --extra-config=kubelet.cgroup-driver=systemd --kubernetes-version v1.18.13 --v=5 --alsologtostderr
I0415 08:57:43.262918 23012 out.go:192] Setting JSON to false
I0415 08:57:43.264675 23012 start.go:103] hostinfo: {"hostname":"host-11-1-1-131","uptime":879181,"bootTime":1617597882,"procs":322,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"18.04","kernelVersion":"4.15.0-48-generic","virtualizationSystem":"","virtualizationRole":"","hostid":"d3d37b5c-ccb0-4122-8d46-c68414f952af"}
I0415 08:57:43.265163 23012 start.go:113] virtualization:
I0415 08:57:43.306627 23012 out.go:110] 😄 minikube v1.14.2 on Ubuntu 18.04
😄 minikube v1.14.2 on Ubuntu 18.04
I0415 08:57:43.306893 23012 driver.go:288] Setting default libvirt URI to qemu:///system
I0415 08:57:43.331162 23012 out.go:110] ✨ Using the none driver based on user configuration
✨ Using the none driver based on user configuration
I0415 08:57:43.331228 23012 start.go:272] selected driver: none
I0415 08:57:43.331253 23012 start.go:680] validating driver "none" against
I0415 08:57:43.331291 23012 start.go:691] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Fix: Doc:}
I0415 08:57:43.331363 23012 start.go:1143] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
I0415 08:57:43.331531 23012 start_flags.go:228] no existing cluster config was found, will generate one from the flags
I0415 08:57:43.332411 23012 start_flags.go:246] Using suggested 4000MB memory alloc based on sys=16039MB, container=0MB
I0415 08:57:43.332607 23012 start_flags.go:631] Wait components to verify : map[apiserver:true system_pods:true]
I0415 08:57:43.332650 23012 cni.go:74] Creating CNI manager for ""
I0415 08:57:43.332672 23012 cni.go:117] CNI unnecessary in this configuration, recommending no CNI
I0415 08:57:43.332693 23012 start_flags.go:358] config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.13@sha256:4d43acbd0050148d4bc399931f1b15253b5e73815b63a67b8ab4a5c9e523403f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.13 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[{Component:kubelet Key:serialize-image-pulls Value:false} {Component:kubelet Key:cgroup-driver Value:systemd} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ExposedPorts:[]}
I0415 08:57:43.353568 23012 out.go:110] 👍 Starting control plane node minikube in cluster minikube
👍 Starting control plane node minikube in cluster minikube
I0415 08:57:43.354142 23012 profile.go:150] Saving config to /root/.minikube/profiles/minikube/config.json ...
I0415 08:57:43.354226 23012 lock.go:36] WriteFile acquiring /root/.minikube/profiles/minikube/config.json: {Name:mk270d1b5db5965f2dc9e9e25770a63417031943 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0415 08:57:43.354670 23012 cache.go:182] Successfully downloaded all kic artifacts
I0415 08:57:43.354738 23012 start.go:314] acquiring machines lock for minikube: {Name:mkc8ab01ad3ea83211c505c81a7ee49a8e3ecb89 Clock:{} Delay:500ms Timeout:13m0s Cancel:}
I0415 08:57:43.354850 23012 start.go:318] acquired machines lock for "minikube" in 82.046µs
I0415 08:57:43.354885 23012 start.go:90] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.13@sha256:4d43acbd0050148d4bc399931f1b15253b5e73815b63a67b8ab4a5c9e523403f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.13 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[{Component:kubelet Key:serialize-image-pulls Value:false} {Component:kubelet Key:cgroup-driver Value:systemd} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP: Port:8443 KubernetesVersion:v1.18.13 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ExposedPorts:[]} &{Name:m01 IP: Port:8443 KubernetesVersion:v1.18.13 ControlPlane:true Worker:true}
I0415 08:57:43.355009 23012 start.go:127] createHost starting for "m01" (driver="none")
I0415 08:57:43.363547 23012 out.go:110] 🤹 Running on localhost (CPUs=8, Memory=16039MB, Disk=302386MB) ...
🤹 Running on localhost (CPUs=8, Memory=16039MB, Disk=302386MB) ...
I0415 08:57:43.363651 23012 exec_runner.go:49] Run: systemctl --version
I0415 08:57:43.371156 23012 start.go:164] libmachine.API.Create for "minikube" (driver="none")
I0415 08:57:43.371229 23012 client.go:165] LocalClient.Create starting
I0415 08:57:43.371285 23012 main.go:119] libmachine: Reading certificate data from /root/.minikube/certs/ca.pem
I0415 08:57:43.371327 23012 main.go:119] libmachine: Decoding PEM data...
I0415 08:57:43.371377 23012 main.go:119] libmachine: Parsing certificate...
I0415 08:57:43.371551 23012 main.go:119] libmachine: Reading certificate data from /root/.minikube/certs/cert.pem
I0415 08:57:43.371583 23012 main.go:119] libmachine: Decoding PEM data...
I0415 08:57:43.371603 23012 main.go:119] libmachine: Parsing certificate...
I0415 08:57:43.371995 23012 client.go:168] LocalClient.Create took 744.685µs
I0415 08:57:43.372033 23012 start.go:172] duration metric: libmachine.API.Create for "minikube" took 881.662µs
I0415 08:57:43.372050 23012 start.go:268] post-start starting for "minikube" (driver="none")
I0415 08:57:43.372066 23012 start.go:278] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0415 08:57:43.372122 23012 exec_runner.go:49] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0415 08:57:43.381425 23012 main.go:119] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0415 08:57:43.381476 23012 main.go:119] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0415 08:57:43.381501 23012 main.go:119] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0415 08:57:43.436777 23012 out.go:110] ℹ️ OS release is Ubuntu 18.04.2 LTS
ℹ️ OS release is Ubuntu 18.04.2 LTS
I0415 08:57:43.436852 23012 filesync.go:118] Scanning /root/.minikube/addons for local assets ...
I0415 08:57:43.436958 23012 filesync.go:118] Scanning /root/.minikube/files for local assets ...
I0415 08:57:43.437007 23012 start.go:271] post-start completed in 64.939651ms
I0415 08:57:43.441036 23012 profile.go:150] Saving config to /root/.minikube/profiles/minikube/config.json ...
I0415 08:57:43.441210 23012 start.go:130] duration metric: createHost completed in 86.180538ms
I0415 08:57:43.441252 23012 start.go:81] releasing machines lock for "minikube", held for 86.379922ms
I0415 08:57:43.488963 23012 out.go:110] 🌐 Found network options:
🌐 Found network options:
I0415 08:57:43.491933 23012 out.go:110] ▪ NO_PROXY=135.249.163.131,11.1.1.131,30.1.1.100,10.96.0.1,10.96.0.10,10.32.0.0/12
▪ NO_PROXY=135.249.163.131,11.1.1.131,30.1.1.100,10.96.0.1,10.96.0.10,10.32.0.0/12
I0415 08:57:43.518314 23012 out.go:110] ▪ http_proxy=http://10.158.100.6:8080
▪ http_proxy=http://10.158.100.6:8080
I0415 08:57:43.522340 23012 out.go:110] ▪ https_proxy=http://10.158.100.6:8080
▪ https_proxy=http://10.158.100.6:8080
I0415 08:57:43.525733 23012 out.go:110] ▪ no_proxy=135.249.163.131,11.1.1.131,30.1.1.100,10.96.0.1,10.96.0.10,10.32.0.0/12
▪ no_proxy=135.249.163.131,11.1.1.131,30.1.1.100,10.96.0.1,10.96.0.10,10.32.0.0/12
I0415 08:57:43.525843 23012 exec_runner.go:49] Run: sudo systemctl is-active --quiet service containerd
I0415 08:57:43.526400 23012 exec_runner.go:49] Run: curl -sS -m 2 https://k8s.gcr.io/
I0415 08:57:43.540013 23012 exec_runner.go:49] Run: sudo systemctl cat docker.service
I0415 08:57:43.549721 23012 exec_runner.go:49] Run: sudo systemctl daemon-reload
I0415 08:57:43.748444 23012 exec_runner.go:49] Run: sudo systemctl start docker
I0415 08:57:43.761290 23012 exec_runner.go:49] Run: docker version --format {{.Server.Version}}
I0415 08:57:43.848142 23012 out.go:110] 🐳 Preparing Kubernetes v1.18.13 on Docker 19.03.13 ...
🐳 Preparing Kubernetes v1.18.13 on Docker 19.03.13 ...
I0415 08:57:43.851172 23012 out.go:110] ▪ env NO_PROXY=135.249.163.131,11.1.1.131,30.1.1.100,10.96.0.1,10.96.0.10,10.32.0.0/12
▪ env NO_PROXY=135.249.163.131,11.1.1.131,30.1.1.100,10.96.0.1,10.96.0.10,10.32.0.0/12
I0415 08:57:43.852623 23012 out.go:110] ▪ env HTTP_PROXY=http://10.158.100.6:8080
▪ env HTTP_PROXY=http://10.158.100.6:8080
I0415 08:57:43.854912 23012 out.go:110] ▪ env HTTPS_PROXY=http://10.158.100.6:8080
▪ env HTTPS_PROXY=http://10.158.100.6:8080
I0415 08:57:43.855010 23012 exec_runner.go:49] Run: grep 127.0.0.1 host.minikube.internal$ /etc/hosts
I0415 08:57:43.860306 23012 out.go:110] ▪ kubelet.serialize-image-pulls=false
▪ kubelet.serialize-image-pulls=false
I0415 08:57:43.862727 23012 out.go:110] ▪ kubelet.cgroup-driver=systemd
▪ kubelet.cgroup-driver=systemd
I0415 08:57:43.864775 23012 out.go:110] ▪ kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
▪ kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
I0415 08:57:43.864902 23012 preload.go:97] Checking if preload exists for k8s version v1.18.13 and runtime docker
W0415 08:57:44.491225 23012 preload.go:118] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v6-v1.18.13-docker-overlay2-amd64.tar.lz4 status code: 404
I0415 08:57:44.491550 23012 exec_runner.go:49] Run: docker info --format {{.CgroupDriver}}
I0415 08:57:44.617001 23012 cni.go:74] Creating CNI manager for ""
I0415 08:57:44.617043 23012 cni.go:117] CNI unnecessary in this configuration, recommending no CNI
I0415 08:57:44.617076 23012 kubeadm.go:84] Using pod CIDR:
I0415 08:57:44.617110 23012 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet: AdvertiseAddress:11.1.1.131 APIServerPort:8443 KubernetesVersion:v1.18.13 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:host-11-1-1-131 DNSDomain:cluster.local CRISocket: ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "11.1.1.131"]]} {Component:controllerManager ExtraArgs:map[leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:11.1.1.131 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0415 08:57:44.617387 23012 kubeadm.go:154] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 11.1.1.131
  bindPort: 8443
bootstrapTokens:
  - ttl: 24h0m0s
    usages:
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "host-11-1-1-131"
  kubeletExtraArgs:
    node-ip: 11.1.1.131
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "11.1.1.131"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.18.13
networking:
  dnsDomain: cluster.local
  podSubnet: ""
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: ""
metricsBindAddress: 11.1.1.131:10249
I0415 08:57:44.617635 23012 kubeadm.go:822] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.13/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=systemd --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=host-11-1-1-131 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=11.1.1.131 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false
[Install]
config:
{KubernetesVersion:v1.18.13 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[{Component:kubelet Key:serialize-image-pulls Value:false} {Component:kubelet Key:cgroup-driver Value:systemd} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0415 08:57:44.617776 23012 exec_runner.go:49] Run: sudo ls /var/lib/minikube/binaries/v1.18.13
I0415 08:57:44.626340 23012 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.18.13: exit status 2
stdout:
stderr:
ls: cannot access '/var/lib/minikube/binaries/v1.18.13': No such file or directory
Initiating transfer...
I0415 08:57:44.626463 23012 exec_runner.go:49] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.18.13
I0415 08:57:44.633518 23012 binary.go:56] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.18.13/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.18.13/bin/linux/amd64/kubectl.sha256
I0415 08:57:44.633674 23012 vm_assets.go:96] NewFileAsset: /root/.minikube/cache/linux/v1.18.13/kubectl -> /var/lib/minikube/binaries/v1.18.13/kubectl
I0415 08:57:44.633611 23012 binary.go:56] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.18.13/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.18.13/bin/linux/amd64/kubelet.sha256
I0415 08:57:44.633768 23012 exec_runner.go:98] cp: /root/.minikube/cache/linux/v1.18.13/kubectl --> /var/lib/minikube/binaries/v1.18.13/kubectl (43986944 bytes)
I0415 08:57:44.633775 23012 exec_runner.go:49] Run: sudo systemctl is-active --quiet service kubelet
I0415 08:57:44.633633 23012 binary.go:56] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.18.13/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.18.13/bin/linux/amd64/kubeadm.sha256
I0415 08:57:44.633854 23012 vm_assets.go:96] NewFileAsset: /root/.minikube/cache/linux/v1.18.13/kubeadm -> /var/lib/minikube/binaries/v1.18.13/kubeadm
I0415 08:57:44.633890 23012 exec_runner.go:98] cp: /root/.minikube/cache/linux/v1.18.13/kubeadm --> /var/lib/minikube/binaries/v1.18.13/kubeadm (39772160 bytes)
I0415 08:57:44.648185 23012 vm_assets.go:96] NewFileAsset: /root/.minikube/cache/linux/v1.18.13/kubelet -> /var/lib/minikube/binaries/v1.18.13/kubelet
I0415 08:57:44.648268 23012 exec_runner.go:98] cp: /root/.minikube/cache/linux/v1.18.13/kubelet --> /var/lib/minikube/binaries/v1.18.13/kubelet (113247064 bytes)
I0415 08:57:44.814861 23012 exec_runner.go:49] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0415 08:57:44.820654 23012 exec_runner.go:91] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ...
I0415 08:57:44.820735 23012 exec_runner.go:98] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (441 bytes)
I0415 08:57:44.820810 23012 exec_runner.go:91] found /lib/systemd/system/kubelet.service, removing ...
I0415 08:57:44.820846 23012 exec_runner.go:98] cp: memory --> /lib/systemd/system/kubelet.service (350 bytes)
I0415 08:57:44.820886 23012 exec_runner.go:98] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (1786 bytes)
I0415 08:57:44.820953 23012 exec_runner.go:49] Run: grep 11.1.1.131 control-plane.minikube.internal$ /etc/hosts
I0415 08:57:44.822264 23012 certs.go:52] Setting up /root/.minikube/profiles/minikube for IP: 11.1.1.131
I0415 08:57:44.822305 23012 certs.go:169] skipping minikubeCA CA generation: /root/.minikube/ca.key
I0415 08:57:44.822329 23012 certs.go:169] skipping proxyClientCA CA generation: /root/.minikube/proxy-client-ca.key
I0415 08:57:44.822383 23012 certs.go:273] generating minikube-user signed cert: /root/.minikube/profiles/minikube/client.key
I0415 08:57:44.822400 23012 crypto.go:69] Generating cert /root/.minikube/profiles/minikube/client.crt with IP's: []
I0415 08:57:44.957893 23012 crypto.go:157] Writing cert to /root/.minikube/profiles/minikube/client.crt ...
I0415 08:57:44.957948 23012 lock.go:36] WriteFile acquiring /root/.minikube/profiles/minikube/client.crt: {Name:mk09878e812b07af637940656ec44996daba95aa Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0415 08:57:44.958139 23012 crypto.go:165] Writing key to /root/.minikube/profiles/minikube/client.key ...
I0415 08:57:44.958156 23012 lock.go:36] WriteFile acquiring /root/.minikube/profiles/minikube/client.key: {Name:mkf3b978f9858871583d8228f83a87a85b7d106f Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0415 08:57:44.958232 23012 certs.go:273] generating minikube signed cert: /root/.minikube/profiles/minikube/apiserver.key.62a06e7a
I0415 08:57:44.958242 23012 crypto.go:69] Generating cert /root/.minikube/profiles/minikube/apiserver.crt.62a06e7a with IP's: [11.1.1.131 10.96.0.1 127.0.0.1 10.0.0.1]
I0415 08:57:45.165373 23012 crypto.go:157] Writing cert to /root/.minikube/profiles/minikube/apiserver.crt.62a06e7a ...
I0415 08:57:45.165420 23012 lock.go:36] WriteFile acquiring /root/.minikube/profiles/minikube/apiserver.crt.62a06e7a: {Name:mkbabc99ece9bb84fca657395b476f3339cd2678 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0415 08:57:45.165618 23012 crypto.go:165] Writing key to /root/.minikube/profiles/minikube/apiserver.key.62a06e7a ...
I0415 08:57:45.165634 23012 lock.go:36] WriteFile acquiring /root/.minikube/profiles/minikube/apiserver.key.62a06e7a: {Name:mkbd5aa045673e53546d49a31ebf8cc1277be131 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0415 08:57:45.165722 23012 certs.go:284] copying /root/.minikube/profiles/minikube/apiserver.crt.62a06e7a -> /root/.minikube/profiles/minikube/apiserver.crt
I0415 08:57:45.165784 23012 certs.go:288] copying /root/.minikube/profiles/minikube/apiserver.key.62a06e7a -> /root/.minikube/profiles/minikube/apiserver.key
I0415 08:57:45.165823 23012 certs.go:273] generating aggregator signed cert: /root/.minikube/profiles/minikube/proxy-client.key
I0415 08:57:45.165831 23012 crypto.go:69] Generating cert /root/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0415 08:57:45.271448 23012 crypto.go:157] Writing cert to /root/.minikube/profiles/minikube/proxy-client.crt ...
I0415 08:57:45.271495 23012 lock.go:36] WriteFile acquiring /root/.minikube/profiles/minikube/proxy-client.crt: {Name:mkcab3ddb18cd096d978df14d87a44e804896057 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0415 08:57:45.271696 23012 crypto.go:165] Writing key to /root/.minikube/profiles/minikube/proxy-client.key ...
I0415 08:57:45.271709 23012 lock.go:36] WriteFile acquiring /root/.minikube/profiles/minikube/proxy-client.key: {Name:mkaff5bf6f623f02423597918f5f33c2a99a3db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0415 08:57:45.271787 23012 vm_assets.go:96] NewFileAsset: /root/.minikube/profiles/minikube/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0415 08:57:45.271805 23012 vm_assets.go:96] NewFileAsset: /root/.minikube/profiles/minikube/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0415 08:57:45.271821 23012 vm_assets.go:96] NewFileAsset: /root/.minikube/profiles/minikube/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0415 08:57:45.271836 23012 vm_assets.go:96] NewFileAsset: /root/.minikube/profiles/minikube/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0415 08:57:45.271848 23012 vm_assets.go:96] NewFileAsset: /root/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0415 08:57:45.271857 23012 vm_assets.go:96] NewFileAsset: /root/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0415 08:57:45.271869 23012 vm_assets.go:96] NewFileAsset: /root/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0415 08:57:45.271879 23012 vm_assets.go:96] NewFileAsset: /root/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0415 08:57:45.271934 23012 certs.go:348] found cert: /root/.minikube/certs/root/.minikube/certs/ca-key.pem (1675 bytes)
I0415 08:57:45.271999 23012 certs.go:348] found cert: /root/.minikube/certs/root/.minikube/certs/ca.pem (1070 bytes)
I0415 08:57:45.272045 23012 certs.go:348] found cert: /root/.minikube/certs/root/.minikube/certs/cert.pem (1115 bytes)
I0415 08:57:45.272076 23012 certs.go:348] found cert: /root/.minikube/certs/root/.minikube/certs/key.pem (1679 bytes)
I0415 08:57:45.272114 23012 vm_assets.go:96] NewFileAsset: /root/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0415 08:57:45.274011 23012 exec_runner.go:98] cp: /root/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0415 08:57:45.275010 23012 exec_runner.go:98] cp: /root/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0415 08:57:45.275494 23012 exec_runner.go:98] cp: /root/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0415 08:57:45.275570 23012 exec_runner.go:98] cp: /root/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0415 08:57:45.275640 23012 exec_runner.go:98] cp: /root/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0415 08:57:45.275734 23012 exec_runner.go:98] cp: /root/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0415 08:57:45.275777 23012 exec_runner.go:98] cp: /root/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0415 08:57:45.275823 23012 exec_runner.go:98] cp: /root/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0415 08:57:45.275874 23012 exec_runner.go:91] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
I0415 08:57:45.275923 23012 exec_runner.go:98] cp: /root/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0415 08:57:45.275962 23012 exec_runner.go:98] cp: memory --> /var/lib/minikube/kubeconfig (398 bytes)
I0415 08:57:45.276026 23012 exec_runner.go:49] Run: openssl version
I0415 08:57:45.280295 23012 exec_runner.go:49] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0415 08:57:45.288626 23012 exec_runner.go:49] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0415 08:57:45.290316 23012 certs.go:389] hashing: -rw-r--r-- 1 root root 1111 Apr 15 08:57 /usr/share/ca-certificates/minikubeCA.pem
I0415 08:57:45.290377 23012 exec_runner.go:49] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0415 08:57:45.293489 23012 exec_runner.go:49] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0415 08:57:45.301239 23012 kubeadm.go:324] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.13@sha256:4d43acbd0050148d4bc399931f1b15253b5e73815b63a67b8ab4a5c9e523403f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.13 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[{Component:kubelet Key:serialize-image-pulls Value:false} {Component:kubelet Key:cgroup-driver Value:systemd} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:11.1.1.131 Port:8443 KubernetesVersion:v1.18.13 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ExposedPorts:[]}
I0415 08:57:45.301426 23012 exec_runner.go:49] Run: docker ps --filter status=paused --filter=name=k8s_.*(kube-system) --format={{.ID}}
I0415 08:57:45.372637 23012 exec_runner.go:49] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0415 08:57:45.380033 23012 exec_runner.go:49] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0415 08:57:45.389776 23012 exec_runner.go:49] Run: docker version --format {{.Server.Version}}
I0415 08:57:45.462509 23012 exec_runner.go:49] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0415 08:57:45.470923 23012 kubeadm.go:147] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: exit status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0415 08:57:45.471017 23012 exec_runner.go:49] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.13:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap"
I0415 09:01:49.882697 23012 exec_runner.go:78] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.13:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap": (4m4.41134862s)
W0415 09:01:49.884224 23012 out.go:146] 💢 initialization failed, will try again: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.13:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap": exit status 1
stdout:
[init] Using Kubernetes version: v1.18.13
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [host-11-1-1-131 localhost] and IPs [11.1.1.131 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [host-11-1-1-131 localhost] and IPs [11.1.1.131 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
stderr:
W0415 08:57:45.523765 23193 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING FileExisting-ebtables]: ebtables not found in system path
[WARNING FileExisting-ethtool]: ethtool not found in system path
[WARNING FileExisting-socat]: socat not found in system path
W0415 08:57:49.344925 23193 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0415 08:57:49.345970 23193 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
💢 initialization failed, will try again: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.13:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap": exit status 1
stdout:
[init] Using Kubernetes version: v1.18.13
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [host-11-1-1-131 localhost] and IPs [11.1.1.131 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [host-11-1-1-131 localhost] and IPs [11.1.1.131 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
stderr:
W0415 08:57:45.523765 23193 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING FileExisting-ebtables]: ebtables not found in system path
[WARNING FileExisting-ethtool]: ethtool not found in system path
[WARNING FileExisting-socat]: socat not found in system path
W0415 08:57:49.344925 23193 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0415 08:57:49.345970 23193 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
I0415 09:01:49.884911 23012 exec_runner.go:49] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.13:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force"
I0415 09:01:53.028262 23012 exec_runner.go:78] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.13:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force": (3.143276841s)
I0415 09:01:53.028375 23012 exec_runner.go:49] Run: sudo systemctl stop -f kubelet
I0415 09:01:53.045681 23012 exec_runner.go:49] Run: docker ps -a --filter=name=k8s_.*(kube-system) --format={{.ID}}
I0415 09:01:53.100179 23012 exec_runner.go:49] Run: docker version --format {{.Server.Version}}
I0415 09:01:53.169920 23012 exec_runner.go:49] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0415 09:01:53.176011 23012 kubeadm.go:147] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: exit status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0415 09:01:53.176100 23012 exec_runner.go:49] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.13:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap"
I0415 09:05:55.890345 23012 exec_runner.go:78] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.13:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap": (4m2.714133155s)
I0415 09:05:55.890664 23012 kubeadm.go:326] StartCluster complete in 8m10.589439357s
I0415 09:05:55.891053 23012 exec_runner.go:49] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0415 09:05:55.957360 23012 logs.go:206] 1 containers: [eb97b0f35907]
I0415 09:05:55.957499 23012 exec_runner.go:49] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0415 09:05:56.018273 23012 logs.go:206] 1 containers: [fdaa9dd61930]
I0415 09:05:56.018397 23012 exec_runner.go:49] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0415 09:05:56.080144 23012 logs.go:206] 0 containers: []
W0415 09:05:56.080186 23012 logs.go:208] No container was found matching "coredns"
I0415 09:05:56.080326 23012 exec_runner.go:49] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0415 09:05:56.136665 23012 logs.go:206] 1 containers: [103c95ab9d83]
I0415 09:05:56.136763 23012 exec_runner.go:49] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0415 09:05:56.181203 23012 logs.go:206] 0 containers: []
W0415 09:05:56.181249 23012 logs.go:208] No container was found matching "kube-proxy"
I0415 09:05:56.181326 23012 exec_runner.go:49] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0415 09:05:56.227708 23012 logs.go:206] 0 containers: []
W0415 09:05:56.227740 23012 logs.go:208] No container was found matching "kubernetes-dashboard"
I0415 09:05:56.227797 23012 exec_runner.go:49] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0415 09:05:56.285284 23012 logs.go:206] 0 containers: []
W0415 09:05:56.285319 23012 logs.go:208] No container was found matching "storage-provisioner"
I0415 09:05:56.285372 23012 exec_runner.go:49] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0415 09:05:56.341446 23012 logs.go:206] 1 containers: [3a93b14948dd]
I0415 09:05:56.341518 23012 logs.go:120] Gathering logs for dmesg ...
I0415 09:05:56.341548 23012 exec_runner.go:49] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0415 09:05:56.352432 23012 logs.go:120] Gathering logs for describe nodes ...
I0415 09:05:56.352486 23012 exec_runner.go:49] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.13/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0415 09:05:56.601780 23012 logs.go:120] Gathering logs for kube-apiserver [eb97b0f35907] ...
I0415 09:05:56.601885 23012 exec_runner.go:49] Run: /bin/bash -c "docker logs --tail 400 eb97b0f35907"
I0415 09:05:56.664727 23012 logs.go:120] Gathering logs for etcd [fdaa9dd61930] ...
I0415 09:05:56.664767 23012 exec_runner.go:49] Run: /bin/bash -c "docker logs --tail 400 fdaa9dd61930"
I0415 09:05:56.714681 23012 logs.go:120] Gathering logs for kube-scheduler [103c95ab9d83] ...
I0415 09:05:56.714741 23012 exec_runner.go:49] Run: /bin/bash -c "docker logs --tail 400 103c95ab9d83"
I0415 09:05:56.778335 23012 logs.go:120] Gathering logs for Docker ...
I0415 09:05:56.778381 23012 exec_runner.go:49] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0415 09:05:56.861958 23012 logs.go:120] Gathering logs for kubelet ...
I0415 09:05:56.862001 23012 exec_runner.go:49] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0415 09:05:56.902703 23012 logs.go:135] Found kubelet problem: Apr 15 08:58:19 host-11-1-1-131 kubelet[23794]: E0415 08:58:19.957725 23794 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
W0415 09:05:56.902977 23012 logs.go:135] Found kubelet problem: Apr 15 08:58:20 host-11-1-1-131 kubelet[23794]: E0415 08:58:20.968952 23794 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
W0415 09:05:56.903446 23012 logs.go:135] Found kubelet problem: Apr 15 08:58:48 host-11-1-1-131 kubelet[23794]: E0415 08:58:48.220450 23794 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
W0415 09:05:56.903689 23012 logs.go:135] Found kubelet problem: Apr 15 08:58:50 host-11-1-1-131 kubelet[23794]: E0415 08:58:50.775939 23794 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
W0415 09:05:56.903932 23012 logs.go:135] Found kubelet problem: Apr 15 08:59:01 host-11-1-1-131 kubelet[23794]: E0415 08:59:01.701709 23794 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
W0415 09:05:56.904386 23012 logs.go:135] Found kubelet problem: Apr 15 08:59:28 host-11-1-1-131 kubelet[23794]: E0415 08:59:28.499973 23794 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
W0415 09:05:56.904630 23012 logs.go:135] Found kubelet problem: Apr 15 08:59:30 host-11-1-1-131 kubelet[23794]: E0415 08:59:30.778693 23794 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
W0415 09:05:56.904880 23012 logs.go:135] Found kubelet problem: Apr 15 08:59:45 host-11-1-1-131 kubelet[23794]: E0415 08:59:45.701931 23794 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
W0415 09:05:56.905120 23012 logs.go:135] Found kubelet problem: Apr 15 08:59:59 host-11-1-1-131 kubelet[23794]: E0415 08:59:59.701496 23794 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
W0415 09:05:56.905858 23012 logs.go:135] Found kubelet problem: Apr 15 09:00:23 host-11-1-1-131 kubelet[23794]: E0415 09:00:23.878684 23794 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
W0415 09:05:56.906093 23012 logs.go:135] Found kubelet problem: Apr 15 09:00:30 host-11-1-1-131 kubelet[23794]: E0415 09:00:30.777131 23794 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
W0415 09:05:56.906342 23012 logs.go:135] Found kubelet problem: Apr 15 09:00:41 host-11-1-1-131 kubelet[23794]: E0415 09:00:41.701455 23794 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
W0415 09:05:56.906600 23012 logs.go:135] Found kubelet problem: Apr 15 09:00:53 host-11-1-1-131 kubelet[23794]: E0415 09:00:53.701338 23794 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
W0415 09:05:56.906840 23012 logs.go:135] Found kubelet problem: Apr 15 09:01:05 host-11-1-1-131 kubelet[23794]: E0415 09:01:05.702558 23794 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
W0415 09:05:56.907077 23012 logs.go:135] Found kubelet problem: Apr 15 09:01:18 host-11-1-1-131 kubelet[23794]: E0415 09:01:18.702656 23794 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
W0415 09:05:56.907323 23012 logs.go:135] Found kubelet problem: Apr 15 09:01:33 host-11-1-1-131 kubelet[23794]: E0415 09:01:33.702241 23794 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
W0415 09:05:56.926111 23012 logs.go:135] Found kubelet problem: Apr 15 09:02:23 host-11-1-1-131 kubelet[27411]: E0415 09:02:23.730374 27411 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
W0415 09:05:56.926360 23012 logs.go:135] Found kubelet problem: Apr 15 09:02:25 host-11-1-1-131 kubelet[27411]: E0415 09:02:25.073294 27411 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
W0415 09:05:56.926777 23012 logs.go:135] Found kubelet problem: Apr 15 09:02:49 host-11-1-1-131 kubelet[27411]: E0415 09:02:49.929526 27411 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
W0415 09:05:56.927004 23012 logs.go:135] Found kubelet problem: Apr 15 09:02:55 host-11-1-1-131 kubelet[27411]: E0415 09:02:55.073399 27411 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
W0415 09:05:56.927243 23012 logs.go:135] Found kubelet problem: Apr 15 09:03:07 host-11-1-1-131 kubelet[27411]: E0415 09:03:07.469672 27411 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
W0415 09:05:56.927655 23012 logs.go:135] Found kubelet problem: Apr 15 09:03:34 host-11-1-1-131 kubelet[27411]: E0415 09:03:34.230118 27411 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
W0415 09:05:56.927887 23012 logs.go:135] Found kubelet problem: Apr 15 09:03:35 host-11-1-1-131 kubelet[27411]: E0415 09:03:35.244415 27411 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
W0415 09:05:56.928123 23012 logs.go:135] Found kubelet problem: Apr 15 09:03:46 host-11-1-1-131 kubelet[27411]: E0415 09:03:46.468153 27411 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
W0415 09:05:56.928353 23012 logs.go:135] Found kubelet problem: Apr 15 09:03:59 host-11-1-1-131 kubelet[27411]: E0415 09:03:59.469525 27411 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
W0415 09:05:56.928581 23012 logs.go:135] Found kubelet problem: Apr 15 09:04:10 host-11-1-1-131 kubelet[27411]: E0415 09:04:10.468197 27411 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
W0415 09:05:56.929420 23012 logs.go:135] Found kubelet problem: Apr 15 09:04:34 host-11-1-1-131 kubelet[27411]: E0415 09:04:34.636909 27411 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
W0415 09:05:56.929645 23012 logs.go:135] Found kubelet problem: Apr 15 09:04:35 host-11-1-1-131 kubelet[27411]: E0415 09:04:35.650545 27411 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
W0415 09:05:56.929871 23012 logs.go:135] Found kubelet problem: Apr 15 09:04:49 host-11-1-1-131 kubelet[27411]: E0415 09:04:49.468657 27411 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
W0415 09:05:56.930098 23012 logs.go:135] Found kubelet problem: Apr 15 09:05:02 host-11-1-1-131 kubelet[27411]: E0415 09:05:02.468905 27411 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
W0415 09:05:56.930339 23012 logs.go:135] Found kubelet problem: Apr 15 09:05:16 host-11-1-1-131 kubelet[27411]: E0415 09:05:16.468480 27411 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
W0415 09:05:56.930564 23012 logs.go:135] Found kubelet problem: Apr 15 09:05:31 host-11-1-1-131 kubelet[27411]: E0415 09:05:31.468463 27411 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
W0415 09:05:56.930816 23012 logs.go:135] Found kubelet problem: Apr 15 09:05:46 host-11-1-1-131 kubelet[27411]: E0415 09:05:46.468566 27411 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
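Taken together, the entries above show a single failure: kube-controller-manager is in CrashLoopBackOff, with the kubelet's restart back-off climbing from 10s to 1m20s across two kubelet instances (PIDs 23794 and 27411). minikube pulls the container's own log just below; to reproduce that check by hand, something along these lines works (the container ID must come from a current docker ps listing, since it changes on every restart):

# List controller-manager containers, then dump the newest one's log
sudo docker ps -a --filter name=k8s_kube-controller-manager --format '{{.ID}} {{.Status}}'
sudo docker logs --tail 50 3a93b14948dd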
I0415 09:05:56.930834 23012 logs.go:120] Gathering logs for kube-controller-manager [3a93b14948dd] ...
I0415 09:05:56.930856 23012 exec_runner.go:49] Run: /bin/bash -c "docker logs --tail 400 3a93b14948dd"
I0415 09:05:56.995067 23012 logs.go:120] Gathering logs for container status ...
I0415 09:05:56.995174 23012 exec_runner.go:49] Run: /bin/bash -c "sudo
which crictl || echo crictl
ps -a || sudo docker ps -a"W0415 09:05:57.081367 23012 out.go:258] Error starting cluster: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.13:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap": exit status 1
stdout:
[init] Using Kubernetes version: v1.18.13
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
stderr:
W0415 09:01:53.248306 27089 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING FileExisting-ebtables]: ebtables not found in system path
[WARNING FileExisting-ethtool]: ethtool not found in system path
[WARNING FileExisting-socat]: socat not found in system path
W0415 09:01:55.354627 27089 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0415 09:01:55.355654 27089 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0415 09:05:57.081561 23012 out.go:146]
W0415 09:05:57.081853 23012 out.go:146] 💣 Error starting cluster: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.13:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap": exit status 1
stdout:
[init] Using Kubernetes version: v1.18.13
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
stderr:
W0415 09:01:53.248306 27089 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING FileExisting-ebtables]: ebtables not found in system path
[WARNING FileExisting-ethtool]: ethtool not found in system path
[WARNING FileExisting-socat]: socat not found in system path
W0415 09:01:55.354627 27089 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0415 09:01:55.355654 27089 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0415 09:05:57.082015 23012 out.go:146]
W0415 09:05:57.082057 23012 out.go:146] 😿 minikube is exiting due to an error. If the above message is not useful, open an issue:
W0415 09:05:57.082095 23012 out.go:146] 👉 https://github.com/kubernetes/minikube/issues/new/choose
I0415 09:05:57.102944 23012 out.go:110] ❌ Problems detected in kubelet:
I0415 09:05:57.104614 23012 out.go:110] Apr 15 08:58:19 host-11-1-1-131 kubelet[23794]: E0415 08:58:19.957725 23794 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
I0415 09:05:57.106793 23012 out.go:110] Apr 15 08:58:20 host-11-1-1-131 kubelet[23794]: E0415 08:58:20.968952 23794 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
I0415 09:05:57.108314 23012 out.go:110] Apr 15 08:58:48 host-11-1-1-131 kubelet[23794]: E0415 08:58:48.220450 23794 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
I0415 09:05:57.112633 23012 out.go:110]
W0415 09:05:57.112984 23012 out.go:146] ❌ Exiting due to K8S_KUBELET_NOT_RUNNING: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.13:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap": exit status 1
stdout:
[init] Using Kubernetes version: v1.18.13
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
stderr:
W0415 09:01:53.248306 27089 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING FileExisting-ebtables]: ebtables not found in system path
[WARNING FileExisting-ethtool]: ethtool not found in system path
[WARNING FileExisting-socat]: socat not found in system path
W0415 09:01:55.354627 27089 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0415 09:01:55.355654 27089 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0415 09:05:57.113923 23012 out.go:146] 💡 Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W0415 09:05:57.114057 23012 out.go:146] 🍿 Related issue: #4172
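minikube's generic hint above amounts to the two commands below. The cgroup-driver flag is a stock suggestion and may not address this particular failure, but the kubelet journal is worth checking either way:

# Inspect the kubelet's own journal for the underlying startup error
journalctl -xeu kubelet | tail -n 100

# Retry with the kubelet cgroup driver pinned to systemd, per the hint
minikube start --extra-config=kubelet.cgroup-driver=systemd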
Optional: Full output of minikube logs command:

==> container status <==
sudo: crictl: command not found
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
fce37d85888b a0f70a7cf739 "kube-controller-man…" 45 seconds ago Exited (255) 32 seconds ago k8s_kube-controller-manager_kube-controller-manager-host-11-1-1-131_kube-system_fee973aa24e6d51c26e210ab99143c53_192
eb97b0f35907 8836b0d760bf "kube-apiserver --ad…" 17 hours ago Up 17 hours k8s_kube-apiserver_kube-apiserver-host-11-1-1-131_kube-system_36fcc5100cf08c6511a396460b1517df_0
fdaa9dd61930 303ce5db0e90 "etcd --advertise-cl…" 17 hours ago Up 17 hours k8s_etcd_etcd-host-11-1-1-131_kube-system_13c9eb656f9d7ef837d20a1548070b92_0
103c95ab9d83 ef5be715de1b "kube-scheduler --au…" 17 hours ago Up 17 hours k8s_kube-scheduler_kube-scheduler-host-11-1-1-131_kube-system_b5039a93231442166cf93bb19d0a590b_0
124b198685e3 k8s.gcr.io/pause:3.2 "/pause" 17 hours ago Up 17 hours k8s_POD_kube-controller-manager-host-11-1-1-131_kube-system_fee973aa24e6d51c26e210ab99143c53_0
954559d8dcb5 k8s.gcr.io/pause:3.2 "/pause" 17 hours ago Up 17 hours k8s_POD_kube-apiserver-host-11-1-1-131_kube-system_36fcc5100cf08c6511a396460b1517df_0
713058bd0863 k8s.gcr.io/pause:3.2 "/pause" 17 hours ago Up 17 hours k8s_POD_etcd-host-11-1-1-131_kube-system_13c9eb656f9d7ef837d20a1548070b92_0
46ab68bfbe2e k8s.gcr.io/pause:3.2 "/pause" 17 hours ago Up 17 hours k8s_POD_kube-scheduler-host-11-1-1-131_kube-system_b5039a93231442166cf93bb19d0a590b_0
66eeb122b45b bbn-fn-ams-docker-local.artifactory-blr1.int.net.nokia.com/snmpmanager:9.7.07-436653 "/bin/sh -c /bin/sta…" 2 weeks ago Exited (137) 13 days ago amsapp
f09993d6f14c mariadb "docker-entrypoint.s…" 2 weeks ago Exited (137) 13 days ago amsdb
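The listing is telling: kube-apiserver, etcd and kube-scheduler have been up for 17 hours, while kube-controller-manager keeps exiting with status 255 (the _192 suffix on its container name marks the 192nd restart). Docker's recorded state for the last exited instance can be dumped directly (the ID comes from the first row above and will differ after the next restart):

# Show exit code, OOM flag and timestamps for the crashed container
sudo docker inspect fce37d85888b --format '{{json .State}}'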
==> describe nodes <==
Name: host-11-1-1-131
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=host-11-1-1-131
kubernetes.io/os=linux
Annotations: volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Thu, 15 Apr 2021 09:02:02 +0000
Taints: node.kubernetes.io/not-ready:NoSchedule
Unschedulable: false
Lease:
HolderIdentity: host-11-1-1-131
AcquireTime: <unset>
RenewTime: Fri, 16 Apr 2021 01:35:47 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
MemoryPressure False Fri, 16 Apr 2021 01:32:05 +0000 Thu, 15 Apr 2021 09:02:00 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Fri, 16 Apr 2021 01:32:05 +0000 Thu, 15 Apr 2021 09:02:00 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Fri, 16 Apr 2021 01:32:05 +0000 Thu, 15 Apr 2021 09:02:00 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Fri, 16 Apr 2021 01:32:05 +0000 Thu, 15 Apr 2021 09:02:12 +0000 KubeletReady kubelet is posting ready status. AppArmor enabled
Addresses:
InternalIP: 11.1.1.131
Hostname: host-11-1-1-131
Capacity:
cpu: 8
ephemeral-storage: 309644268Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 16424652Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 309644268Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 16424652Ki
pods: 110
System Info:
Machine ID: 322febc08b474564916c8755244fec11
System UUID: D3D37B5C-CCB0-4122-8D46-C68414F952AF
Boot ID: 24d66fd0-4fb7-414b-a36e-bbcd9cbf90fa
Kernel Version: 4.15.0-48-generic
OS Image: Ubuntu 18.04.2 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://19.3.13
Kubelet Version: v1.18.13
Kube-Proxy Version: v1.18.13
Non-terminated Pods: (4 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
kube-system etcd-host-11-1-1-131 0 (0%) 0 (0%) 0 (0%) 0 (0%) 16h
kube-system kube-apiserver-host-11-1-1-131 250m (3%) 0 (0%) 0 (0%) 0 (0%) 16h
kube-system kube-controller-manager-host-11-1-1-131 200m (2%) 0 (0%) 0 (0%) 0 (0%) 16h
kube-system kube-scheduler-host-11-1-1-131 100m (1%) 0 (0%) 0 (0%) 0 (0%) 16h
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
cpu 550m (6%) 0 (0%)
memory 0 (0%) 0 (0%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events: <none>
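One detail worth flagging in the node description: the node reports Ready, yet it still carries the node.kubernetes.io/not-ready:NoSchedule taint from initialization. Listing the taints directly shows whether it was ever cleared:

# Print any taints still present on the node
kubectl get node host-11-1-1-131 -o jsonpath='{.spec.taints}'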
==> dmesg <==
[Apr 5 04:44] #2
[ +0.003964] #3
[ +0.004051] #4
[ +0.004047] #5
[ +0.003968] #6
[ +0.003930] #7
[ +0.030289] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[ +0.135089] ACPI: PCI Interrupt Link [LNKD] enabled at IRQ 11
[ +0.721931] ACPI: PCI Interrupt Link [LNKC] enabled at IRQ 10
[ +0.045182] ACPI: PCI Interrupt Link [LNKA] enabled at IRQ 10
[ +0.022413] ACPI: PCI Interrupt Link [LNKB] enabled at IRQ 11
[Apr 5 04:45] kauditd_printk_skb: 16 callbacks suppressed
[Apr 5 04:47] print_req_error: I/O error, dev loop0, sector 0
[ +0.000018] SQUASHFS error: squashfs_read_data failed to read block 0x0
[ +0.000002] squashfs: SQUASHFS error: unable to read squashfs_super_block
==> etcd [fdaa9dd61930] <==
2021-04-15 23:12:00.841659 I | mvcc: finished scheduled compaction at 11418 (took 774.019µs)
2021-04-15 23:17:00.853147 I | mvcc: store.index: compact 11484
2021-04-15 23:17:00.853885 I | mvcc: finished scheduled compaction at 11484 (took 331.623µs)
2021-04-15 23:22:00.858969 I | mvcc: store.index: compact 11551
2021-04-15 23:22:00.860218 I | mvcc: finished scheduled compaction at 11551 (took 885.998µs)
2021-04-15 23:27:00.863252 I | mvcc: store.index: compact 11617
2021-04-15 23:27:00.864167 I | mvcc: finished scheduled compaction at 11617 (took 453.419µs)
2021-04-15 23:32:00.869929 I | mvcc: store.index: compact 11683
2021-04-15 23:32:00.871003 I | mvcc: finished scheduled compaction at 11683 (took 507.05µs)
2021-04-15 23:37:00.882877 I | mvcc: store.index: compact 11750
2021-04-15 23:37:00.884315 I | mvcc: finished scheduled compaction at 11750 (took 700.602µs)
2021-04-15 23:42:00.888802 I | mvcc: store.index: compact 11816
2021-04-15 23:42:00.889696 I | mvcc: finished scheduled compaction at 11816 (took 386.789µs)
2021-04-15 23:47:00.894521 I | mvcc: store.index: compact 11882
2021-04-15 23:47:00.896062 I | mvcc: finished scheduled compaction at 11882 (took 917.859µs)
2021-04-15 23:52:00.900084 I | mvcc: store.index: compact 11949
2021-04-15 23:52:00.901265 I | mvcc: finished scheduled compaction at 11949 (took 533.881µs)
2021-04-15 23:57:00.905824 I | mvcc: store.index: compact 12015
2021-04-15 23:57:00.906986 I | mvcc: finished scheduled compaction at 12015 (took 620.726µs)
2021-04-16 00:02:00.910474 I | mvcc: store.index: compact 12081
2021-04-16 00:02:00.911416 I | mvcc: finished scheduled compaction at 12081 (took 441.792µs)
2021-04-16 00:07:00.915599 I | mvcc: store.index: compact 12148
2021-04-16 00:07:00.916725 I | mvcc: finished scheduled compaction at 12148 (took 708.395µs)
2021-04-16 00:12:00.921316 I | mvcc: store.index: compact 12214
2021-04-16 00:12:00.922417 I | mvcc: finished scheduled compaction at 12214 (took 450.409µs)
2021-04-16 00:17:00.926942 I | mvcc: store.index: compact 12280
2021-04-16 00:17:00.928008 I | mvcc: finished scheduled compaction at 12280 (took 560.562µs)
2021-04-16 00:22:00.932200 I | mvcc: store.index: compact 12348
2021-04-16 00:22:00.933475 I | mvcc: finished scheduled compaction at 12348 (took 604.851µs)
2021-04-16 00:27:00.937789 I | mvcc: store.index: compact 12414
2021-04-16 00:27:00.938477 I | mvcc: finished scheduled compaction at 12414 (took 232.484µs)
2021-04-16 00:32:00.943223 I | mvcc: store.index: compact 12480
2021-04-16 00:32:00.944300 I | mvcc: finished scheduled compaction at 12480 (took 551.71µs)
2021-04-16 00:37:00.948502 I | mvcc: store.index: compact 12546
2021-04-16 00:37:00.949373 I | mvcc: finished scheduled compaction at 12546 (took 382.431µs)
2021-04-16 00:37:35.471628 I | etcdserver: start to snapshot (applied: 30003, lastsnap: 20002)
2021-04-16 00:37:35.474786 I | etcdserver: saved snapshot at index 30003
2021-04-16 00:37:35.475254 I | etcdserver: compacted raft log at 25003
2021-04-16 00:42:00.954489 I | mvcc: store.index: compact 12610
2021-04-16 00:42:00.955701 I | mvcc: finished scheduled compaction at 12610 (took 390.989µs)
2021-04-16 00:47:00.964006 I | mvcc: store.index: compact 12676
2021-04-16 00:47:00.965473 I | mvcc: finished scheduled compaction at 12676 (took 988.482µs)
2021-04-16 00:52:00.972795 I | mvcc: store.index: compact 12743
2021-04-16 00:52:00.973707 I | mvcc: finished scheduled compaction at 12743 (took 427.152µs)
2021-04-16 00:57:00.978776 I | mvcc: store.index: compact 12809
2021-04-16 00:57:00.979898 I | mvcc: finished scheduled compaction at 12809 (took 565.588µs)
2021-04-16 01:02:00.984957 I | mvcc: store.index: compact 12875
2021-04-16 01:02:00.986933 I | mvcc: finished scheduled compaction at 12875 (took 1.280286ms)
2021-04-16 01:07:00.991610 I | mvcc: store.index: compact 12942
2021-04-16 01:07:00.992955 I | mvcc: finished scheduled compaction at 12942 (took 808.101µs)
2021-04-16 01:12:00.996629 I | mvcc: store.index: compact 13008
2021-04-16 01:12:00.997896 I | mvcc: finished scheduled compaction at 13008 (took 732.02µs)
2021-04-16 01:17:01.002328 I | mvcc: store.index: compact 13074
2021-04-16 01:17:01.003837 I | mvcc: finished scheduled compaction at 13074 (took 842.342µs)
2021-04-16 01:22:01.007377 I | mvcc: store.index: compact 13141
2021-04-16 01:22:01.008260 I | mvcc: finished scheduled compaction at 13141 (took 460.867µs)
2021-04-16 01:27:01.012310 I | mvcc: store.index: compact 13206
2021-04-16 01:27:01.013402 I | mvcc: finished scheduled compaction at 13206 (took 717.357µs)
2021-04-16 01:32:01.021887 I | mvcc: store.index: compact 13272
2021-04-16 01:32:01.022682 I | mvcc: finished scheduled compaction at 13272 (took 441.825µs)
==> kernel <==
01:35:56 up 10 days, 20:51, 0 users, load average: 0.07, 0.09, 0.09
Linux host-11-1-1-131 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 18.04.2 LTS"
==> kube-apiserver [eb97b0f35907] <==
I0415 09:02:00.579796 1 client.go:361] parsed scheme: "endpoint"
I0415 09:02:00.579843 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0415 09:02:00.588025 1 client.go:361] parsed scheme: "endpoint"
I0415 09:02:00.588048 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
W0415 09:02:00.723726 1 genericapiserver.go:409] Skipping API batch/v2alpha1 because it has no resources.
W0415 09:02:00.738484 1 genericapiserver.go:409] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
W0415 09:02:00.755799 1 genericapiserver.go:409] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0415 09:02:00.783811 1 genericapiserver.go:409] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0415 09:02:00.786941 1 genericapiserver.go:409] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0415 09:02:00.799423 1 genericapiserver.go:409] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0415 09:02:00.816471 1 genericapiserver.go:409] Skipping API apps/v1beta2 because it has no resources.
W0415 09:02:00.816501 1 genericapiserver.go:409] Skipping API apps/v1beta1 because it has no resources.
I0415 09:02:00.824537 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0415 09:02:00.824563 1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0415 09:02:00.825915 1 client.go:361] parsed scheme: "endpoint"
I0415 09:02:00.825945 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0415 09:02:00.838628 1 client.go:361] parsed scheme: "endpoint"
I0415 09:02:00.838658 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0415 09:02:02.546328 1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I0415 09:02:02.546496 1 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key
I0415 09:02:02.546639 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0415 09:02:02.546767 1 secure_serving.go:178] Serving securely on [::]:8443
I0415 09:02:02.546811 1 available_controller.go:387] Starting AvailableConditionController
I0415 09:02:02.546816 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0415 09:02:02.546832 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0415 09:02:02.546965 1 controller.go:81] Starting OpenAPI AggregationController
I0415 09:02:02.547131 1 apiservice_controller.go:94] Starting APIServiceRegistrationController
I0415 09:02:02.547158 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0415 09:02:02.547239 1 crd_finalizer.go:266] Starting CRDFinalizer
I0415 09:02:02.547404 1 controller.go:86] Starting OpenAPI controller
I0415 09:02:02.547434 1 customresource_discovery_controller.go:209] Starting DiscoveryController
I0415 09:02:02.547459 1 naming_controller.go:291] Starting NamingConditionController
I0415 09:02:02.547488 1 establishing_controller.go:76] Starting EstablishingController
I0415 09:02:02.547521 1 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController
I0415 09:02:02.547541 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0415 09:02:02.547816 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0415 09:02:02.547842 1 shared_informer.go:223] Waiting for caches to sync for cluster_authentication_trust_controller
I0415 09:02:02.548583 1 autoregister_controller.go:141] Starting autoregister controller
I0415 09:02:02.548608 1 cache.go:32] Waiting for caches to sync for autoregister controller
I0415 09:02:02.548886 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0415 09:02:02.548952 1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
E0415 09:02:02.550036 1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/11.1.1.131, ResourceVersion: 0, AdditionalErrorMsg:
I0415 09:02:02.550147 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0415 09:02:02.550171 1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister
I0415 09:02:02.647066 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0415 09:02:02.647336 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0415 09:02:02.648119 1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller
I0415 09:02:02.648811 1 cache.go:39] Caches are synced for autoregister controller
I0415 09:02:02.650392 1 shared_informer.go:230] Caches are synced for crd-autoregister
I0415 09:02:03.546407 1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0415 09:02:03.546721 1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0415 09:02:03.555102 1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
I0415 09:02:03.559810 1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
I0415 09:02:03.559849 1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
I0415 09:02:04.090711 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0415 09:02:04.125009 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0415 09:02:04.208203 1 lease.go:224] Resetting endpoints for master service "kubernetes" to [11.1.1.131]
I0415 09:02:04.209219 1 controller.go:606] quota admission added evaluator for: endpoints
I0415 09:02:04.212781 1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0415 09:02:12.587525 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
==> kube-controller-manager [fce37d85888b] <==
Flag --port has been deprecated, see --secure-port instead.
I0416 01:35:12.066264 1 serving.go:313] Generated self-signed cert in-memory
I0416 01:35:12.637519 1 controllermanager.go:161] Version: v1.18.13
I0416 01:35:12.638269 1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I0416 01:35:12.638363 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0416 01:35:12.638499 1 secure_serving.go:178] Serving securely on 127.0.0.1:10257
I0416 01:35:12.638570 1 tlsconfig.go:240] Starting DynamicServingCertificateController
W0416 01:35:12.828488 1 controllermanager.go:612] fetch api resource lists failed, use legacy client builder: Get https://control-plane.minikube.internal:8443/api/v1?timeout=32s: Gateway Timeout
F0416 01:35:23.401533 1 controllermanager.go:230] error building controller context: failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get https://control-plane.minikube.internal:8443/healthz?timeout=32s: Gateway Timeout
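This is the most telling log in the whole dump: the controller-manager dies because every request to https://control-plane.minikube.internal:8443 is answered with "Gateway Timeout". A 504 is what an HTTP proxy returns, not the apiserver (whose log above shows it serving normally on :8443), so a plausible hypothesis is that an HTTP(S) proxy in this environment is intercepting control-plane traffic. If that holds, exempting the control-plane endpoints before starting should clear it:

# Make sure control-plane traffic bypasses any corporate proxy
export NO_PROXY=localhost,127.0.0.1,11.1.1.131,control-plane.minikube.internal
export no_proxy=$NO_PROXY
minikube delete
minikube start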
==> kube-scheduler [103c95ab9d83] <==
E0416 01:32:11.816421 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: Get https://control-plane.minikube.internal:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: Gateway Timeout
E0416 01:32:15.717338 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: Get https://control-plane.minikube.internal:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: Gateway Timeout
E0416 01:32:21.494448 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?limit=500&resourceVersion=0: Gateway Timeout
E0416 01:32:21.728181 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: Gateway Timeout
E0416 01:32:23.437738 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0: Gateway Timeout
E0416 01:32:34.858128 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: Get https://control-plane.minikube.internal:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0: Gateway Timeout
E0416 01:32:44.590265 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0: Gateway Timeout
E0416 01:32:45.117173 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0: Gateway Timeout
E0416 01:32:48.202430 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: Gateway Timeout
E0416 01:32:52.701714 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: Get https://control-plane.minikube.internal:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0: Gateway Timeout
E0416 01:32:52.976920 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: Gateway Timeout
E0416 01:32:54.896591 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: Get https://control-plane.minikube.internal:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: Gateway Timeout
E0416 01:32:58.733466 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: Get https://control-plane.minikube.internal:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: Gateway Timeout
E0416 01:33:05.630010 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: Gateway Timeout
E0416 01:33:05.756079 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: Gateway Timeout
E0416 01:33:18.331864 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?limit=500&resourceVersion=0: Gateway Timeout
E0416 01:33:20.471465 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0: Gateway Timeout
E0416 01:33:22.163486 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0: Gateway Timeout
E0416 01:33:29.651164 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: Gateway Timeout
E0416 01:33:30.318806 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: Get https://control-plane.minikube.internal:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: Gateway Timeout
E0416 01:33:30.550003 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: Get https://control-plane.minikube.internal:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0: Gateway Timeout
E0416 01:33:35.171266 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0: Gateway Timeout
E0416 01:33:38.635291 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: Get https://control-plane.minikube.internal:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0: Gateway Timeout
E0416 01:33:38.999758 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: Gateway Timeout
E0416 01:33:44.384657 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: Gateway Timeout
E0416 01:33:46.108325 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: Get https://control-plane.minikube.internal:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: Gateway Timeout
E0416 01:33:51.940567 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?limit=500&resourceVersion=0: Gateway Timeout
E0416 01:34:01.820036 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: Gateway Timeout
E0416 01:34:02.553665 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0: Gateway Timeout
E0416 01:34:04.051548 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: Get https://control-plane.minikube.internal:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: Gateway Timeout
E0416 01:34:06.342989 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: Get https://control-plane.minikube.internal:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0: Gateway Timeout
E0416 01:34:14.979096 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: Gateway Timeout
E0416 01:34:18.288148 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: Gateway Timeout
E0416 01:34:20.147337 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0: Gateway Timeout
E0416 01:34:21.180980 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: Get https://control-plane.minikube.internal:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0: Gateway Timeout
E0416 01:34:28.232374 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: Get https://control-plane.minikube.internal:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: Gateway Timeout
E0416 01:34:34.435750 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0: Gateway Timeout
E0416 01:34:34.845270 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: Get https://control-plane.minikube.internal:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: Gateway Timeout
E0416 01:34:37.148546 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: Gateway Timeout
E0416 01:34:37.618732 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?limit=500&resourceVersion=0: Gateway Timeout
E0416 01:34:41.326178 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: Get https://control-plane.minikube.internal:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0: Gateway Timeout
E0416 01:34:43.959895 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0: Gateway Timeout
E0416 01:34:44.263886 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: Gateway Timeout
E0416 01:34:56.974487 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: Get https://control-plane.minikube.internal:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0: Gateway Timeout
E0416 01:35:03.414158 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: Gateway Timeout
E0416 01:35:05.369727 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: Get https://control-plane.minikube.internal:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: Gateway Timeout
E0416 01:35:05.456724 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0: Gateway Timeout
E0416 01:35:06.386853 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0: Gateway Timeout
E0416 01:35:07.504597 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: Gateway Timeout
E0416 01:35:26.340320 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: Get https://control-plane.minikube.internal:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: Gateway Timeout
E0416 01:35:28.312673 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: Gateway Timeout
E0416 01:35:30.432632 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?limit=500&resourceVersion=0: Gateway Timeout
E0416 01:35:31.458375 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: Gateway Timeout
E0416 01:35:33.227911 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0: Gateway Timeout
E0416 01:35:39.228240 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: Get https://control-plane.minikube.internal:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0: Gateway Timeout
E0416 01:35:39.270370 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0: Gateway Timeout
E0416 01:35:47.299064 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: Get https://control-plane.minikube.internal:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: Gateway Timeout
E0416 01:35:47.636904 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: Gateway Timeout
E0416 01:35:47.689357 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: Gateway Timeout
E0416 01:35:49.662720 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: Get https://control-plane.minikube.internal:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0: Gateway Timeout
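Note on the repeated failures above: a "Gateway Timeout" (HTTP 504) is generated by an intermediate gateway or HTTP proxy, not by kube-apiserver itself, so these reflector errors usually mean requests to control-plane.minikube.internal:8443 are being routed through a proxy that cannot reach the apiserver. A minimal check, assuming a proxy is configured in the host environment (the NO_PROXY entries below are illustrative; use the node IP from these logs, 11.1.1.131, and whatever addresses apply to your setup):

    # Is an HTTP proxy set in the environment the cluster inherits?
    env | grep -i proxy

    # If so, exempt the cluster's own addresses before recreating the cluster.
    export NO_PROXY=localhost,127.0.0.1,11.1.1.131,control-plane.minikube.internal
    minikube delete
    minikube start --kubernetes-version=v1.18.13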
==> kubelet <==
-- Logs begin at Sun 2021-04-04 01:40:52 UTC, end at Fri 2021-04-16 01:35:57 UTC. --
Apr 16 01:30:03 host-11-1-1-131 kubelet[27411]: I0416 01:30:03.464603 27411 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 1c2cd044a1e15cd20d1c944b1b7b74fa95d67dfb33295c165812a8e7d68ca968
Apr 16 01:30:03 host-11-1-1-131 kubelet[27411]: I0416 01:30:03.467653 27411 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 5175a141dcd57d969df30790cc8fe790c9de02113e56a55ff1021de6850f5bc3
Apr 16 01:30:03 host-11-1-1-131 kubelet[27411]: E0416 01:30:03.468449 27411 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
Apr 16 01:30:05 host-11-1-1-131 kubelet[27411]: I0416 01:30:05.072546 27411 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 5175a141dcd57d969df30790cc8fe790c9de02113e56a55ff1021de6850f5bc3
Apr 16 01:30:05 host-11-1-1-131 kubelet[27411]: E0416 01:30:05.073743 27411 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
Apr 16 01:30:19 host-11-1-1-131 kubelet[27411]: I0416 01:30:19.467065 27411 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 5175a141dcd57d969df30790cc8fe790c9de02113e56a55ff1021de6850f5bc3
Apr 16 01:30:19 host-11-1-1-131 kubelet[27411]: E0416 01:30:19.468348 27411 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
Apr 16 01:30:31 host-11-1-1-131 kubelet[27411]: I0416 01:30:31.469433 27411 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 5175a141dcd57d969df30790cc8fe790c9de02113e56a55ff1021de6850f5bc3
Apr 16 01:30:31 host-11-1-1-131 kubelet[27411]: E0416 01:30:31.472101 27411 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
Apr 16 01:30:45 host-11-1-1-131 kubelet[27411]: I0416 01:30:45.466937 27411 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 5175a141dcd57d969df30790cc8fe790c9de02113e56a55ff1021de6850f5bc3
Apr 16 01:30:45 host-11-1-1-131 kubelet[27411]: E0416 01:30:45.469799 27411 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
Apr 16 01:30:59 host-11-1-1-131 kubelet[27411]: E0416 01:30:59.604248 27411 certificate_manager.go:451] certificate request was not signed: timed out waiting for the condition
Apr 16 01:31:00 host-11-1-1-131 kubelet[27411]: I0416 01:31:00.466821 27411 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 5175a141dcd57d969df30790cc8fe790c9de02113e56a55ff1021de6850f5bc3
Apr 16 01:31:00 host-11-1-1-131 kubelet[27411]: E0416 01:31:00.467731 27411 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
Apr 16 01:31:13 host-11-1-1-131 kubelet[27411]: I0416 01:31:13.466992 27411 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 5175a141dcd57d969df30790cc8fe790c9de02113e56a55ff1021de6850f5bc3
Apr 16 01:31:13 host-11-1-1-131 kubelet[27411]: E0416 01:31:13.468239 27411 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
Apr 16 01:31:27 host-11-1-1-131 kubelet[27411]: I0416 01:31:27.467055 27411 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 5175a141dcd57d969df30790cc8fe790c9de02113e56a55ff1021de6850f5bc3
Apr 16 01:31:27 host-11-1-1-131 kubelet[27411]: E0416 01:31:27.468649 27411 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
Apr 16 01:31:40 host-11-1-1-131 kubelet[27411]: I0416 01:31:40.466740 27411 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 5175a141dcd57d969df30790cc8fe790c9de02113e56a55ff1021de6850f5bc3
Apr 16 01:31:40 host-11-1-1-131 kubelet[27411]: E0416 01:31:40.468032 27411 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
Apr 16 01:31:55 host-11-1-1-131 kubelet[27411]: I0416 01:31:55.466935 27411 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 5175a141dcd57d969df30790cc8fe790c9de02113e56a55ff1021de6850f5bc3
Apr 16 01:31:55 host-11-1-1-131 kubelet[27411]: E0416 01:31:55.468184 27411 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
Apr 16 01:32:07 host-11-1-1-131 kubelet[27411]: I0416 01:32:07.467030 27411 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 5175a141dcd57d969df30790cc8fe790c9de02113e56a55ff1021de6850f5bc3
Apr 16 01:32:07 host-11-1-1-131 kubelet[27411]: E0416 01:32:07.468252 27411 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
Apr 16 01:32:20 host-11-1-1-131 kubelet[27411]: I0416 01:32:20.466889 27411 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 5175a141dcd57d969df30790cc8fe790c9de02113e56a55ff1021de6850f5bc3
Apr 16 01:32:20 host-11-1-1-131 kubelet[27411]: E0416 01:32:20.468489 27411 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
Apr 16 01:32:32 host-11-1-1-131 kubelet[27411]: I0416 01:32:32.466826 27411 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 5175a141dcd57d969df30790cc8fe790c9de02113e56a55ff1021de6850f5bc3
Apr 16 01:32:32 host-11-1-1-131 kubelet[27411]: E0416 01:32:32.468067 27411 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
Apr 16 01:32:45 host-11-1-1-131 kubelet[27411]: I0416 01:32:45.466984 27411 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 5175a141dcd57d969df30790cc8fe790c9de02113e56a55ff1021de6850f5bc3
Apr 16 01:32:45 host-11-1-1-131 kubelet[27411]: E0416 01:32:45.468260 27411 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
Apr 16 01:33:00 host-11-1-1-131 kubelet[27411]: I0416 01:33:00.466961 27411 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 5175a141dcd57d969df30790cc8fe790c9de02113e56a55ff1021de6850f5bc3
Apr 16 01:33:00 host-11-1-1-131 kubelet[27411]: E0416 01:33:00.468282 27411 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
Apr 16 01:33:14 host-11-1-1-131 kubelet[27411]: I0416 01:33:14.466722 27411 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 5175a141dcd57d969df30790cc8fe790c9de02113e56a55ff1021de6850f5bc3
Apr 16 01:33:14 host-11-1-1-131 kubelet[27411]: E0416 01:33:14.468007 27411 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
Apr 16 01:33:26 host-11-1-1-131 kubelet[27411]: I0416 01:33:26.466925 27411 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 5175a141dcd57d969df30790cc8fe790c9de02113e56a55ff1021de6850f5bc3
Apr 16 01:33:26 host-11-1-1-131 kubelet[27411]: E0416 01:33:26.468267 27411 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
Apr 16 01:33:39 host-11-1-1-131 kubelet[27411]: I0416 01:33:39.466610 27411 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 5175a141dcd57d969df30790cc8fe790c9de02113e56a55ff1021de6850f5bc3
Apr 16 01:33:39 host-11-1-1-131 kubelet[27411]: E0416 01:33:39.467236 27411 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
Apr 16 01:33:53 host-11-1-1-131 kubelet[27411]: I0416 01:33:53.469381 27411 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 5175a141dcd57d969df30790cc8fe790c9de02113e56a55ff1021de6850f5bc3
Apr 16 01:33:53 host-11-1-1-131 kubelet[27411]: E0416 01:33:53.471470 27411 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
Apr 16 01:34:08 host-11-1-1-131 kubelet[27411]: I0416 01:34:08.466915 27411 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 5175a141dcd57d969df30790cc8fe790c9de02113e56a55ff1021de6850f5bc3
Apr 16 01:34:08 host-11-1-1-131 kubelet[27411]: E0416 01:34:08.468185 27411 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
Apr 16 01:34:19 host-11-1-1-131 kubelet[27411]: I0416 01:34:19.467138 27411 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 5175a141dcd57d969df30790cc8fe790c9de02113e56a55ff1021de6850f5bc3
Apr 16 01:34:19 host-11-1-1-131 kubelet[27411]: E0416 01:34:19.468503 27411 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
Apr 16 01:34:31 host-11-1-1-131 kubelet[27411]: I0416 01:34:31.466622 27411 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 5175a141dcd57d969df30790cc8fe790c9de02113e56a55ff1021de6850f5bc3
Apr 16 01:34:31 host-11-1-1-131 kubelet[27411]: E0416 01:34:31.467698 27411 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
Apr 16 01:34:44 host-11-1-1-131 kubelet[27411]: I0416 01:34:44.466916 27411 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 5175a141dcd57d969df30790cc8fe790c9de02113e56a55ff1021de6850f5bc3
Apr 16 01:34:44 host-11-1-1-131 kubelet[27411]: E0416 01:34:44.468353 27411 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
Apr 16 01:34:58 host-11-1-1-131 kubelet[27411]: I0416 01:34:58.466767 27411 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 5175a141dcd57d969df30790cc8fe790c9de02113e56a55ff1021de6850f5bc3
Apr 16 01:34:58 host-11-1-1-131 kubelet[27411]: E0416 01:34:58.468148 27411 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
Apr 16 01:35:11 host-11-1-1-131 kubelet[27411]: I0416 01:35:11.466678 27411 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 5175a141dcd57d969df30790cc8fe790c9de02113e56a55ff1021de6850f5bc3
Apr 16 01:35:24 host-11-1-1-131 kubelet[27411]: I0416 01:35:24.552332 27411 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 5175a141dcd57d969df30790cc8fe790c9de02113e56a55ff1021de6850f5bc3
Apr 16 01:35:24 host-11-1-1-131 kubelet[27411]: I0416 01:35:24.552871 27411 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: fce37d85888b8d81733a09505c1a469e44958d02a2498517cd33d4eeb875266a
Apr 16 01:35:24 host-11-1-1-131 kubelet[27411]: E0416 01:35:24.553851 27411 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
Apr 16 01:35:25 host-11-1-1-131 kubelet[27411]: I0416 01:35:25.561498 27411 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: fce37d85888b8d81733a09505c1a469e44958d02a2498517cd33d4eeb875266a
Apr 16 01:35:25 host-11-1-1-131 kubelet[27411]: E0416 01:35:25.561999 27411 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
Apr 16 01:35:36 host-11-1-1-131 kubelet[27411]: I0416 01:35:36.466336 27411 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: fce37d85888b8d81733a09505c1a469e44958d02a2498517cd33d4eeb875266a
Apr 16 01:35:36 host-11-1-1-131 kubelet[27411]: E0416 01:35:36.466933 27411 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
Apr 16 01:35:50 host-11-1-1-131 kubelet[27411]: I0416 01:35:50.467073 27411 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: fce37d85888b8d81733a09505c1a469e44958d02a2498517cd33d4eeb875266a
Apr 16 01:35:50 host-11-1-1-131 kubelet[27411]: E0416 01:35:50.469818 27411 pod_workers.go:191] Error syncing pod fee973aa24e6d51c26e210ab99143c53 ("kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-host-11-1-1-131_kube-system(fee973aa24e6d51c26e210ab99143c53)"
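The kubelet log above only reports the CrashLoopBackOff back-off for kube-controller-manager; the actual failure reason has to come from the exited container itself. A quick way to pull it, assuming the Docker runtime these container IDs suggest (the ID below is the one from the log; substitute your own):

    # Find the exited controller-manager container and read its last output.
    sudo docker ps -a | grep kube-controller-manager
    sudo docker logs fce37d85888b8d81733a09505c1a469e44958d02a2498517cd33d4eeb875266a

    # Or let minikube filter its logs down to known problem patterns.
    minikube logs --problems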