
Fix minikube start && stop && start in Cloud Shell: ubepods.slice already exists." #12232

Closed
spowelljr opened this issue Aug 10, 2021 · 4 comments · Fixed by #12237
Labels
kind/bug Categorizes issue or PR as related to a bug. priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release.

Comments

@spowelljr
Member

spowelljr commented Aug 10, 2021

Error Message

Aug 16 21:40:19 minikube kubelet[8019]: E0816 21:40:19.352054 8019 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists." 

Steps to Reproduce

In Cloud Shell, run:

minikube start
minikube stop
minikube start

The second start fails with the error above.

File: lastStart.txt

@spowelljr spowelljr added kind/bug Categorizes issue or PR as related to a bug. priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. labels Aug 10, 2021
@medyagh
Member

medyagh commented Aug 11, 2021

Based on kubernetes/kubernetes#43704, I found kubelet options that, if set, fix the issue:

minikube start --extra-config=kubelet.cgroups-per-qos=false --extra-config=kubelet.enforce-node-allocatable=""

The kubelet documentation says:

--cgroups-per-qos     Default: true
  Enable creation of QoS cgroup hierarchy, if true top level QoS and pod cgroups are created. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)

--enforce-node-allocatable strings     Default: pods
  A comma separated list of levels of node allocatable enforcement to be enforced by kubelet. Acceptable options are none, pods, system-reserved, and kube-reserved. If the latter two options are specified, --system-reserved-cgroup and --kube-reserved-cgroup must also be set, respectively. If none is specified, no additional options should be set. See https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/ for more details. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)


This applies to any case where minikube is in reality running inside a "container" without root-level access to the cgroups: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/
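The workaround above can be sketched as a small helper. This is only an illustration: `suggest_start_cmd` is a hypothetical function name, and the flag values are taken verbatim from the command in this comment; in practice one might gate on whether the cgroup root is writable (e.g. `[ -w /sys/fs/cgroup ]`).

```shell
# suggest_start_cmd: hypothetical helper. Given "1" when the environment
# has no root-level cgroup access (e.g. Cloud Shell), print the minikube
# start command with the kubelet QoS-cgroup workaround applied; otherwise
# print a plain start command.
suggest_start_cmd() {
  if [ "$1" = "1" ]; then
    echo 'minikube start --extra-config=kubelet.cgroups-per-qos=false --extra-config=kubelet.enforce-node-allocatable=""'
  else
    echo 'minikube start'
  fi
}

# In a real check, the argument could come from: [ ! -w /sys/fs/cgroup ]
suggest_start_cmd 1
```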

@medyagh
Member

medyagh commented Aug 11, 2021

We should add this as the solution message whenever minikube fails this way.
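A minimal sketch of that behavior: scan kubelet log text for the failure signature and, if found, print the workaround from the earlier comment. `print_solution_hint` is a hypothetical helper name for illustration; in minikube itself this would hook into its existing log-scanning machinery rather than a shell pipeline.

```shell
# print_solution_hint: read kubelet log text on stdin; if the
# "kubepods.slice already exists" failure is present, print the
# suggested workaround command.
print_solution_hint() {
  if grep -q 'Unit kubepods.slice already exists'; then
    echo 'Suggested fix: minikube start --extra-config=kubelet.cgroups-per-qos=false --extra-config=kubelet.enforce-node-allocatable=""'
  fi
}

# Example with a sample log line; on a live node this might be:
#   journalctl -u kubelet | print_solution_hint
printf '%s\n' 'E0816 21:40:19 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."' | print_solution_hint
```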

@medyagh
Member

medyagh commented Aug 16, 2021

For the record, here is the full log, including the exit:

$ minikube start
* minikube v1.22.0 on Debian 10.10 (amd64)
  - MINIKUBE_FORCE_SYSTEMD=true
  - MINIKUBE_HOME=/google/minikube
  - MINIKUBE_WANTUPDATENOTIFICATION=false
* Using the docker driver based on existing profile
* Starting control plane node minikube in cluster minikube
* Pulling base image ...
* Restarting existing docker container for "minikube" ...
* Preparing Kubernetes v1.21.2 on Docker 20.10.7 ...
X Problems detected in kubelet:
  Aug 16 21:40:15 minikube kubelet[7575]: E0816 21:40:15.634392    7575 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
  Aug 16 21:40:17 minikube kubelet[7799]: E0816 21:40:17.545005    7799 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
  Aug 16 21:40:19 minikube kubelet[8019]: E0816 21:40:19.352054    8019 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
  Aug 16 21:40:21 minikube kubelet[8239]: E0816 21:40:21.090448    8239 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
  Aug 16 21:40:22 minikube kubelet[8460]: E0816 21:40:22.914457    8460 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
X Problems detected in kubelet:
  Aug 16 21:40:26 minikube kubelet[9056]: E0816 21:40:26.597584    9056 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
  Aug 16 21:40:28 minikube kubelet[9268]: E0816 21:40:28.347722    9268 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
  Aug 16 21:40:30 minikube kubelet[9481]: E0816 21:40:30.084690    9481 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
  Aug 16 21:40:31 minikube kubelet[9695]: E0816 21:40:31.879298    9695 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
  Aug 16 21:40:34 minikube kubelet[9906]: E0816 21:40:34.023765    9906 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
X Problems detected in kubelet:
  Aug 16 21:40:37 minikube kubelet[10529]: E0816 21:40:37.580415   10529 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
  Aug 16 21:40:39 minikube kubelet[10737]: E0816 21:40:39.308429   10737 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
  Aug 16 21:40:41 minikube kubelet[10947]: E0816 21:40:41.033949   10947 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
  Aug 16 21:40:42 minikube kubelet[11164]: E0816 21:40:42.895045   11164 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
  Aug 16 21:40:44 minikube kubelet[11375]: E0816 21:40:44.796797   11375 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
X Problems detected in kubelet:
  Aug 16 21:40:50 minikube kubelet[12187]: E0816 21:40:50.610401   12187 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
  Aug 16 21:40:52 minikube kubelet[12407]: E0816 21:40:52.584045   12407 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
  Aug 16 21:40:54 minikube kubelet[12616]: E0816 21:40:54.318563   12616 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
  Aug 16 21:40:56 minikube kubelet[12832]: E0816 21:40:56.003337   12832 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
  Aug 16 21:40:57 minikube kubelet[13045]: E0816 21:40:57.944055   13045 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
X Problems detected in kubelet:
  Aug 16 21:41:03 minikube kubelet[13863]: E0816 21:41:03.554090   13863 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
  Aug 16 21:41:05 minikube kubelet[14071]: E0816 21:41:05.336456   14071 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
  Aug 16 21:41:07 minikube kubelet[14277]: E0816 21:41:07.072371   14277 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
  Aug 16 21:41:08 minikube kubelet[14490]: E0816 21:41:08.802950   14490 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
  Aug 16 21:41:10 minikube kubelet[14697]: E0816 21:41:10.720231   14697 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
X Problems detected in kubelet:
  Aug 16 21:41:14 minikube kubelet[15303]: E0816 21:41:14.311016   15303 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
  Aug 16 21:41:16 minikube kubelet[15514]: E0816 21:41:16.113166   15514 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
  Aug 16 21:41:18 minikube kubelet[15733]: E0816 21:41:18.043799   15733 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
  Aug 16 21:41:19 minikube kubelet[15951]: E0816 21:41:19.821076   15951 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
  Aug 16 21:41:21 minikube kubelet[16174]: E0816 21:41:21.624864   16174 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
X Problems detected in kubelet:
  Aug 16 21:41:25 minikube kubelet[16778]: E0816 21:41:25.619189   16778 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
  Aug 16 21:41:27 minikube kubelet[16990]: E0816 21:41:27.621756   16990 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
  Aug 16 21:41:29 minikube kubelet[17202]: E0816 21:41:29.548271   17202 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
  Aug 16 21:41:31 minikube kubelet[17412]: E0816 21:41:31.374234   17412 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
  Aug 16 21:41:33 minikube kubelet[17628]: E0816 21:41:33.402311   17628 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
X Problems detected in kubelet:
  Aug 16 21:41:37 minikube kubelet[18226]: E0816 21:41:37.394395   18226 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
  Aug 16 21:41:39 minikube kubelet[18440]: E0816 21:41:39.304983   18440 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
  Aug 16 21:41:41 minikube kubelet[18650]: E0816 21:41:41.015097   18650 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
  Aug 16 21:41:42 minikube kubelet[18868]: E0816 21:41:42.809970   18868 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
  Aug 16 21:41:44 minikube kubelet[19087]: E0816 21:41:44.594972   19087 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
X Problems detected in kubelet:
  Aug 16 21:41:50 minikube kubelet[19919]: E0816 21:41:50.030856   19919 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
  Aug 16 21:41:51 minikube kubelet[20127]: E0816 21:41:51.823976   20127 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
  Aug 16 21:41:53 minikube kubelet[20337]: E0816 21:41:53.604900   20337 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
  Aug 16 21:41:55 minikube kubelet[20550]: E0816 21:41:55.347425   20550 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
  Aug 16 21:41:57 minikube kubelet[20762]: E0816 21:41:57.193839   20762 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
X Problems detected in kubelet:
  Aug 16 21:42:02 minikube kubelet[21572]: E0816 21:42:02.804725   21572 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
  Aug 16 21:42:04 minikube kubelet[21785]: E0816 21:42:04.551314   21785 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
  Aug 16 21:42:06 minikube kubelet[22000]: E0816 21:42:06.312781   22000 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
  Aug 16 21:42:08 minikube kubelet[22217]: E0816 21:42:08.093020   22217 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
  Aug 16 21:42:09 minikube kubelet[22435]: E0816 21:42:09.952011   22435 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
X Problems detected in kubelet:
  Aug 16 21:42:13 minikube kubelet[23049]: E0816 21:42:13.889007   23049 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
  Aug 16 21:42:15 minikube kubelet[23260]: E0816 21:42:15.807706   23260 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
  Aug 16 21:42:17 minikube kubelet[23478]: E0816 21:42:17.514684   23478 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
  Aug 16 21:42:19 minikube kubelet[23691]: E0816 21:42:19.342882   23691 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
  Aug 16 21:42:21 minikube kubelet[23906]: E0816 21:42:21.168214   23906 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
X Problems detected in kubelet:
  Aug 16 21:42:26 minikube kubelet[24733]: E0816 21:42:26.801972   24733 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
  Aug 16 21:42:28 minikube kubelet[24946]: E0816 21:42:28.595132   24946 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
  Aug 16 21:42:30 minikube kubelet[25162]: E0816 21:42:30.340822   25162 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
  Aug 16 21:42:32 minikube kubelet[25376]: E0816 21:42:32.032324   25376 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
  Aug 16 21:42:33 minikube kubelet[25588]: E0816 21:42:33.985209   25588 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
X Problems detected in kubelet:
  Aug 16 21:42:39 minikube kubelet[26411]: E0816 21:42:39.362986   26411 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
  Aug 16 21:42:41 minikube kubelet[26624]: E0816 21:42:41.332561   26624 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
  Aug 16 21:42:43 minikube kubelet[26843]: E0816 21:42:43.095227   26843 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
  Aug 16 21:42:44 minikube kubelet[27054]: E0816 21:42:44.843867   27054 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
  Aug 16 21:42:46 minikube kubelet[27266]: E0816 21:42:46.694191   27266 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
X Problems detected in kubelet:
  Aug 16 21:42:52 minikube kubelet[28092]: E0816 21:42:52.086326   28092 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
  Aug 16 21:42:53 minikube kubelet[28304]: E0816 21:42:53.814098   28304 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
  Aug 16 21:42:55 minikube kubelet[28514]: E0816 21:42:55.533897   28514 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
  Aug 16 21:42:57 minikube kubelet[28726]: E0816 21:42:57.357927   28726 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
  Aug 16 21:42:59 minikube kubelet[29072]: E0816 21:42:59.368307   29072 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
X Problems detected in kubelet:
  Aug 16 21:43:03 minikube kubelet[29555]: E0816 21:43:03.107441   29555 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
  Aug 16 21:43:04 minikube kubelet[29766]: E0816 21:43:04.836759   29766 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
  Aug 16 21:43:06 minikube kubelet[29975]: E0816 21:43:06.631127   29975 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
  Aug 16 21:43:08 minikube kubelet[30184]: E0816 21:43:08.606798   30184 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
  Aug 16 21:43:10 minikube kubelet[30394]: E0816 21:43:10.392855   30394 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
  - Generating certificates and keys ...
  - Booting up control plane ...
! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.21.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
    Unfortunately, an error has occurred:
            timed out waiting for the condition

    This error is likely caused by:
            - The kubelet is not running
            - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

    If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
            - 'systemctl status kubelet'
            - 'journalctl -xeu kubelet'

    Additionally, a control plane component may have crashed or exited when started by the container runtime.
    To troubleshoot, list all containers using your preferred container runtimes CLI.

    Here is one example how you may list all Kubernetes containers running in docker:
            - 'docker ps -a | grep kube | grep -v pause'
            Once you have found the failing container, you can inspect its logs with:
            - 'docker logs CONTAINERID'

stderr:
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

  • Generating certificates and keys ...
  • Booting up control plane ...

X Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.21.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.

    Unfortunately, an error has occurred:
            timed out waiting for the condition

    This error is likely caused by:
            - The kubelet is not running
            - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

    If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
            - 'systemctl status kubelet'
            - 'journalctl -xeu kubelet'

    Additionally, a control plane component may have crashed or exited when started by the container runtime.
    To troubleshoot, list all containers using your preferred container runtimes CLI.

    Here is one example how you may list all Kubernetes containers running in docker:
            - 'docker ps -a | grep kube | grep -v pause'
            Once you have found the failing container, you can inspect its logs with:
            - 'docker logs CONTAINERID'

stderr:
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

* If the above advice does not help, please let us know:
  https://github.com/kubernetes/minikube/issues/new/choose
* Please attach the following file to the GitHub issue:
  - /google/minikube/.minikube/logs/lastStart.txt
X Problems detected in kubelet:
Aug 16 21:47:12 minikube kubelet[59077]: E0816 21:47:12.785731 59077 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
Aug 16 21:47:14 minikube kubelet[59290]: E0816 21:47:14.505064 59290 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."
Aug 16 21:47:16 minikube kubelet[59504]: E0816 21:47:16.351240 59504 kubelet.go:1384] "Failed to start ContainerManager" err="Unit kubepods.slice already exists."

X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.21.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.

    Unfortunately, an error has occurred:
            timed out waiting for the condition

    This error is likely caused by:
            - The kubelet is not running
            - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

    If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
            - 'systemctl status kubelet'
            - 'journalctl -xeu kubelet'

    Additionally, a control plane component may have crashed or exited when started by the container runtime.
    To troubleshoot, list all containers using your preferred container runtimes CLI.

    Here is one example how you may list all Kubernetes containers running in docker:
            - 'docker ps -a | grep kube | grep -v pause'
            Once you have found the failing container, you can inspect its logs with:
            - 'docker logs CONTAINERID'

stderr:
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

@medyagh
Member

medyagh commented Aug 16, 2021

I see upstream issues that could be in the same problem area:

kubernetes/kubernetes#104280
kubernetes/kubernetes#102250

I tried Kubernetes v1.20.0 to see if that would fix Cloud Shell, but it introduced a different problem:

`Failed to start ContainerManager" err="failed to initialize top level QOS containers: root container [kubepods] doesn't exist`

So while they could be in the same area of issues, they don't seem to be the same.
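For anyone retracing that version test: pinning the cluster's Kubernetes version uses minikube's standard `--kubernetes-version` flag. A sketch that only builds and echoes the command (starting a cluster is left to the reader):

```shell
# Build the start command used to test whether an older Kubernetes
# version avoids the "kubepods.slice already exists" failure.
K8S_VERSION="v1.20.0"
START_CMD="minikube start --kubernetes-version=${K8S_VERSION}"
echo "${START_CMD}"
```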

@medyagh medyagh changed the title Cannot minikube start && stop && start in Cloud Shell Fix minikube start && stop && start in Cloud Shell: ubepods.slice already exists." Aug 16, 2021