
none on Fedora 30: failed to get the kubelet's cgroup: cpu and memory cgroup hierarchy not unified #5127


Description

@lehh

The exact command to reproduce the issue:
After installing minikube, I ran:
sudo minikube config set vm-driver none
sudo minikube start
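For reference, the same configuration can be passed inline on a single invocation (minikube v1.3.x flag syntax):

sudo minikube start --vm-driver=none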

The full output of the command that failed
😄  minikube v1.3.1 on Fedora 30
🤹  Running on localhost (CPUs=6, Memory=7955MB, Disk=25070MB) ...
ℹ️   OS release is Fedora 30 (Workstation Edition)
🐳  Preparing Kubernetes v1.15.2 on Docker 1.13.1 ...
    ▪ kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
🚜  Pulling images ...
🚀  Launching Kubernetes ... 

💣 Error starting cluster: cmd failed: sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--data-minikube,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap

: running command: sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--data-minikube,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap
output: [init] Using Kubernetes version: v1.15.2
[preflight] Running pre-flight checks
[WARNING Firewalld]: firewalld is active, please ensure ports [8443 10250] are open or your cluster may not function correctly
[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING FileExisting-socat]: socat not found in system path
[WARNING Hostname]: hostname "minikube" could not be reached
[WARNING Hostname]: hostname "minikube": lookup minikube on 192.168.1.1:53: no such host
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/var/lib/minikube/certs/"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [minikube localhost] and IPs [192.168.1.15 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [minikube localhost] and IPs [192.168.1.15 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.

Unfortunately, an error has occurred:
timed out waiting for the condition

This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
: running command: sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--data-minikube,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap
.: exit status 1

😿 Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
👉 https://github.com/kubernetes/minikube/issues/new/choose
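For context, each preflight warning above suggests its own remediation. A sketch of the corresponding Fedora commands follows (package, service, and port names are taken from the warnings themselves; none of this was run as part of the original report):

sudo systemctl enable docker.service     # Service-Docker warning
sudo swapoff -a                          # Swap warning (per-boot; also remove the swap entry from /etc/fstab to persist)
sudo dnf install socat                   # FileExisting-socat warning
sudo systemctl enable kubelet.service    # Service-Kubelet warning
sudo firewall-cmd --permanent --add-port=8443/tcp --add-port=10250/tcp   # Firewalld warning
sudo firewall-cmd --reload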

The output of the "minikube logs" command
Aug 18 19:09:41 Fedora dockerd-current[2930]: time="2019-08-18T19:09:41.212979339-03:00" level=warning msg="failed to retrieve docker-runc version: unknown output format: runc version 1.0.0-rc2\nspec: 1.0.0-rc2-dev\n"
Aug 18 19:09:41 Fedora dockerd-current[2930]: time="2019-08-18T19:09:41.214861674-03:00" level=warning msg="failed to retrieve docker-init version: unknown output format: tini version 0.18.0\n"
Aug 18 19:09:41 Fedora dockerd-current[2930]: time="2019-08-18T19:09:41.308356033-03:00" level=warning msg="failed to retrieve docker-runc version: unknown output format: runc version 1.0.0-rc2\nspec: 1.0.0-rc2-dev\n"
Aug 18 19:09:41 Fedora dockerd-current[2930]: time="2019-08-18T19:09:41.308980547-03:00" level=warning msg="failed to retrieve docker-init version: unknown output format: tini version 0.18.0\n"
Aug 18 19:09:41 Fedora dockerd-current[2930]: time="2019-08-18T19:09:41.340250132-03:00" level=warning msg="failed to retrieve docker-runc version: unknown output format: runc version 1.0.0-rc2\nspec: 1.0.0-rc2-dev\n"
Aug 18 19:09:41 Fedora dockerd-current[2930]: time="2019-08-18T19:09:41.340833984-03:00" level=warning msg="failed to retrieve docker-init version: unknown output format: tini version 0.18.0\n"
Aug 18 19:09:41 Fedora dockerd-current[2930]: time="2019-08-18T19:09:41.497722794-03:00" level=error msg="Handler for GET /v1.26/containers/k8s.gcr.io/kube-apiserver:v1.15.2/json returned error: No such container: k8s.gcr.io/kube-apiserver:v1.15.2"
Aug 18 19:09:41 Fedora dockerd-current[2930]: time="2019-08-18T19:09:41.498051114-03:00" level=error msg="Handler for GET /v1.26/containers/k8s.gcr.io/kube-apiserver:v1.15.2/json returned error: No such container: k8s.gcr.io/kube-apiserver:v1.15.2"
Aug 18 19:09:41 Fedora dockerd-current[2930]: time="2019-08-18T19:09:41.524744388-03:00" level=error msg="Handler for GET /v1.26/containers/k8s.gcr.io/kube-controller-manager:v1.15.2/json returned error: No such container: k8s.gcr.io/kube-controller-manager:v1.15.2"
Aug 18 19:09:41 Fedora dockerd-current[2930]: time="2019-08-18T19:09:41.525018153-03:00" level=error msg="Handler for GET /v1.26/containers/k8s.gcr.io/kube-controller-manager:v1.15.2/json returned error: No such container: k8s.gcr.io/kube-controller-manager:v1.15.2"
Aug 18 19:09:41 Fedora dockerd-current[2930]: time="2019-08-18T19:09:41.549290068-03:00" level=error msg="Handler for GET /v1.26/containers/k8s.gcr.io/kube-scheduler:v1.15.2/json returned error: No such container: k8s.gcr.io/kube-scheduler:v1.15.2"
Aug 18 19:09:41 Fedora dockerd-current[2930]: time="2019-08-18T19:09:41.549570271-03:00" level=error msg="Handler for GET /v1.26/containers/k8s.gcr.io/kube-scheduler:v1.15.2/json returned error: No such container: k8s.gcr.io/kube-scheduler:v1.15.2"
Aug 18 19:09:41 Fedora dockerd-current[2930]: time="2019-08-18T19:09:41.571476664-03:00" level=error msg="Handler for GET /v1.26/containers/k8s.gcr.io/kube-proxy:v1.15.2/json returned error: No such container: k8s.gcr.io/kube-proxy:v1.15.2"
Aug 18 19:09:41 Fedora dockerd-current[2930]: time="2019-08-18T19:09:41.571820593-03:00" level=error msg="Handler for GET /v1.26/containers/k8s.gcr.io/kube-proxy:v1.15.2/json returned error: No such container: k8s.gcr.io/kube-proxy:v1.15.2"
Aug 18 19:09:41 Fedora dockerd-current[2930]: time="2019-08-18T19:09:41.595521785-03:00" level=error msg="Handler for GET /v1.26/containers/k8s.gcr.io/pause:3.1/json returned error: No such container: k8s.gcr.io/pause:3.1"
Aug 18 19:09:41 Fedora dockerd-current[2930]: time="2019-08-18T19:09:41.595836927-03:00" level=error msg="Handler for GET /v1.26/containers/k8s.gcr.io/pause:3.1/json returned error: No such container: k8s.gcr.io/pause:3.1"
Aug 18 19:09:41 Fedora dockerd-current[2930]: time="2019-08-18T19:09:41.618804200-03:00" level=error msg="Handler for GET /v1.26/containers/k8s.gcr.io/etcd:3.3.10/json returned error: No such container: k8s.gcr.io/etcd:3.3.10"
Aug 18 19:09:41 Fedora dockerd-current[2930]: time="2019-08-18T19:09:41.619174668-03:00" level=error msg="Handler for GET /v1.26/containers/k8s.gcr.io/etcd:3.3.10/json returned error: No such container: k8s.gcr.io/etcd:3.3.10"
Aug 18 19:09:41 Fedora dockerd-current[2930]: time="2019-08-18T19:09:41.641446281-03:00" level=error msg="Handler for GET /v1.26/containers/k8s.gcr.io/coredns:1.3.1/json returned error: No such container: k8s.gcr.io/coredns:1.3.1"
Aug 18 19:09:41 Fedora dockerd-current[2930]: time="2019-08-18T19:09:41.641819171-03:00" level=error msg="Handler for GET /v1.26/containers/k8s.gcr.io/coredns:1.3.1/json returned error: No such container: k8s.gcr.io/coredns:1.3.1"
Aug 18 19:09:41 Fedora dockerd-current[2930]: time="2019-08-18T19:09:41.713819391-03:00" level=warning msg="failed to retrieve docker-runc version: unknown output format: runc version 1.0.0-rc2\nspec: 1.0.0-rc2-dev\n"
Aug 18 19:09:41 Fedora dockerd-current[2930]: time="2019-08-18T19:09:41.714342708-03:00" level=warning msg="failed to retrieve docker-init version: unknown output format: tini version 0.18.0\n"

==> container status <==
sudo: crictl: command not found
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

==> dmesg <==
[ +0.000009] snd_hda_intel 0000:01:00.1: spurious response 0x0:0x0, last cmd=0x378901
[ +7.628119] radeon_dp_aux_transfer_native: 284 callbacks suppressed
[Aug18 16:14] radeon_dp_aux_transfer_native: 32 callbacks suppressed
[ +38.971214] snd_hdac_bus_update_rirb: 450 callbacks suppressed
[ +0.000007] snd_hda_intel 0000:01:00.1: spurious response 0x0:0x0, last cmd=0x770740
[ +0.000005] snd_hda_intel 0000:01:00.1: spurious response 0x200:0x0, last cmd=0xb70740
[ +0.000004] snd_hda_intel 0000:01:00.1: spurious response 0x0:0x0, last cmd=0x377200
[ +0.000014] snd_hda_intel 0000:01:00.1: spurious response 0x0:0x0, last cmd=0x578901
[ +0.000003] snd_hda_intel 0000:01:00.1: spurious response 0x0:0x0, last cmd=0x777200
[ +0.000003] snd_hda_intel 0000:01:00.1: spurious response 0x0:0x0, last cmd=0x778901
[ +0.000003] snd_hda_intel 0000:01:00.1: spurious response 0x0:0x0, last cmd=0x977200
[ +0.000003] snd_hda_intel 0000:01:00.1: spurious response 0x0:0x0, last cmd=0x978901
[ +0.000003] snd_hda_intel 0000:01:00.1: spurious response 0x0:0x0, last cmd=0xb77200
[ +0.000003] snd_hda_intel 0000:01:00.1: spurious response 0x0:0x0, last cmd=0xb78901
[ +1.262119] radeon_dp_aux_transfer_native: 32 callbacks suppressed
[ +5.352300] radeon_dp_aux_transfer_native: 74 callbacks suppressed
[ +8.373222] radeon_dp_aux_transfer_native: 32 callbacks suppressed
[Aug18 16:33] radeon_dp_aux_transfer_native: 32 callbacks suppressed
[ +0.687724] hrtimer: interrupt took 2998022 ns
[ +1.614347] snd_hdac_bus_update_rirb: 444 callbacks suppressed
[ +0.000010] snd_hda_intel 0000:01:00.1: spurious response 0x0:0x0, last cmd=0x970740
[ +0.000010] snd_hda_intel 0000:01:00.1: spurious response 0x600:0x0, last cmd=0x377200
[ +0.000010] snd_hda_intel 0000:01:00.1: spurious response 0x0:0x0, last cmd=0x578901
[ +0.000028] snd_hda_intel 0000:01:00.1: spurious response 0x0:0x0, last cmd=0xd78901
[ +0.000008] snd_hda_intel 0000:01:00.1: spurious response 0x0:0x0, last cmd=0x270e00
[ +0.000004] snd_hda_intel 0000:01:00.1: spurious response 0x0:0x0, last cmd=0x370100
[ +0.000007] snd_hda_intel 0000:01:00.1: spurious response 0x0:0x0, last cmd=0x470d01
[ +0.000006] snd_hda_intel 0000:01:00.1: spurious response 0x0:0x0, last cmd=0x470e00
[ +0.000007] snd_hda_intel 0000:01:00.1: spurious response 0x0:0x0, last cmd=0x570882
[ +0.000007] snd_hda_intel 0000:01:00.1: spurious response 0x0:0x0, last cmd=0x670e00

==> kernel <==
19:18:41 up 3:06, 1 user, load average: 0.83, 0.66, 0.62
Linux Fedora 5.1.16-300.fc30.x86_64 #1 SMP Wed Jul 3 15:06:51 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Fedora 30 (Workstation Edition)"

==> kubelet <==
-- Logs begin at Mon 2018-12-03 20:05:46 -02, end at Sun 2019-08-18 19:18:41 -03. --
Aug 18 19:18:40 Fedora kubelet[23073]: Flag --authorization-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 18 19:18:40 Fedora kubelet[23073]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 18 19:18:40 Fedora kubelet[23073]: Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 18 19:18:40 Fedora kubelet[23073]: Flag --cluster-dns has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 18 19:18:40 Fedora kubelet[23073]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 18 19:18:40 Fedora kubelet[23073]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 18 19:18:40 Fedora kubelet[23073]: Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 18 19:18:40 Fedora kubelet[23073]: Flag --resolv-conf has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 18 19:18:40 Fedora kubelet[23073]: I0818 19:18:40.553689 23073 server.go:425] Version: v1.15.2
Aug 18 19:18:40 Fedora kubelet[23073]: I0818 19:18:40.554024 23073 plugins.go:103] No cloud provider specified.
Aug 18 19:18:40 Fedora kubelet[23073]: F0818 19:18:40.559978 23073 server.go:273] failed to run Kubelet: failed to get the kubelet's cgroup: cpu and memory cgroup hierarchy not unified. cpu: /, memory: /system.slice/kubelet.service
Aug 18 19:18:40 Fedora systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
Aug 18 19:18:40 Fedora systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 18 19:18:41 Fedora systemd[1]: kubelet.service: Service RestartSec=600ms expired, scheduling restart.
Aug 18 19:18:41 Fedora systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 718.
Aug 18 19:18:41 Fedora systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Aug 18 19:18:41 Fedora systemd[1]: Started kubelet: The Kubernetes Node Agent.
Aug 18 19:18:41 Fedora kubelet[23165]: Flag --authorization-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 18 19:18:41 Fedora kubelet[23165]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 18 19:18:41 Fedora kubelet[23165]: Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 18 19:18:41 Fedora kubelet[23165]: Flag --cluster-dns has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 18 19:18:41 Fedora kubelet[23165]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 18 19:18:41 Fedora kubelet[23165]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 18 19:18:41 Fedora kubelet[23165]: Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 18 19:18:41 Fedora kubelet[23165]: Flag --resolv-conf has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 18 19:18:41 Fedora kubelet[23165]: I0818 19:18:41.298262 23165 server.go:425] Version: v1.15.2
Aug 18 19:18:41 Fedora kubelet[23165]: I0818 19:18:41.298624 23165 plugins.go:103] No cloud provider specified.
Aug 18 19:18:41 Fedora kubelet[23165]: F0818 19:18:41.306308 23165 server.go:273] failed to run Kubelet: failed to get the kubelet's cgroup: cpu and memory cgroup hierarchy not unified. cpu: /, memory: /system.slice/kubelet.service
Aug 18 19:18:41 Fedora systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
Aug 18 19:18:41 Fedora systemd[1]: kubelet.service: Failed with result 'exit-code'.
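The fatal kubelet line above is the root cause: the process sits in different cgroup paths for the cpu ("/") and memory ("/system.slice/kubelet.service") controllers. A sketch of how one might confirm the mismatch with standard systemd/procfs tooling (not part of the original report):

# Show which cgroup each controller places the kubelet in; on this host the
# cpu line should end in "/" while the memory line ends in "/system.slice/kubelet.service".
grep -E ':(cpu|memory)' /proc/$(systemctl show -p MainPID --value kubelet)/cgroup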

The operating system version: Fedora 30 Workstation.

I configured the user permissions as stated by @afbjorklund on issue #5099, but I'm still getting errors.
I opened a new issue since I'm using Fedora and the error seems different from the issue mentioned above.
Note: running minikube with the virtualbox driver worked without problems.
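A hedged workaround sketch (an inference from the cpu/memory mismatch above, not a fix confirmed in this thread): have systemd enable cpu accounting for the kubelet unit so both controllers report the same cgroup path. The drop-in file name below is illustrative:

sudo mkdir -p /etc/systemd/system/kubelet.service.d
# 11-cgroup-accounting.conf is a hypothetical drop-in name
printf '[Service]\nCPUAccounting=true\nMemoryAccounting=true\n' | sudo tee /etc/systemd/system/kubelet.service.d/11-cgroup-accounting.conf
sudo systemctl daemon-reload
sudo systemctl restart kubelet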

Labels: co/kubelet, co/none-driver, kind/bug, lifecycle/rotten, priority/awaiting-more-evidence
