Set the --force-systemd true or false automatically (by detecting the cgroups) #8348
How will this affect those Linux distributions that do not support/require systemd? I, for one, changed distros just to get rid of systemd.

This seems to work fine as long as I don't try to use Docker to run anything else after a `minikube start`:

```
$ docker run --rm -it alpine:3.12 /bin/sh
docker: Error response from daemon: cgroups: cannot find cgroup mount destination: unknown.
```

Before running `minikube start`, that `docker run` works fine. Actually, I cannot even stop and start minikube anymore. For the record, I have a Devuan system, without systemd.
Hey @paddy-hack, that's an interesting setup and would be important to explore before we set `--force-systemd` by default. Just to clarify, this sets docker within the minikube VM to use systemd as cgroup manager (we already have systemd running in minikube). Does running `minikube start --force-systemd` work on your machine? And could you provide the output of `docker info`?
@paddy-hack I agree with @priyawadhwa, this would be for the systemd inside minikube, but that is still a good point: we need to ensure minikube is capable of running that cgroup setup as well. Is there a way you can try it and see? If that doesn't work for you, we can handle it on the minikube side.
Replying to #6954, I had already gone through a

```
minikube start
minikube status
minikube stop
minikube start
```

cycle, but after that I got:

```
paddy-hack@boson:~$ minikube start --force-systemd
😄 minikube v1.11.0 on Debian 10.0
✨ Using the docker driver based on existing profile
👍 Starting control plane node minikube in cluster minikube
🔄 Restarting existing docker container for "minikube" ...
🤦 StartHost failed, but will try again: driver start: start: docker start minikube: exit status 1
stdout:
stderr:
Error response from daemon: OCI runtime create failed: container with id exists: 53ac2f88bff8b8ea2db5cd4e9a3133ea9637cc8bd2e59c550008fba242ed74a7: unknown
Error: failed to start containers: minikube
🔄 Restarting existing docker container for "minikube" ...
😿 Failed to start docker container. "minikube start" may fix it: driver start: start: docker start minikube: exit status 1
stdout:
stderr:
Error response from daemon: OCI runtime create failed: container with id exists: 53ac2f88bff8b8ea2db5cd4e9a3133ea9637cc8bd2e59c550008fba242ed74a7: unknown
Error: failed to start containers: minikube
💣 error provisioning host: Failed to start host: driver start: start: docker start minikube: exit status 1
stdout:
stderr:
Error response from daemon: OCI runtime create failed: container with id exists: 53ac2f88bff8b8ea2db5cd4e9a3133ea9637cc8bd2e59c550008fba242ed74a7: unknown
Error: failed to start containers: minikube
😿 minikube is exiting due to an error. If the above message is not useful, open an issue:
👉 https://github.com/kubernetes/minikube/issues/new/choose
```

Restarting the docker daemon didn't make a difference. Here's the `docker info` output you asked for:

```
paddy-hack@boson:~$ docker info
Client:
Debug Mode: false
Server:
Containers: 1
Running: 0
Paused: 0
Stopped: 1
Images: 39
Server Version: 19.03.11
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 7ad184331fa3e55e52b890ea95e65ba581ae3429
runc version: dc9208a3303feef5b3839f4323d9beb36df0a9dd
init version: fec3683
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 4.19.0-9-amd64
Operating System: Devuan GNU/Linux 3 (beowulf)
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 3.608GiB
Name: boson
ID: FFPZ:6IG2:WOZN:WC5L:ZZWQ:4VUO:BNKJ:UX6G:SYNW:ASKJ:GBCJ:VF5K
Docker Root Dir: /var/lib/docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
WARNING: No swap limit support
```
Rebooted and tried again:

```
paddy-hack@boson:~$ minikube start --force-systemd
😄 minikube v1.11.0 on Debian 10.0
✨ Using the docker driver based on existing profile
👍 Starting control plane node minikube in cluster minikube
🔄 Restarting existing docker container for "minikube" ...
🐳 Preparing Kubernetes v1.18.3 on Docker 19.03.2 ...
▪ kubeadm.pod-network-cidr=10.244.0.0/16
🔎 Verifying Kubernetes components...
🌟 Enabled addons: default-storageclass, storage-provisioner
🏄 Done! kubectl is now configured to use "minikube"
💡 For best results, install kubectl: https://kubernetes.io/docs/tasks/tools/install-kubectl/
paddy-hack@boson:~$ minikube status
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
paddy-hack@boson:~$ minikube stop
✋ Stopping "minikube" in docker ...
🛑 Powering off "minikube" via SSH ...
🛑 Node "minikube" stopped.
paddy-hack@boson:~$ minikube start --force-systemd
😄 minikube v1.11.0 on Debian 10.0
✨ Using the docker driver based on existing profile
👍 Starting control plane node minikube in cluster minikube
🔄 Restarting existing docker container for "minikube" ...
🤦 StartHost failed, but will try again: driver start: start: docker start minikube: exit status 1
stdout:
stderr:
Error response from daemon: cgroups: cannot find cgroup mount destination: unknown
Error: failed to start containers: minikube
🔄 Restarting existing docker container for "minikube" ...
😿 Failed to start docker container. "minikube start" may fix it: driver start: start: docker start minikube: exit status 1
stdout:
stderr:
Error response from daemon: OCI runtime create failed: container with id exists: 53ac2f88bff8b8ea2db5cd4e9a3133ea9637cc8bd2e59c550008fba242ed74a7: unknown
Error: failed to start containers: minikube
💣 error provisioning host: Failed to start host: driver start: start: docker start minikube: exit status 1
stdout:
stderr:
Error response from daemon: OCI runtime create failed: container with id exists: 53ac2f88bff8b8ea2db5cd4e9a3133ea9637cc8bd2e59c550008fba242ed74a7: unknown
Error: failed to start containers: minikube
😿 minikube is exiting due to an error. If the above message is not useful, open an issue:
👉 https://github.com/kubernetes/minikube/issues/new/choose
```
But getting back to this reliance on systemd: how will distributions that don't ship it be supported?
Hey @paddy-hack -- just to clarify, minikube does use systemd, but only within the running VM or container (you don't need systemd on your machine). The `--force-systemd` flag only changes the cgroup manager used inside minikube.

In terms of the error you're getting from docker, it's a known docker issue on Linux (see moby/moby#38822), with a temporary solution mentioned in a comment on that issue.
The key thing here is to use the same cgroup driver everywhere. The minikube VM is using systemd, so then it makes sense to have Docker use systemd as its cgroup driver as well.
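For illustration, here is the usual way to point a Docker daemon at the systemd cgroup driver via `/etc/docker/daemon.json` (a sketch of the standard Docker configuration, not a command quoted from this thread; inside minikube this is effectively what `--force-systemd` arranges):

```
# Standard way to switch dockerd to the systemd cgroup driver.
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
# Restart the daemon to apply the change (assumes a systemd host).
sudo systemctl restart docker
```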
Also, currently systemd-in-systemd is broken in podman, so there it has no choice but to run cgroupfs...
I tested with Devuan Beowulf. Can confirm that trying to start minikube with the docker driver messes up docker on the host (like above). Probably something with the entrypoint script in the kicbase image.

Beyond the extra "docker" layer, we also have some cgroups v2 compat mounts created by that entrypoint; one way to inspect them is shown below.

Anyway, since kicbase uses systemd (through KIND), it seems it fails on cgroupfs. As the article above implies, mixing and matching different init systems is asking for trouble. So these systems (Devuan) will need to use a VM driver, or maybe run Docker in a dedicated VM for it.
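A generic way to look at the cgroup hierarchies from the host (an inspection sketch, not the exact command from the original comment, which was lost):

```
# List all cgroup v1/v2 mounts; after a failed start, look for a stray
# "name=systemd" hierarchy that the host's docker did not create itself.
findmnt -t cgroup,cgroup2
# Equivalent low-level view:
grep cgroup /proc/self/mountinfo
```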
If anyone wants to look into this further, the message is from containerd: https://github.com/containerd/cgroups/blob/master/utils.go#L340

It doesn't seem so happy about the new "name=systemd" cgroup from the kicbase entrypoint. Similar to moby/moby#38822. This also means that this is the workaround, to get Docker back (without reboot):
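The exact commands were not preserved here; this is the commonly cited workaround from moby/moby#38822:

```
# Recreate the "name=systemd" cgroup mount that containerd expects,
# so containers can start again without rebooting the host.
sudo mkdir -p /sys/fs/cgroup/systemd
sudo mount -t cgroup -o none,name=systemd cgroup /sys/fs/cgroup/systemd
```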
If this is acceptable, then this is the way to run minikube with Docker-in-Docker.
Guess I'll be using the kvm2 driver, then.
We should add a solution message when someone tries to use the docker driver without the systemd cgroup. The user doesn't actually need to run systemd as their PID 1, nor any daemons or units, though.
That should still work; note that you need the `kvm2` driver (not the old `kvm` one).
Thanks for the heads up on `kvm2`.
At one point we considered renaming the driver from docker-machine-kvm to docker-machine-libvirt-driver, but at that point it was probably "too late" and the historical name won. It has since been forked as minikube's `kvm2` driver.

The qemu (with kvm) driver has some issues with creating the networks for kubernetes, so it works better in a simpler docker context. That is why we are using the (system) libvirt wrapper instead: https://libvirt.org/drvqemu.html
It should warn about it. (#5617)
We need to figure out the best default for macOS users by finding out how Docker implements their VM. And for GitHub Actions, minikube should autodetect that it is running on GitHub Actions (there is an environment variable).
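For reference, GitHub Actions sets `GITHUB_ACTIONS=true` in the runner environment, so the detection could look something like this sketch:

```
# Sketch: detect a GitHub Actions runner via its documented env variable.
if [ "${GITHUB_ACTIONS:-}" = "true" ]; then
  echo "Running under GitHub Actions; choose a suitable default driver"
fi
```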
@afbjorklund suggests enabling Kubernetes on Docker Desktop and seeing what they are using.
Maybe we can exec into the Docker machine created by Docker Desktop and see what cgroup driver it uses.
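There is also a way to ask any Docker daemon, including Docker Desktop's, for its cgroup driver without exec'ing into the VM (plain docker CLI, nothing minikube-specific):

```
# Prints "cgroupfs" or "systemd" depending on the daemon's configuration.
docker info --format '{{.CgroupDriver}}'
```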
Issues go stale after 90d of inactivity. If this issue is safe to close now, please do so with `/close`. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
@govargo would you be interested in looking at this?
It may take some time because I'm not well versed in this area yet.
Hi, I want to ask: when I use...
Look into whether we should be setting `--force-systemd=true` by default, and whether this results in any performance improvement. The documentation says we need to use the same cgroup driver as your system: if your system uses systemd, you should use systemd.
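An illustrative sketch of what such host-side autodetection could look like (not minikube's actual implementation):

```
# Sketch: detect whether the host runs systemd, and whether the unified
# cgroup v2 hierarchy is in use, to choose a matching default.
if [ "$(ps -p 1 -o comm=)" = "systemd" ]; then
  echo "host init is systemd -> default --force-systemd=true"
elif [ "$(stat -fc %T /sys/fs/cgroup)" = "cgroup2fs" ]; then
  echo "cgroup v2 unified hierarchy -> systemd driver usually expected"
else
  echo "no systemd -> keep cgroupfs (--force-systemd=false)"
fi
```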