driver=podman, container-engine=docker, restart stopped minikube OCI runtime error #7996
Possibly related to containers/podman#4481 and migration issues from cgroups v2. EDIT: Nope, it seems only the error message was the same. Easy to reproduce (just stop and start).
So it might be something related after all, though more like systemd vs cgroupfs than v1 vs v2? It seems to be more of an accident, but anyway. The usual start was:
This start now is:
So this failure is somehow more acceptable to it.
This bug seems reproducible with
The code says:

```go
// to run nested container from privileged container in podman
// https://bugzilla.redhat.com/show_bug.cgi?id=1687713
if ociBin == Podman {
	args = append(args, "--cgroup-manager", "cgroupfs")
}
```

So if we are going to use that workaround, we need it for "podman start" as well, because on Fedora the default is systemd. But it seems it doesn't work for DIND:
On a side note, the error reporting code is broken. There is an
The bug is with the driver runtime, so it is independent of the inner runtime.
Hello @afbjorklund I've tried your tree, and I've encountered the same issue:
That is weird, where did the "--cgroup-manager cgroupfs" go? As in:
It is not in your output: 🤦

`StartHost failed, but will try again: driver start: start: sudo podman start minikube: exit status 125`
I've added
In any case I've tried manually restarting the minikube container, but with no better results (though it fails with a more telling error):
I think you also want to increase verbosity:
But the error looked different this time?
I know nothing but what the console tells me :(
And before you ask:
Looks happy enough now. This is just an rpm packaging issue:
Unfortunately it did nothing :(
New fact: it now seems it's not failing "hard", but there is still something going on with the systemd cgroup. This is the minikube container's log (from when I try to wake it up):
It seems that, even if the cgroup-manager is set to cgroupfs, minikube's image is still trying to use systemd?
Another fact: the previous logs are with
See #8033 (comment) and #8033 (comment) for why systemd isn't starting when running with podman.
Minikube is able to start once. When stopped, podman won't be able to turn it on again due to an OCI runtime error. This seems to happen only with minikube (i.e. no problems stopping and starting other services).

Steps to reproduce the issue:

1. `minikube start --driver=podman`
2. `minikube stop`
3. `minikube start`
Full output of failed command:

`minikube start` with the `--alsologtostderr` option:

`sudo podman start minikube`:

Full output of `minikube start` command used, if not already included:

Optional: Full output of `minikube logs` command:

Thank you for your awesome work :)