
KIC: crio runtime doesn't start with podman driver, but does start with docker driver #10649

Closed
afbjorklund opened this issue Feb 28, 2021 · 7 comments

afbjorklund commented Feb 28, 2021

docker (20.10.4)

$ minikube start --driver=docker --container-runtime=cri-o
πŸ˜„  minikube v1.18.0-beta.0 on Ubuntu 20.04
✨  Using the docker driver based on user configuration
πŸ‘  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
πŸ”₯  Creating docker container (CPUs=2, Memory=7900MB) ...
🎁  Preparing Kubernetes v1.20.2 on CRI-O 1.20.0 ...
πŸ”—  Configuring CNI (Container Networking Interface) ...
πŸ”Ž  Verifying Kubernetes components...
    β–ͺ Using image gcr.io/k8s-minikube/storage-provisioner:v4
🌟  Enabled addons: storage-provisioner, default-storageclass
πŸ’‘  kubectl not found. If you need it, try: 'minikube kubectl -- get pods -A'
πŸ„  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

cni.go:121] "docker" driver + crio runtime found, recommending kindnet

podman (3.0.1)

$ minikube start --driver=podman --container-runtime=cri-o
πŸ˜„ minikube v1.18.0-beta.0 on Ubuntu 20.04
✨  Using the podman (experimental) driver based on user configuration
πŸ‘  Starting control plane node minikube in cluster minikube
πŸ”₯  Creating podman container (CPUs=2, Memory=7900MB) ...
🎁  Preparing Kubernetes v1.20.2 on CRI-O 1.20.0 ...
πŸ’’  initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:

stderr:
	[WARNING Swap]: running with swap on is not supported. Please disable swap
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.4.0-66-generic\n", err: exit status 1
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

    β–ͺ Generating certificates and keys ...
    β–ͺ Booting up control plane ...
    β–ͺ Generating certificates and keys ...
    β–ͺ Booting up control plane ...

cni.go:121] "podman" driver + crio runtime found, recommending kindnet

The images are loaded successfully, but there are no signs of any containers being started.

Running containers with podman does work, so this is something specific to crio-in-podman.

Note:

  1. The docker driver with the cri-o runtime works OK.
  2. The podman driver with the containerd runtime works OK.
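
One way to confirm the symptom is to list all containers through the CRI from inside the node; a minimal check, assuming crictl is available in the kicbase image:

$ minikube ssh -- sudo crictl ps -a

An empty list here matches the "no signs of any containers being started" observation above.
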
afbjorklund commented Feb 28, 2021

Could be related to the cgroups handling, from the latest kicbase:

Failed to start ContainerManager failed to initialize top level QOS containers: root container [kubepods] doesn't exist

UPDATE: Tried gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4

But it seems to have the same issue, so maybe it is not the entrypoint...
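
The "root container [kubepods] doesn't exist" error usually points at a cgroup-driver mismatch between the kubelet and the runtime, so a first thing to check is what CRI-O is actually configured with inside the node. A sketch, assuming the config is at the default /etc/crio/crio.conf:

$ minikube ssh -- grep -e cgroup_manager -e conmon_cgroup /etc/crio/crio.conf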

@afbjorklund afbjorklund added co/podman-driver podman driver issues co/runtime/crio CRIO related issues kind/bug Categorizes issue or PR as related to a bug. labels Feb 28, 2021
afbjorklund commented

Addition: it works fine with podman 2, so this is yet another regression with podman 3.

Unfortunately podman 2.2.1 is no longer available; only podman 3.0.1 is provided...

afbjorklund commented Feb 28, 2021

There is some fundamental difference in how the kubelet behaves...

The old podman shows:

GET https://control-plane.minikube.internal:8443/healthz?timeout=10s

Response Status: 500 Internal Server Error in 4 milliseconds
Response Status: 500 Internal Server Error in 3 milliseconds
Response Status: 500 Internal Server Error in 2 milliseconds
Response Status: 200 OK in 3 milliseconds

But the new podman instead shows:

GET https://control-plane.minikube.internal:8443/healthz?timeout=10s

Response Status: in 0 milliseconds
Response Status: in 0 milliseconds
Response Status: in 1 milliseconds
Response Status: in 1 milliseconds

And eventually it just times out. round_trippers.go will need debugging.


There are no differences in the logs before the "Waiting for the API server to be healthy" step, so there is something wrong with the networking in this new Podman version.

curl: (7) Failed to connect to control-plane.minikube.internal port 8443: Connection refused

It's trying to contact the host instead of the apiserver. UPDATE: apparently, that was me
(having entries configured in /etc/hosts that were then copied into the container configuration).
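
Since the bogus entries came from /etc/hosts, a quick check of what the node actually resolves (a sketch; the name should map to the container's own IP, 192.168.49.2 here, not to the host):

$ minikube ssh -- grep control-plane.minikube.internal /etc/hosts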

But the log is the same for both:
level=info msg="Found CNI network crio (type=bridge) at /etc/cni/net.d/100-crio-bridge.conf"
level=info msg="Found CNI network 200-loopback.conf (type=loopback) at /etc/cni/net.d/200-loopback.conf"
level=info msg="Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist"
level=info msg="Update default CNI network name to crio"

afbjorklund commented Feb 28, 2021

Getting a different error now:

Error adding network: failed to allocate for range 0: requested IP address 192.168.49.2 is not available in range set 192.168.49.1-192.168.49.254

Apparently there can be leftovers:

/var/lib/cni/networks/minikube/last_reserved_ip.0:192.168.49.2
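
When no container is still using the address, the stale host-local IPAM reservation can be removed by hand. A hedged sketch for this subnet:

$ sudo rm -f /var/lib/cni/networks/minikube/192.168.49.2 \
             /var/lib/cni/networks/minikube/last_reserved_ip.0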

afbjorklund commented Mar 1, 2021

The current workaround is to change from "systemd" to "cgroupfs" in the crio.conf, which makes it start again.

Before:

# Cgroup setting for conmon
conmon_cgroup = "system.slice"

# Cgroup management implementation used for the runtime.
cgroup_manager = "systemd"

After:

# Cgroup setting for conmon
conmon_cgroup = "pod"

# Cgroup management implementation used for the runtime.
cgroup_manager = "cgroupfs"

And then a sudo systemctl restart crio and a kubeadm reset to clear out the old hanging state.
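
Roughly, the full sequence from inside the node (a sketch, assuming the stock crio.conf keys shown above):

$ minikube ssh
$ sudo sed -i 's/cgroup_manager = "systemd"/cgroup_manager = "cgroupfs"/' /etc/crio/crio.conf
$ sudo sed -i 's/conmon_cgroup = "system.slice"/conmon_cgroup = "pod"/' /etc/crio/crio.conf
$ sudo systemctl restart crio
$ sudo kubeadm reset -f

Before the reset, minikube was stuck repeatedly waiting for the apiserver: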

 Run: sudo pgrep -xnf kube-apiserver.*minikube.*
 Run: sudo pgrep -xnf kube-apiserver.*minikube.*
 Run: sudo pgrep -xnf kube-apiserver.*minikube.*

Note that the host podman was running systemd, and Kubernetes was configured with systemd
(minikube automatically copies the cgroup setting from the container runtime to the kubelet config).
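
The copied setting ends up in the kubelet configuration that kubeadm writes out, so the two sides can be compared directly. A sketch, assuming kubeadm's default kubelet config path:

$ minikube ssh -- sudo grep cgroupDriver /var/lib/kubelet/config.yaml
$ minikube ssh -- grep cgroup_manager /etc/crio/crio.conf

The kubelet and the runtime normally need to agree on the driver; a mismatch gives the kubepods QOS-container error seen earlier.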

@afbjorklund afbjorklund added the priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. label Mar 1, 2021
medyagh commented Mar 1, 2021

I am not sure if it is related, but for example in this PR:

https://github.com/kubernetes/minikube/pull/10613

Docker_Linux_crio Jenkins: completed with 14 / 121 failures in 70.00 minute(s).

Details: https://storage.googleapis.com/minikube-builds/logs/10613/e08ac1c/Docker_Linux_crio.html

afbjorklund commented Apr 15, 2021

Seems to be working OK with podman 3.1.0, so everyone is encouraged to upgrade (or downgrade):

https://podman.io/blogs/2021/03/02/podman-support-for-older-distros.html

Podman 3.0 will be the last major build on CentOS 7, Debian 10 and Ubuntu 18.04.

That is, use either podman 2.2.1 or podman 3.1.0, but not the 2.0 (2.1) or 3.0 versions.
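
Checking the installed version before starting minikube is straightforward:

$ podman --version
podman version 3.1.0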
