
KinD cannot create cluster in rootless Podman #3234

Open
hadrabap opened this issue May 17, 2023 · 9 comments
Labels
area/provider/podman Issues or PRs related to podman kind/bug Categorizes issue or PR as related to a bug.

@hadrabap

What happened:

Hello friends,

As I have had great success with KinD and Docker Desktop on an Intel Mac, it was my first choice on my Linux box as well. Unfortunately, I'm unable to create a cluster there.

I've been poking around and found an interesting issue with symptoms similar to what I'm experiencing—#3061.

In short—details below—the kind create cluster --config config.yaml -v 9999 --retain command fails with

I0517 08:58:20.637747     168 round_trippers.go:553] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds

I found out that the kube-apiserver is not running (hence port 6443 is not listening).
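(The retained node container can be inspected along these lines; this is only a sketch, and the crictl endpoint is the same one the kubeadm output below suggests:)

# list all CRI containers inside the retained node; kube-apiserver should appear here if it ever started
podman exec -it kind-control-plane crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a
# tail the in-node kubelet journal for mount/iptables errors
podman exec -it kind-control-plane journalctl -u kubelet --no-pager | tail -n 50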

There are two issues which caught my attention:

  1. iptables-related failures—the log suggests a kernel upgrade or manually loading the ip_* and ip6_* modules.
  2. The in-container containerd is unable to perform mounts (resolv.conf into the sandbox rootfs), very similar to issue #3061 (kind create cluster fails on macOS + Docker Desktop).

What you expected to happen:

The cluster spins up and is ready to use.

How to reproduce it (as minimally and precisely as possible):

[opc@sws ~]$ cat > config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: KubeletConfiguration
    cgroupDriver: systemd
[opc@sws ~]$ kind create cluster --config config.yaml -v 9999 --retain
enabling experimental podman provider
Creating cluster "kind" ...
DEBUG: podman/images.go:58] Image: docker.io/kindest/node@sha256:61b92f38dff6ccc29969e7aa154d34e38b89443af1a2c14e6cfbd2df6419c66f present locally
 ✓ Ensuring node image (kindest/node:v1.26.3) 🖼 
 ✓ Preparing nodes 📦  
DEBUG: config/config.go:96] Using the following kubeadm config for node kind-control-plane:
apiServer:
  certSANs:
  - localhost
  - 127.0.0.1
  extraArgs:
    feature-gates: KubeletInUserNamespace=true
    runtime-config: ""
apiVersion: kubeadm.k8s.io/v1beta3
clusterName: kind
controlPlaneEndpoint: kind-control-plane:6443
controllerManager:
  extraArgs:
    enable-hostpath-provisioner: "true"
    feature-gates: KubeletInUserNamespace=true
kind: ClusterConfiguration
kubernetesVersion: v1.26.3
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/16
scheduler:
  extraArgs:
    feature-gates: KubeletInUserNamespace=true
---
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- token: abcdef.0123456789abcdef
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.89.0.2
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  kubeletExtraArgs:
    node-ip: 10.89.0.2
    node-labels: ""
    provider-id: kind://podman/kind/kind-control-plane
---
apiVersion: kubeadm.k8s.io/v1beta3
controlPlane:
  localAPIEndpoint:
    advertiseAddress: 10.89.0.2
    bindPort: 6443
discovery:
  bootstrapToken:
    apiServerEndpoint: kind-control-plane:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  kubeletExtraArgs:
    node-ip: 10.89.0.2
    node-labels: ""
    provider-id: kind://podman/kind/kind-control-plane
---
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
cgroupRoot: /kubelet
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
  nodefs.inodesFree: 0%
failSwapOn: false
featureGates:
  KubeletInUserNamespace: true
imageGCHighThresholdPercent: 100
kind: KubeletConfiguration
localStorageCapacityIsolation: false
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
conntrack:
  maxPerCore: 0
  tcpCloseWaitTimeout: 0s
  tcpEstablishedTimeout: 0s
featureGates:
  KubeletInUserNamespace: true
iptables:
  minSyncPeriod: 1s
kind: KubeProxyConfiguration
mode: iptables
 ✓ Writing configuration 📜 
DEBUG: kubeadminit/init.go:82] I0517 08:54:50.815913     168 initconfiguration.go:254] loading configuration from "/kind/kubeadm.conf"
W0517 08:54:50.817418     168 initconfiguration.go:331] [config] WARNING: Ignored YAML document with GroupVersionKind kubeadm.k8s.io/v1beta3, Kind=JoinConfiguration
[init] Using Kubernetes version: v1.26.3
[certs] Using certificateDir folder "/etc/kubernetes/pki"
I0517 08:54:50.826283     168 certs.go:112] creating a new certificate authority for ca
[certs] Generating "ca" certificate and key
I0517 08:54:50.887858     168 certs.go:519] validating certificate period for ca certificate
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kind-control-plane kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 10.89.0.2 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
I0517 08:54:51.142803     168 certs.go:112] creating a new certificate authority for front-proxy-ca
[certs] Generating "front-proxy-ca" certificate and key
I0517 08:54:51.235186     168 certs.go:519] validating certificate period for front-proxy-ca certificate
[certs] Generating "front-proxy-client" certificate and key
I0517 08:54:51.322791     168 certs.go:112] creating a new certificate authority for etcd-ca
[certs] Generating "etcd/ca" certificate and key
I0517 08:54:51.500051     168 certs.go:519] validating certificate period for etcd/ca certificate
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kind-control-plane localhost] and IPs [10.89.0.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kind-control-plane localhost] and IPs [10.89.0.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
I0517 08:54:52.069688     168 certs.go:78] creating new public/private key files for signing service account users
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0517 08:54:52.154642     168 kubeconfig.go:103] creating kubeconfig file for admin.conf
[kubeconfig] Writing "admin.conf" kubeconfig file
I0517 08:54:52.280659     168 kubeconfig.go:103] creating kubeconfig file for kubelet.conf
[kubeconfig] Writing "kubelet.conf" kubeconfig file
I0517 08:54:52.419450     168 kubeconfig.go:103] creating kubeconfig file for controller-manager.conf
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0517 08:54:52.560682     168 kubeconfig.go:103] creating kubeconfig file for scheduler.conf
[kubeconfig] Writing "scheduler.conf" kubeconfig file
I0517 08:54:52.744359     168 kubelet.go:67] Stopping the kubelet
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I0517 08:54:53.129572     168 manifests.go:99] [control-plane] getting StaticPodSpecs
I0517 08:54:53.130248     168 certs.go:519] validating certificate period for CA certificate
I0517 08:54:53.130305     168 manifests.go:125] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
I0517 08:54:53.130311     168 manifests.go:125] [control-plane] adding volume "etc-ca-certificates" for component "kube-apiserver"
I0517 08:54:53.130315     168 manifests.go:125] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
I0517 08:54:53.130318     168 manifests.go:125] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-apiserver"
I0517 08:54:53.130322     168 manifests.go:125] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-apiserver"
I0517 08:54:53.132226     168 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I0517 08:54:53.132237     168 manifests.go:99] [control-plane] getting StaticPodSpecs
I0517 08:54:53.132423     168 manifests.go:125] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
I0517 08:54:53.132429     168 manifests.go:125] [control-plane] adding volume "etc-ca-certificates" for component "kube-controller-manager"
I0517 08:54:53.132433     168 manifests.go:125] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
I0517 08:54:53.132436     168 manifests.go:125] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
I0517 08:54:53.132439     168 manifests.go:125] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
I0517 08:54:53.132442     168 manifests.go:125] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-controller-manager"
I0517 08:54:53.132445     168 manifests.go:125] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
I0517 08:54:53.133089     168 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
I0517 08:54:53.133097     168 manifests.go:99] [control-plane] getting StaticPodSpecs
I0517 08:54:53.133248     168 manifests.go:125] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
I0517 08:54:53.133655     168 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0517 08:54:53.134286     168 local.go:65] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
I0517 08:54:53.134293     168 waitcontrolplane.go:83] [wait-control-plane] Waiting for the API server to be healthy
I0517 08:54:53.134738     168 loader.go:373] Config loaded from file:  /etc/kubernetes/admin.conf
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0517 08:54:53.136850     168 round_trippers.go:553] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0517 08:54:53.637915     168 round_trippers.go:553] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0517 08:54:54.138308     168 round_trippers.go:553] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0517 08:54:54.638213     168 round_trippers.go:553] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
………
I0517 08:58:52.138148     168 round_trippers.go:553] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0517 08:58:52.638384     168 round_trippers.go:553] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0517 08:58:53.137429     168 round_trippers.go:553] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0517 08:58:53.137703     168 round_trippers.go:553] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds

Unfortunately, an error has occurred:
	timed out waiting for the condition

This error is likely caused by:
	- The kubelet is not running
	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	- 'systemctl status kubelet'
	- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
couldn't initialize a Kubernetes cluster
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runWaitControlPlanePhase
	cmd/kubeadm/app/cmd/phases/init/waitcontrolplane.go:108
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
	cmd/kubeadm/app/cmd/phases/workflow/runner.go:259
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
	cmd/kubeadm/app/cmd/phases/workflow/runner.go:446
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
	cmd/kubeadm/app/cmd/phases/workflow/runner.go:232
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
	cmd/kubeadm/app/cmd/init.go:112
github.com/spf13/cobra.(*Command).execute
	vendor/github.com/spf13/cobra/command.go:916
github.com/spf13/cobra.(*Command).ExecuteC
	vendor/github.com/spf13/cobra/command.go:1040
github.com/spf13/cobra.(*Command).Execute
	vendor/github.com/spf13/cobra/command.go:968
k8s.io/kubernetes/cmd/kubeadm/app.Run
	cmd/kubeadm/app/kubeadm.go:50
main.main
	cmd/kubeadm/kubeadm.go:25
runtime.main
	/usr/local/go/src/runtime/proc.go:250
runtime.goexit
	/usr/local/go/src/runtime/asm_amd64.s:1594
error execution phase wait-control-plane
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
	cmd/kubeadm/app/cmd/phases/workflow/runner.go:260
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
	cmd/kubeadm/app/cmd/phases/workflow/runner.go:446
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
	cmd/kubeadm/app/cmd/phases/workflow/runner.go:232
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
	cmd/kubeadm/app/cmd/init.go:112
github.com/spf13/cobra.(*Command).execute
	vendor/github.com/spf13/cobra/command.go:916
github.com/spf13/cobra.(*Command).ExecuteC
	vendor/github.com/spf13/cobra/command.go:1040
github.com/spf13/cobra.(*Command).Execute
	vendor/github.com/spf13/cobra/command.go:968
k8s.io/kubernetes/cmd/kubeadm/app.Run
	cmd/kubeadm/app/kubeadm.go:50
main.main
	cmd/kubeadm/kubeadm.go:25
runtime.main
	/usr/local/go/src/runtime/proc.go:250
runtime.goexit
	/usr/local/go/src/runtime/asm_amd64.s:1594
 ✗ Starting control-plane 🕹️
ERROR: failed to create cluster: failed to init node with kubeadm: command "podman exec --privileged kind-control-plane kubeadm init --skip-phases=preflight --config=/kind/kubeadm.conf --skip-token-print --v=6" failed with error: exit status 1
Command Output: I0517 08:54:50.815913     168 initconfiguration.go:254] loading configuration from "/kind/kubeadm.conf"
W0517 08:54:50.817418     168 initconfiguration.go:331] [config] WARNING: Ignored YAML document with GroupVersionKind kubeadm.k8s.io/v1beta3, Kind=JoinConfiguration
[init] Using Kubernetes version: v1.26.3
[certs] Using certificateDir folder "/etc/kubernetes/pki"
I0517 08:54:50.826283     168 certs.go:112] creating a new certificate authority for ca
[certs] Generating "ca" certificate and key
I0517 08:54:50.887858     168 certs.go:519] validating certificate period for ca certificate
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kind-control-plane kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 10.89.0.2 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
I0517 08:54:51.142803     168 certs.go:112] creating a new certificate authority for front-proxy-ca
[certs] Generating "front-proxy-ca" certificate and key
I0517 08:54:51.235186     168 certs.go:519] validating certificate period for front-proxy-ca certificate
[certs] Generating "front-proxy-client" certificate and key
I0517 08:54:51.322791     168 certs.go:112] creating a new certificate authority for etcd-ca
[certs] Generating "etcd/ca" certificate and key
I0517 08:54:51.500051     168 certs.go:519] validating certificate period for etcd/ca certificate
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kind-control-plane localhost] and IPs [10.89.0.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kind-control-plane localhost] and IPs [10.89.0.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
I0517 08:54:52.069688     168 certs.go:78] creating new public/private key files for signing service account users
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0517 08:54:52.154642     168 kubeconfig.go:103] creating kubeconfig file for admin.conf
[kubeconfig] Writing "admin.conf" kubeconfig file
I0517 08:54:52.280659     168 kubeconfig.go:103] creating kubeconfig file for kubelet.conf
[kubeconfig] Writing "kubelet.conf" kubeconfig file
I0517 08:54:52.419450     168 kubeconfig.go:103] creating kubeconfig file for controller-manager.conf
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0517 08:54:52.560682     168 kubeconfig.go:103] creating kubeconfig file for scheduler.conf
[kubeconfig] Writing "scheduler.conf" kubeconfig file
I0517 08:54:52.744359     168 kubelet.go:67] Stopping the kubelet
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I0517 08:54:53.129572     168 manifests.go:99] [control-plane] getting StaticPodSpecs
I0517 08:54:53.130248     168 certs.go:519] validating certificate period for CA certificate
I0517 08:54:53.130305     168 manifests.go:125] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
I0517 08:54:53.130311     168 manifests.go:125] [control-plane] adding volume "etc-ca-certificates" for component "kube-apiserver"
I0517 08:54:53.130315     168 manifests.go:125] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
I0517 08:54:53.130318     168 manifests.go:125] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-apiserver"
I0517 08:54:53.130322     168 manifests.go:125] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-apiserver"
I0517 08:54:53.132226     168 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I0517 08:54:53.132237     168 manifests.go:99] [control-plane] getting StaticPodSpecs
I0517 08:54:53.132423     168 manifests.go:125] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
I0517 08:54:53.132429     168 manifests.go:125] [control-plane] adding volume "etc-ca-certificates" for component "kube-controller-manager"
I0517 08:54:53.132433     168 manifests.go:125] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
I0517 08:54:53.132436     168 manifests.go:125] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
I0517 08:54:53.132439     168 manifests.go:125] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
I0517 08:54:53.132442     168 manifests.go:125] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-controller-manager"
I0517 08:54:53.132445     168 manifests.go:125] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
I0517 08:54:53.133089     168 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
I0517 08:54:53.133097     168 manifests.go:99] [control-plane] getting StaticPodSpecs
I0517 08:54:53.133248     168 manifests.go:125] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
I0517 08:54:53.133655     168 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0517 08:54:53.134286     168 local.go:65] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
I0517 08:54:53.134293     168 waitcontrolplane.go:83] [wait-control-plane] Waiting for the API server to be healthy
I0517 08:54:53.134738     168 loader.go:373] Config loaded from file:  /etc/kubernetes/admin.conf
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0517 08:54:53.136850     168 round_trippers.go:553] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0517 08:54:53.637915     168 round_trippers.go:553] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0517 08:54:54.138308     168 round_trippers.go:553] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0517 08:54:54.638213     168 round_trippers.go:553] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
………
I0517 08:58:52.138148     168 round_trippers.go:553] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0517 08:58:52.638384     168 round_trippers.go:553] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0517 08:58:53.137429     168 round_trippers.go:553] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0517 08:58:53.137703     168 round_trippers.go:553] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds

Unfortunately, an error has occurred:
	timed out waiting for the condition

This error is likely caused by:
	- The kubelet is not running
	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	- 'systemctl status kubelet'
	- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
couldn't initialize a Kubernetes cluster
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runWaitControlPlanePhase
	cmd/kubeadm/app/cmd/phases/init/waitcontrolplane.go:108
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
	cmd/kubeadm/app/cmd/phases/workflow/runner.go:259
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
	cmd/kubeadm/app/cmd/phases/workflow/runner.go:446
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
	cmd/kubeadm/app/cmd/phases/workflow/runner.go:232
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
	cmd/kubeadm/app/cmd/init.go:112
github.com/spf13/cobra.(*Command).execute
	vendor/github.com/spf13/cobra/command.go:916
github.com/spf13/cobra.(*Command).ExecuteC
	vendor/github.com/spf13/cobra/command.go:1040
github.com/spf13/cobra.(*Command).Execute
	vendor/github.com/spf13/cobra/command.go:968
k8s.io/kubernetes/cmd/kubeadm/app.Run
	cmd/kubeadm/app/kubeadm.go:50
main.main
	cmd/kubeadm/kubeadm.go:25
runtime.main
	/usr/local/go/src/runtime/proc.go:250
runtime.goexit
	/usr/local/go/src/runtime/asm_amd64.s:1594
error execution phase wait-control-plane
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
	cmd/kubeadm/app/cmd/phases/workflow/runner.go:260
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
	cmd/kubeadm/app/cmd/phases/workflow/runner.go:446
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
	cmd/kubeadm/app/cmd/phases/workflow/runner.go:232
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
	cmd/kubeadm/app/cmd/init.go:112
github.com/spf13/cobra.(*Command).execute
	vendor/github.com/spf13/cobra/command.go:916
github.com/spf13/cobra.(*Command).ExecuteC
	vendor/github.com/spf13/cobra/command.go:1040
github.com/spf13/cobra.(*Command).Execute
	vendor/github.com/spf13/cobra/command.go:968
k8s.io/kubernetes/cmd/kubeadm/app.Run
	cmd/kubeadm/app/kubeadm.go:50
main.main
	cmd/kubeadm/kubeadm.go:25
runtime.main
	/usr/local/go/src/runtime/proc.go:250
runtime.goexit
	/usr/local/go/src/runtime/asm_amd64.s:1594
Stack Trace: 
sigs.k8s.io/kind/pkg/errors.WithStack
	sigs.k8s.io/kind/pkg/errors/errors.go:59
sigs.k8s.io/kind/pkg/exec.(*LocalCmd).Run
	sigs.k8s.io/kind/pkg/exec/local.go:124
sigs.k8s.io/kind/pkg/cluster/internal/providers/podman.(*nodeCmd).Run
	sigs.k8s.io/kind/pkg/cluster/internal/providers/podman/node.go:146
sigs.k8s.io/kind/pkg/exec.CombinedOutputLines
	sigs.k8s.io/kind/pkg/exec/helpers.go:67
sigs.k8s.io/kind/pkg/cluster/internal/create/actions/kubeadminit.(*action).Execute
	sigs.k8s.io/kind/pkg/cluster/internal/create/actions/kubeadminit/init.go:81
sigs.k8s.io/kind/pkg/cluster/internal/create.Cluster
	sigs.k8s.io/kind/pkg/cluster/internal/create/create.go:135
sigs.k8s.io/kind/pkg/cluster.(*Provider).Create
	sigs.k8s.io/kind/pkg/cluster/provider.go:182
sigs.k8s.io/kind/pkg/cmd/kind/create/cluster.runE
	sigs.k8s.io/kind/pkg/cmd/kind/create/cluster/createcluster.go:111
sigs.k8s.io/kind/pkg/cmd/kind/create/cluster.NewCommand.func1
	sigs.k8s.io/kind/pkg/cmd/kind/create/cluster/createcluster.go:55
github.com/spf13/cobra.(*Command).execute
	github.com/spf13/cobra@v1.4.0/command.go:856
github.com/spf13/cobra.(*Command).ExecuteC
	github.com/spf13/cobra@v1.4.0/command.go:974
github.com/spf13/cobra.(*Command).Execute
	github.com/spf13/cobra@v1.4.0/command.go:902
sigs.k8s.io/kind/cmd/kind/app.Run
	sigs.k8s.io/kind/cmd/kind/app/main.go:53
sigs.k8s.io/kind/cmd/kind/app.Main
	sigs.k8s.io/kind/cmd/kind/app/main.go:35
main.main
	sigs.k8s.io/kind/main.go:25
runtime.main
	runtime/proc.go:250
runtime.goexit
	runtime/asm_amd64.s:1598

Anything else we need to know?:

  • The results are the same whether the config.yaml is used or not.
  • There are no SELinux-related warnings, denials, etc.

The iptables excerpt:

May 17 08:54:53 kind-control-plane kubelet[223]: I0517 08:54:53.785062     223 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 17 08:54:53 kind-control-plane kubelet[223]: I0517 08:54:53.785073     223 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 17 08:54:53 kind-control-plane kubelet[223]: I0517 08:54:53.785085     223 state_mem.go:36] "Initialized new in-memory state store"
May 17 08:54:53 kind-control-plane kubelet[223]: I0517 08:54:53.786683     223 policy_none.go:49] "None policy: Start"
May 17 08:54:53 kind-control-plane kubelet[223]: I0517 08:54:53.786988     223 memory_manager.go:169] "Starting memorymanager" policy="None"
May 17 08:54:53 kind-control-plane kubelet[223]: I0517 08:54:53.787003     223 state_mem.go:35] "Initializing new in-memory state store"
May 17 08:54:53 kind-control-plane kubelet[223]: E0517 08:54:53.787237     223 kubelet_network_linux.go:83] "Failed to ensure that iptables hint chain exists" err=<
May 17 08:54:53 kind-control-plane kubelet[223]:         error creating chain "KUBE-IPTABLES-HINT": exit status 3: modprobe: ERROR: could not insert 'ip_tables': Operation not permitted
May 17 08:54:53 kind-control-plane kubelet[223]:         iptables v1.8.7 (legacy): can't initialize iptables table `mangle': Table does not exist (do you need to insmod?)
May 17 08:54:53 kind-control-plane kubelet[223]:         Perhaps iptables or your kernel needs to be upgraded.
May 17 08:54:53 kind-control-plane kubelet[223]:  >
May 17 08:54:53 kind-control-plane kubelet[223]: I0517 08:54:53.787254     223 kubelet_network_linux.go:71] "Failed to initialize iptables rules; some functionality may be missing." protocol=IPv4
May 17 08:54:53 kind-control-plane kubelet[223]: E0517 08:54:53.792453     223 kubelet_network_linux.go:83] "Failed to ensure that iptables hint chain exists" err=<
May 17 08:54:53 kind-control-plane kubelet[223]:         error creating chain "KUBE-IPTABLES-HINT": exit status 3: modprobe: ERROR: could not insert 'ip6_tables': Operation not permitted
May 17 08:54:53 kind-control-plane kubelet[223]:         ip6tables v1.8.7 (legacy): can't initialize ip6tables table `mangle': Table does not exist (do you need to insmod?)
May 17 08:54:53 kind-control-plane kubelet[223]:         Perhaps ip6tables or your kernel needs to be upgraded.
May 17 08:54:53 kind-control-plane kubelet[223]:  >
May 17 08:54:53 kind-control-plane kubelet[223]: I0517 08:54:53.792463     223 kubelet_network_linux.go:71] "Failed to initialize iptables rules; some functionality may be missing." protocol=IPv6

The permissions-related excerpt:

May 17 08:54:55 kind-control-plane kubelet[273]: E0517 08:54:55.826411     273 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-kind-control-plane_kube-system(f1916c7b1dd2b62f3784dbc1ff414e84)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-kind-control-plane_kube-system(f1916c7b1dd2b62f3784dbc1ff414e84)\\\": rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting \\\"/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/0538e73fb1f3bf28b7fe1a9064be9ff9cb444b95a28113ceee2dc86f841d9a37/resolv.conf\\\" to rootfs at \\\"/etc/resolv.conf\\\": mount /var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/0538e73fb1f3bf28b7fe1a9064be9ff9cb444b95a28113ceee2dc86f841d9a37/resolv.conf:/etc/resolv.conf (via /proc/self/fd/6), flags: 0x5021: operation not permitted: unknown\"" pod="kube-system/etcd-kind-control-plane" podUID=f1916c7b1dd2b62f3784dbc1ff414e84
May 17 08:54:55 kind-control-plane containerd[125]: time="2023-05-17T08:54:55.827685723+02:00" level=warning msg="cleanup warnings time=\"2023-05-17T08:54:55+02:00\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=492 runtime=io.containerd.runc.v2\ntime=\"2023-05-17T08:54:55+02:00\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/2f4d7d5405b1fe2a892c4138836b8c71d2cfdefa8cd100f8aa150f3e1b885484/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
May 17 08:54:55 kind-control-plane containerd[125]: time="2023-05-17T08:54:55.828262839+02:00" level=error msg="copy shim log" error="read /proc/self/fd/24: file already closed"
May 17 08:54:55 kind-control-plane containerd[125]: time="2023-05-17T08:54:55.829300446+02:00" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-kind-control-plane,Uid:7383deaf095def706037bdaee8fbf8ea,Namespace:kube-system,Attempt:0,} failed, error" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting \"/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/2f4d7d5405b1fe2a892c4138836b8c71d2cfdefa8cd100f8aa150f3e1b885484/resolv.conf\" to rootfs at \"/etc/resolv.conf\": mount /var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/2f4d7d5405b1fe2a892c4138836b8c71d2cfdefa8cd100f8aa150f3e1b885484/resolv.conf:/etc/resolv.conf (via /proc/self/fd/6), flags: 0x5021: operation not permitted: unknown"
May 17 08:54:55 kind-control-plane kubelet[273]: E0517 08:54:55.829434     273 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting \"/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/2f4d7d5405b1fe2a892c4138836b8c71d2cfdefa8cd100f8aa150f3e1b885484/resolv.conf\" to rootfs at \"/etc/resolv.conf\": mount /var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/2f4d7d5405b1fe2a892c4138836b8c71d2cfdefa8cd100f8aa150f3e1b885484/resolv.conf:/etc/resolv.conf (via /proc/self/fd/6), flags: 0x5021: operation not permitted: unknown"
May 17 08:54:55 kind-control-plane kubelet[273]: E0517 08:54:55.829472     273 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting \"/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/2f4d7d5405b1fe2a892c4138836b8c71d2cfdefa8cd100f8aa150f3e1b885484/resolv.conf\" to rootfs at \"/etc/resolv.conf\": mount /var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/2f4d7d5405b1fe2a892c4138836b8c71d2cfdefa8cd100f8aa150f3e1b885484/resolv.conf:/etc/resolv.conf (via /proc/self/fd/6), flags: 0x5021: operation not permitted: unknown" pod="kube-system/kube-controller-manager-kind-control-plane"
May 17 08:54:55 kind-control-plane kubelet[273]: E0517 08:54:55.829494     273 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting \"/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/2f4d7d5405b1fe2a892c4138836b8c71d2cfdefa8cd100f8aa150f3e1b885484/resolv.conf\" to rootfs at \"/etc/resolv.conf\": mount /var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/2f4d7d5405b1fe2a892c4138836b8c71d2cfdefa8cd100f8aa150f3e1b885484/resolv.conf:/etc/resolv.conf (via /proc/self/fd/6), flags: 0x5021: operation not permitted: unknown" pod="kube-system/kube-controller-manager-kind-control-plane"
May 17 08:54:55 kind-control-plane kubelet[273]: E0517 08:54:55.829551     273 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-kind-control-plane_kube-system(7383deaf095def706037bdaee8fbf8ea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-kind-control-plane_kube-system(7383deaf095def706037bdaee8fbf8ea)\\\": rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting \\\"/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/2f4d7d5405b1fe2a892c4138836b8c71d2cfdefa8cd100f8aa150f3e1b885484/resolv.conf\\\" to rootfs at \\\"/etc/resolv.conf\\\": mount /var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/2f4d7d5405b1fe2a892c4138836b8c71d2cfdefa8cd100f8aa150f3e1b885484/resolv.conf:/etc/resolv.conf (via /proc/self/fd/6), flags: 0x5021: operation not permitted: unknown\"" pod="kube-system/kube-controller-manager-kind-control-plane" podUID=7383deaf095def706037bdaee8fbf8ea

kind-logs.zip

Environment:

  • kind version: (use kind version):
[opc@sws ~]$ kind version
kind v0.18.0 go1.20.2 linux/amd64
  • Runtime info: (use docker info or podman info):
[opc@sws ~]$ podman info
host:
  arch: amd64
  buildahVersion: 1.27.3
  cgroupControllers:
  - cpuset
  - cpu
  - io
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.4-1.module+el8.7.0+20930+90b24198.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.4, commit: 3922bff22a9c3ddaae27e66d280941f60a8b2554'
  cpuUtilization:
    idlePercent: 99.74
    systemPercent: 0.09
    userPercent: 0.17
  cpus: 128
  distribution:
    distribution: '"ol"'
    variant: server
    version: "8.7"
  eventLogger: file
  hostname: sws.swsnet.private-sandbox.net
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 5.15.0-101.103.2.1.el8uek.x86_64
  linkmode: dynamic
  logDriver: k8s-file
  memFree: 491621310464
  memTotal: 539922157568
  networkBackend: cni
  ociRuntime:
    name: runc
    package: runc-1.1.4-1.module+el8.7.0+20930+90b24198.x86_64
    path: /usr/bin/runc
    version: |-
      runc version 1.1.4
      spec: 1.0.2-dev
      go: go1.18.9
      libseccomp: 2.5.2
  os: linux
  remoteSocket:
    exists: true
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_NET_RAW,CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.0-2.module+el8.7.0+20930+90b24198.x86_64
    version: |-
      slirp4netns version 1.2.0
      commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
      libslirp: 4.4.0
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.2
  swapFree: 4294963200
  swapTotal: 4294963200
  uptime: 4h 3m 28.00s (Approximately 0.17 days)
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - container-registry.oracle.com
  - docker.io
store:
  configFile: /home/opc/.config/containers/storage.conf
  containerStore:
    number: 1
    paused: 0
    running: 1
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/opc/.local/share/containers/storage
  graphRootAllocated: 23040677117952
  graphRootUsed: 13011298762752
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 20
  runRoot: /run/user/1000/containers
  volumePath: /home/opc/.local/share/containers/storage/volumes
version:
  APIVersion: 4.2.0
  Built: 1677014962
  BuiltTime: Tue Feb 21 22:29:22 2023
  GitCommit: ""
  GoVersion: go1.18.9
  Os: linux
  OsArch: linux/amd64
  Version: 4.2.0
  • OS (e.g. from /etc/os-release):
[opc@sws ~]$ cat /etc/os-release 
NAME="Oracle Linux Server"
VERSION="8.7"
ID="ol"
ID_LIKE="fedora"
VARIANT="Server"
VARIANT_ID="server"
VERSION_ID="8.7"
PLATFORM_ID="platform:el8"
PRETTY_NAME="Oracle Linux Server 8.7"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:oracle:linux:8:7:server"
HOME_URL="https://linux.oracle.com/"
BUG_REPORT_URL="https://bugzilla.oracle.com/"

ORACLE_BUGZILLA_PRODUCT="Oracle Linux 8"
ORACLE_BUGZILLA_PRODUCT_VERSION=8.7
ORACLE_SUPPORT_PRODUCT="Oracle Linux"
ORACLE_SUPPORT_PRODUCT_VERSION=8.7
  • Kubernetes version: (use kubectl version):
[opc@sws ~]$ kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.1", GitCommit:"4c9411232e10168d7b050c49a1b59f6df9d7ea4b", GitTreeState:"clean", BuildDate:"2023-04-14T13:21:19Z", GoVersion:"go1.20.3", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v5.0.1
The connection to the server localhost:8080 was refused - did you specify the right host or port?
  • Any proxies or other special environment settings?:

The system uses the Oracle Unbreakable Enterprise Kernel instead of the Red Hat one:

[opc@sws ~]$ uname -a
Linux sws.swsnet.private-sandbox.net 5.15.0-101.103.2.1.el8uek.x86_64 #2 SMP Mon May 1 20:11:30 PDT 2023 x86_64 x86_64 x86_64 GNU/Linux

Next, the system has been switched from cgroups v1 to cgroups v2 with delegation/propagation. That works without issues in other containers.
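(Delegation can be double-checked roughly like this; a sketch assuming a standard systemd setup, where the delegated controller list should include at least cpu, memory and pids:)

# controllers delegated to the rootless user's systemd instance
cat /sys/fs/cgroup/user.slice/user-$(id -u).slice/user@$(id -u).service/cgroup.controllers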

Attached logs: kind-logs.zip

@aojea
Contributor

aojea commented May 17, 2023

Have you followed the instructions in https://kind.sigs.k8s.io/docs/user/rootless/?
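(For reference, the host preparation in that guide is roughly along these lines; this is a paraphrased sketch, so verify against the current page:)

# delegate cgroup controllers to the user's systemd instance
sudo mkdir -p /etc/systemd/system/user@.service.d
cat <<EOF | sudo tee /etc/systemd/system/user@.service.d/delegate.conf
[Service]
Delegate=yes
EOF
sudo systemctl daemon-reload

# ensure the iptables kernel modules are loaded at boot
cat <<EOF | sudo tee /etc/modules-load.d/iptables.conf
ip6_tables
ip6table_nat
ip_tables
iptable_nat
EOF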

@hadrabap
Author

Hello, thank you very much for the quick response!

I tried installing iptables-related modules manually:

[root@sws netfilter]# modprobe ip_tables
[root@sws netfilter]# modprobe ip6_tables
[root@sws netfilter]# modprobe iptable_nat
[root@sws netfilter]# modprobe ip6table_nat

Next, I did

DOCKER_HOST=unix://${XDG_RUNTIME_DIR}/podman/podman.sock KIND_EXPERIMENTAL_PROVIDER=podman kind create cluster -v 9999 --retain

with the same results.

Well, the iptables-related complaints vanished, but the overall situation is the same.

I'm attaching new logs: kind-logs2.zip

@aojea
Contributor

aojea commented May 17, 2023

containerd fails to create the pods:

May 17 10:04:40 kind-control-plane containerd[125]: time="2023-05-17T10:04:40.425885361+02:00" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-kind-control-plane,Uid:7383deaf095def706037bdaee8fbf8ea,Namespace:kube-system,Attempt:0,} failed, error" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting \"/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/9a8ceb3f28f5d6c1bc05b32c0dc0b9e20a70d24e9bc102b402b54bc1d4db4368/resolv.conf\" to rootfs at \"/etc/resolv.conf\": mount /var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/9a8ceb3f28f5d6c1bc05b32c0dc0b9e20a70d24e9bc102b402b54bc1d4db4368/resolv.conf:/etc/resolv.conf (via /proc/self/fd/6), flags: 0x5021: operation not permitted: unknown"

maybe a problem with the storage?

@AkihiroSuda does this ring a bell?

@hadrabap
Author

Sorry, I forgot to mention that all my filesystems are XFS only. If that helps…

@AkihiroSuda
Member

mount /var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/9a8ceb3f28f5d6c1bc05b32c0dc0b9e20a70d24e9bc102b402b54bc1d4db4368/resolv.conf:/etc/resolv.conf (via /proc/self/fd/6), flags: 0x5021: operation not permitted: unknown

This may work?

@juliobarreto

I also can't bring up the cluster as a normal user; I've tried everything and nothing works. With sudo or as root it works normally.
kind create cluster --name k8s-kind-cl.md

@hadrabap
Author

Hello friends!

runc#3805 has been merged into master. It looks like it is not intended for the 1.1 branch, but anyhow, that could not stop me from trying.

I built the master branch of runc at commit a6985522a6 and "patched" the official kindest/node image like this:

Containerfile:

FROM kindest/node:v1.27.2@sha256:3966ac761ae0136263ffdb6cfd4db23ef8a83cba8a463690e98317add2c9ba72

RUN rm -f /usr/local/sbin/runc
COPY runc /usr/local/sbin/
RUN chmod +x /usr/local/sbin/runc

build.sh:

#!/bin/bash

podman build --rm \
        -f Containerfile \
        --squash \
        -t kindest/node:v1.27.2-runc
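(For anyone reproducing this, the runc binary that the COPY instruction above picks up can be built roughly like this; a sketch assuming Go and the libseccomp development headers are installed, so check the Makefile targets in the checked-out tree:)

git clone https://github.com/opencontainers/runc.git
cd runc
git checkout a6985522a6
make static                      # statically linked ./runc binary
cp runc /path/to/build-context/  # place it next to the Containerfile above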

Finally, I created a cluster:

[opc@sws runc-test]$ KIND_EXPERIMENTAL_PROVIDER=podman kind create cluster --image localhost/kindest/node:v1.27.2-runc 
using podman due to KIND_EXPERIMENTAL_PROVIDER
enabling experimental podman provider
Creating cluster "kind" ...
 ✓ Ensuring node image (localhost/kindest/node:v1.27.2-runc) 🖼 
 ✓ Preparing nodes 📦  
 ✓ Writing configuration 📜 
 ✓ Starting control-plane 🕹️ 
 ✓ Installing CNI 🔌 
 ✓ Installing StorageClass 💾 
Set kubectl context to "kind-kind"
You can now use your cluster with:

kubectl cluster-info --context kind-kind

Thanks for using kind! 😊

and a single test:

[opc@sws runc-test]$ kubectl --context kind-kind get pods -A
NAMESPACE            NAME                                         READY   STATUS    RESTARTS   AGE
kube-system          coredns-5d78c9869d-p84p7                     1/1     Running   0          9s
kube-system          coredns-5d78c9869d-q5zgz                     1/1     Running   0          9s
kube-system          etcd-kind-control-plane                      1/1     Running   0          25s
kube-system          kindnet-x8s45                                1/1     Running   0          10s
kube-system          kube-apiserver-kind-control-plane            1/1     Running   0          23s
kube-system          kube-controller-manager-kind-control-plane   1/1     Running   0          23s
kube-system          kube-proxy-f8lc9                             1/1     Running   0          10s
kube-system          kube-scheduler-kind-control-plane            1/1     Running   0          23s
local-path-storage   local-path-provisioner-6bc4bddd6b-hvfxg      1/1     Running   0          9s

shows the cluster is up and ready.

When I take a look at the events, there are only warnings (apart from the normal ones) complaining about DNS:

kube-system          2m54s (x4 over 2m57s)   Warning   DNSConfigForming          Pod/kube-controller-manager-kind-control-plane   Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 10.89.0.1 fc00:f853:ccd:e793::1 192.168.1.10

So far, so good.

I hope this helps somebody.

Thank you!

@hadrabap
Author

hadrabap commented Sep 3, 2023

Hello friends!

Are there any plans or tactics for getting this resolved?

Thank you.

@BenTheElder
Member

Sorry ... I don't work with podman regularly (the Kubernetes project requires docker for development), so this is something we're looking for contributors to help maintain. It is very time-consuming to debug issues with arbitrary Linux environments.

Thankfully, you've done that part, but it has stopped moving forward because the fix is only in runc 1.2.x, which is still unreleased.

We take normal runc updates regularly.

https://github.com/opencontainers/runc/pull/3805/commits => opencontainers/runc@da780e4

This commit is only in the 1.2.x RCs, so it will be a while before we take it. We do not wish to make existing stable systems unstable.

I would recommend using ext4 to run containers, especially if you're going to do container-in-container. There have been a LOT of problems with detecting filesystem info, mounts, etc. that are not limited to code in this repo or runc, and sticking to the most widely used tools (docker, ext4) is the most reliable path. You can see a number of other issues in the tracker where other filesystems caused problems for kubelet etc.
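(A quick way to confirm which filesystem backs the rootless Podman graph root; a sketch using the graphRoot path from the podman info output above:)

# filesystem type backing the overlay graph root
df -T /home/opc/.local/share/containers/storage
# or ask podman directly
podman info --format '{{ .Store.GraphDriverName }} on {{ .Store.GraphRoot }}'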
