
BUG: container_linux.go:318: starting container process caused "process_linux.go:281: applying cgroup configuration for process caused \"No such device or address\"" #4836

Closed
SupRenekton opened this issue Jul 2, 2024 · 11 comments
Labels
kind/bug Something isn't working response-expired

Comments

@SupRenekton

Sealos Version

sealos_4.3.7_linux_arm64.tar.gz

How to reproduce the bug?

Domestic (Chinese) Linux system: 4.19.90-24.4.v2101.ky10.aarch64 #1 SMP Mon May 24 14:45:37 CST 2021 aarch64 aarch64 aarch64 GNU/Linux

sealos gen labring/kubernetes:v1.25.16 \
  labring/helm:v3.13.2 \
  labring/calico:v3.24.6 \
  --masters x.x.x.x,x.x.x.x,x.x.x.x --output Clusterfile

The Clusterfile was not modified; it was applied directly: sealos apply -f Clusterfile

After the k8s cluster came up, only the coredns pods failed to start and remained in CrashLoopBackOff.
Error message:
failed to create containerd task: failed to create shim task : OCI runtime create failed: container_linux.go:318: starting container process caused "process_linux.go:281: applying cgroup configuration for process caused "No such device or address""

What is the expected behavior?

No response

What do you see instead?

Both coredns pods are in CrashLoopBackOff; all other pods are healthy.
failed to create containerd task: failed to create shim task : OCI runtime create failed: container_linux.go:318: starting container process caused "process_linux.go:281: applying cgroup configuration for process caused "No such device or address""
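
For reference, a sketch of commands that could be used to inspect the failing pods (namespace and naming assume the default kubeadm-style coredns deployment):

kubectl -n kube-system get pods -o wide | grep coredns     # both coredns pods show CrashLoopBackOff
kubectl -n kube-system describe pod <coredns-pod>          # Events section repeats the OCI runtime error above
crictl ps -a | grep coredns                                # on a master node, lists the failed containers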

Operating environment

- Sealos version: sealos_4.3.7_linux_arm64.tar.gz
- Docker version: none
- Kubernetes version: kubernetes:v1.25.16
- Operating system: 4.19.90-24.4.v2101.ky10.aarch64
- Runtime environment: 
- Cluster size: 3 master nodes
- Additional information:
  labring/helm:v3.13.2
  labring/calico:v3.24.6

Additional information

No response

@SupRenekton SupRenekton added the kind/bug Something isn't working label Jul 2, 2024
@bxy4543
Member

bxy4543 commented Jul 2, 2024

Before installing, check whether runc is already installed on the nodes.
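
For reference, a quick way to perform that check might look like the following (the runc package name and an RPM-based Kylin V10 are assumptions):

command -v runc && runc --version     # any runc binary already on PATH
rpm -qa | grep -i runc                # runc installed via the distribution's package manager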

@SupRenekton
Author

Before installing, check whether runc is already installed on the nodes.

containerd is present. I have since changed containerd's cgroup configuration and restarted containerd, and coredns is now running normally. It is a bit strange, though: kubelet's cgroup driver is supposed to match the runtime's cgroup driver. On my three Kylin systems kubelet uses the systemd cgroup driver, yet things only started working after I set containerd's SystemdCgroup = false. I wonder whether the domestic Kylin system has some bug here.
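
For context, the change described above roughly amounts to the following (the file path and key location assume containerd's default CRI config layout, not verified against the exact config sealos generates):

grep -n 'SystemdCgroup' /etc/containerd/config.toml     # key lives under [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
sed -i 's/SystemdCgroup = true/SystemdCgroup = false/' /etc/containerd/config.toml
systemctl restart containerd
kubectl -n kube-system get pods | grep coredns          # pods should leave CrashLoopBackOff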

@bxy4543
Member

bxy4543 commented Jul 3, 2024

Before installing, check whether runc is already installed on the nodes.

containerd is present. I have since changed containerd's cgroup configuration and restarted containerd, and coredns is now running normally. It is a bit strange, though: kubelet's cgroup driver is supposed to match the runtime's cgroup driver. On my three Kylin systems kubelet uses the systemd cgroup driver, yet things only started working after I set containerd's SystemdCgroup = false. I wonder whether the domestic Kylin system has some bug here.

This is caused by runc already being installed on the system; it needs to be uninstalled first. Newer versions should detect this before installation.
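
A minimal sketch of that cleanup, assuming the conflicting runc came from Kylin's yum/RPM packaging (a hypothetical; adjust to however it was actually installed):

yum remove -y runc                    # drop the distro-provided runc
command -v runc && echo "a leftover runc binary is still on PATH; remove it manually"
# then re-run the sealos installation so it can ship its own runtime binaries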

@fengxsong
Collaborator

Domestic (Chinese) distributions can only use cgroupfs; at least that has been the case every time I've run into this.
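
A sketch of how one might verify that kubelet and containerd agree on the cgroup driver on such a system (file paths assume a kubeadm/sealos-style layout):

grep -i cgroupDriver /var/lib/kubelet/config.yaml       # kubelet's driver: systemd or cgroupfs
grep -n 'SystemdCgroup' /etc/containerd/config.toml     # containerd's driver for the runc runtime
# If the distribution only tolerates cgroupfs, set cgroupDriver: cgroupfs for kubelet and
# SystemdCgroup = false for containerd, then restart both services.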

@cuisongliu
Collaborator

Domestic (Chinese) distributions can only use cgroupfs; at least that has been the case every time I've run into this.

+1

stale bot commented Sep 10, 2024

This issue has been automatically closed because we haven't heard back for more than 60 days, please reopen this issue if necessary.

@stale stale bot closed this as completed Nov 10, 2024