BUG: brief description of the bug #4769

Open · zhengyazhao opened this issue Jun 5, 2024 · 1 comment
Labels: kind/bug (Something isn't working), response-expired

@zhengyazhao
Sealos Version

4.3.7

How to reproduce the bug?

1. Install sealos with yum
2. Create a single-node cluster:
3. sealos run registry.cn-shanghai.aliyuncs.com/labring/kubernetes:v1.27.7 registry.cn-shanghai.aliyuncs.com/labring/helm:v3.9.4 registry.cn-shanghai.aliyuncs.com/labring/cilium:v1.13.4 --single
4. OS: CentOS 7.9

What is the expected behavior?

The cluster runs normally.

What do you see instead?

[root@k8smaster ~]# kubectl get po -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system cilium-22szr 0/1 Init:CrashLoopBackOff 20 (24s ago) 78m
kube-system cilium-operator-86666d88cb-dnfhx 1/1 Running 0 78m
kube-system coredns-5d78c9869d-ggk9p 0/1 Pending 0 78m
kube-system coredns-5d78c9869d-qgd2c 0/1 Pending 0 78m
kube-system etcd-k8smaster 1/1 Running 12 78m
kube-system kube-apiserver-k8smaster 1/1 Running 12 78m
kube-system kube-controller-manager-k8smaster 1/1 Running 12 78m
kube-system kube-proxy-pfngx 1/1 Running 0 78m
kube-system kube-scheduler-k8smaster 1/1 Running 12 78m

Checking the logs:
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
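The coredns pods stay Pending because the node still carries the not-ready taint, which in turn comes from the crash-looping cilium init container. A minimal way to see which init container is failing and why; the commands below are a diagnostic sketch based on the pod name shown above, not output from the original report:

# Show which init container is in CrashLoopBackOff and its last state / exit code
kubectl -n kube-system describe pod cilium-22szr
# Dump logs from all containers (including init containers), prefixed with the container name
kubectl -n kube-system logs cilium-22szr --all-containers=true --prefix=true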

kubelet:

I0605 20:23:00.421613 5697 server.go:415] "Kubelet version" kubeletVersion="v1.27.7"
I0605 20:23:00.421677 5697 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0605 20:23:00.421895 5697 server.go:578] "Standalone mode, no API client"
I0605 20:23:00.422011 5697 container_manager_linux.go:802] "CPUAccounting not enabled for process" pid=5697
I0605 20:23:00.422020 5697 container_manager_linux.go:805] "MemoryAccounting not enabled for process" pid=5697
I0605 20:23:00.426873 5697 server.go:466] "No api server defined - no events will be sent to API server"
I0605 20:23:00.426888 5697 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
I0605 20:23:00.427421 5697 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
I0605 20:23:00.427488 5697 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]}
I0605 20:23:00.427515 5697 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
I0605 20:23:00.427529 5697 container_manager_linux.go:301] "Creating device plugin manager"
I0605 20:23:00.427572 5697 state_mem.go:36] "Initialized new in-memory state store"
I0605 20:23:00.431001 5697 kubelet.go:411] "Kubelet is running in standalone mode, will skip API server sync"
I0605 20:23:00.431434 5697 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.15" apiVersion="v1"
I0605 20:23:00.431651 5697 volume_host.go:75] "KubeClient is nil. Skip initialization of CSIDriverLister"
W0605 20:23:00.432240 5697 csi_plugin.go:189] kubernetes.io/csi: kubeclient not set, assuming standalone kubelet
W0605 20:23:00.432259 5697 csi_plugin.go:266] Skipping CSINode initialization, kubelet running in standalone mode
E0605 20:23:00.432366 5697 safe_sysctls.go:62] "Kernel version is too old, dropping net.ipv4.ip_local_reserved_ports from safe sysctl list" kernelVersion="3.10.0"
I0605 20:23:00.432500 5697 server.go:1168] "Started kubelet"
I0605 20:23:00.432828 5697 kubelet.go:1548] "No API server defined - no node status update will be sent"
I0605 20:23:00.432922 5697 server.go:194] "Starting to listen read-only" address="0.0.0.0" port=10255
I0605 20:23:00.434664 5697 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
I0605 20:23:00.434729 5697 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
I0605 20:23:00.434909 5697 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
E0605 20:23:00.435607 5697 server.go:794] "Failed to start healthz server" err="listen tcp 127.0.0.1:10248: bind: address already in use"
E0605 20:23:00.435846 5697 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
I0605 20:23:00.435859 5697 server.go:461] "Adding debug handlers to kubelet server"
E0605 20:23:00.435879 5697 kubelet.go:1400] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
I0605 20:23:00.436149 5697 volume_manager.go:284] "Starting Kubelet Volume Manager"
I0605 20:23:00.436339 5697 desired_state_of_world_populator.go:145] "Desired state populator starts to run"
E0605 20:23:00.436476 5697 server.go:179] "Failed to listen and serve" err="listen tcp 0.0.0.0:10250: bind: address already in use"
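The kubelet log above ends with bind failures on ports 10248 and 10250, and this kubelet instance is running in standalone mode ("No API server defined"), which usually means another kubelet process already owns those ports. A quick check, commands assumed for diagnosis rather than taken from the original report:

# Which process already holds the kubelet ports?
ss -lntp | grep -E '10248|10250'
# Is kubelet running more than once (e.g. started by systemd and again by hand)?
ps -ef | grep '[k]ubelet'
systemctl status kubelet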

crictl:
[root@k8smaster ~]# crictl ps
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
bde21b3a0d030 9dd45d0a36c9e About an hour ago Running cilium-operator 0 b302f86dd95a2 cilium-operator-86666d88cb-dnfhx
a6df0160e0dde 21dd3d6f9c60d About an hour ago Running kube-proxy 0 93053c30c7029 kube-proxy-pfngx
bee6716da8735 58cbecfde1998 About an hour ago Running kube-controller-manager 12 e1a8c7a48c7d8 kube-controller-manager-k8smaster
3a1adb4be83ec 44e520c7a8226 About an hour ago Running kube-apiserver 12 40c7e5974affe kube-apiserver-k8smaster
23124a7bc7e7f c8c40891e65bd About an hour ago Running kube-scheduler 12 d6c6d6a8e48cf kube-scheduler-k8smaster
cd68f2af5bed6 73deb9a3f7025 About an hour ago Running etcd 12 f57813da9d5f0 etcd-k8smaster

crictl images:
[root@k8smaster ~]# crictl images
IMAGE TAG IMAGE ID SIZE
sealos.hub:5000/cilium/cilium v1.13.4 d00a7abfa71a6 174MB
sealos.hub:5000/cilium/operator v1.13.4 9dd45d0a36c9e 30.8MB
sealos.hub:5000/coredns/coredns v1.10.1 ead0a4a53df89 16.2MB
sealos.hub:5000/etcd 3.5.9-0 73deb9a3f7025 103MB
sealos.hub:5000/kube-apiserver v1.27.7 44e520c7a8226 33.5MB
sealos.hub:5000/kube-controller-manager v1.27.7 58cbecfde1998 31MB
sealos.hub:5000/kube-proxy v1.27.7 21dd3d6f9c60d 23.9MB
sealos.hub:5000/kube-scheduler v1.27.7 c8c40891e65bd 18.2MB
sealos.hub:5000/pause 3.9 e6f1816883972 319kB
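One more detail worth noting from the kubelet log: kernelVersion="3.10.0", i.e. the stock CentOS 7.9 kernel. Per the upstream Cilium system requirements, v1.13 expects a considerably newer kernel (roughly >= 4.19, or RHEL 8's 4.18), so verifying the kernel is a quick sanity check before digging further. This is a sketch, not output from the original report:

# Confirm the running kernel; stock CentOS 7.9 ships 3.10.0-*
uname -r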

Operating environment

- Sealos version:
- Docker version:
- Kubernetes version:
- Operating system:
- Runtime environment:
- Cluster size:
- Additional information:

Additional information

No response

@zhengyazhao added the kind/bug label on Jun 5, 2024

stale bot commented Aug 4, 2024

This issue has been automatically closed because we haven't heard back for more than 60 days, please reopen this issue if necessary.
