
All pods stuck in Pending #271

Closed
@yueyuan4

Description

Hi, I ran the following commands in order (a quick storage-class check is included after the list):

kubectl apply -f configure/docker-storageclass-broker.yml
kubectl apply -f configure/docker-storageclass-zookeeper.yml
kubectl apply -f 00-namespace.yml
kubectl apply -f rbac-namespace-default/
kubectl apply -f zookeeper/
kubectl apply -f kafka/
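
As a sanity check that the two storage classes from those manifests were actually created (listing all classes here, since I am not certain of the exact class names the PVC templates reference):

kubectl get storageclass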

I found that all pods were stuck in Pending:

kubectl -n kafka get all

NAME          READY   STATUS    RESTARTS   AGE
pod/kafka-0   0/1     Pending   0          7m27s
pod/kafka-1   0/1     Pending   0          7m27s
pod/kafka-2   0/1     Pending   0          7m27s
pod/pzoo-0    0/1     Pending   0          7m35s
pod/pzoo-1    0/1     Pending   0          7m35s
pod/pzoo-2    0/1     Pending   0          7m35s
pod/zoo-0     0/1     Pending   0          7m35s
pod/zoo-1     0/1     Pending   0          7m35s

NAME                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
service/bootstrap   ClusterIP   10.103.133.150   <none>        9092/TCP            7m27s
service/broker      ClusterIP   None             <none>        9092/TCP            7m27s
service/pzoo        ClusterIP   None             <none>        2888/TCP,3888/TCP   7m35s
service/zoo         ClusterIP   None             <none>        2888/TCP,3888/TCP   7m35s
service/zookeeper   ClusterIP   10.104.4.80      <none>        2181/TCP            7m35s

NAME                     DESIRED   CURRENT   AGE
statefulset.apps/kafka   3         3         7m27s
statefulset.apps/pzoo    3         3         7m35s
statefulset.apps/zoo     2         2         7m35s

kubectl -n kafka describe pod zoo-0

Name:               zoo-0
Namespace:          kafka
Priority:           0
PriorityClassName:  <none>
Node:               <none>
Labels:             app=zookeeper
                    controller-revision-hash=zoo-7d44fdcc4b
                    statefulset.kubernetes.io/pod-name=zoo-0
                    storage=persistent-regional
Annotations:        <none>
Status:             Pending
IP:                 
Controlled By:      StatefulSet/zoo
Init Containers:
  init-config:
    Image:      solsson/kafka-initutils@sha256:f6d9850c6c3ad5ecc35e717308fddb47daffbde18eb93e98e031128fe8b899ef
    Port:       <none>
    Host Port:  <none>
    Command:
      /bin/bash
      /etc/kafka-configmap/init.sh
    Environment:
      ID_OFFSET:  4
    Mounts:
      /etc/kafka from config (rw)
      /etc/kafka-configmap from configmap (rw)
      /var/lib/zookeeper from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-zrcnv (ro)
Containers:
  zookeeper:
    Image:       solsson/kafka:2.2.0@sha256:cf048d6211b6b48f1783f97cb41add511386e2f0a5f5c62fa0eee9564dcd3e9a
    Ports:       2181/TCP, 2888/TCP, 3888/TCP
    Host Ports:  0/TCP, 0/TCP, 0/TCP
    Command:
      ./bin/zookeeper-server-start.sh
      /etc/kafka/zookeeper.properties
    Limits:
      memory:  120Mi
    Requests:
      cpu:      10m
      memory:   100Mi
    Readiness:  exec [/bin/sh -c [ "imok" = "$(echo ruok | nc -w 1 -q 1 127.0.0.1 2181)" ]] delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      KAFKA_LOG4J_OPTS:  -Dlog4j.configuration=file:/etc/kafka/log4j.properties
    Mounts:
      /etc/kafka from config (rw)
      /var/lib/zookeeper from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-zrcnv (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  data-zoo-0
    ReadOnly:   false
  configmap:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      zookeeper-config
    Optional:  false
  config:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:  
  default-token-zrcnv:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-zrcnv
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                    From               Message
  ----     ------            ----                   ----               -------
  Warning  FailedScheduling  9m1s (x10 over 9m22s)  default-scheduler  pod has unbound immediate PersistentVolumeClaims
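
The FailedScheduling event suggests the PersistentVolumeClaims never bound. If it helps with diagnosis, the claim and storage-class status can be inspected with the commands below (data-zoo-0 is the claim name taken from the Volumes section above):

kubectl -n kafka get pvc
kubectl -n kafka describe pvc data-zoo-0
kubectl get storageclass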

Thank you for your help.
