
containerd and crio should default to --network-plugin-cni --enable-default-cni #3567

Closed
tstromberg opened this issue Jan 22, 2019 · 2 comments
Labels: co/runtime/containerd, co/runtime/crio, good first issue, help wanted, kind/bug, priority/backlog

@tstromberg (Contributor)

With v0.33.1, running:

minikube start --vm-driver=kvm2 --container-runtime=containerd

with or without --network-plugin=cni, CoreDNS is stuck in ContainerCreating:

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                   READY     STATUS              RESTARTS   AGE
default       foo-56c8f67c67-58gfk                   0/1       Pending             0          6m
kube-system   coredns-86c58d9df4-2nh9j               0/1       ContainerCreating   0          6m
kube-system   coredns-86c58d9df4-k847v               0/1       ContainerCreating   0          6m
kube-system   etcd-minikube                          1/1       Running             0          5m
kube-system   kube-addon-manager-minikube            1/1       Running             0          5m
kube-system   kube-apiserver-minikube                1/1       Running             0          5m
kube-system   kube-controller-manager-minikube       1/1       Running             0          5m
kube-system   kube-proxy-t5nlj                       1/1       Running             0          6m
kube-system   kube-scheduler-minikube                1/1       Running             0          5m
kube-system   kubernetes-dashboard-ccc79bfc9-679n5   0/1       Pending             0          2m
kube-system   storage-provisioner                    0/1       Pending             0          6m

$ kubectl describe pod coredns-86c58d9df4-k847v -n kube-system
Name:               coredns-86c58d9df4-k847v
Namespace:          kube-system
Priority:           0
PriorityClassName:  <none>
Node:               minikube/192.168.122.227
Start Time:         Tue, 22 Jan 2019 10:33:51 -0800
Labels:             k8s-app=kube-dns
                    pod-template-hash=86c58d9df4
Annotations:        <none>
Status:             Pending
IP:                 
Controlled By:      ReplicaSet/coredns-86c58d9df4
Containers:
  coredns:
    Container ID:  
    Image:         k8s.gcr.io/coredns:1.2.6
    Image ID:      
    Ports:         53/UDP, 53/TCP, 9153/TCP
    Host Ports:    0/UDP, 0/TCP, 0/TCP
    Args:
      -conf
      /etc/coredns/Corefile
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Limits:
      memory:  170Mi
    Requests:
      cpu:        100m
      memory:     70Mi
    Liveness:     http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
    Environment:  <none>
    Mounts:
      /etc/coredns from config-volume (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from coredns-token-rgjp8 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      coredns
    Optional:  false
  coredns-token-rgjp8:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  coredns-token-rgjp8
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     CriticalAddonsOnly
                 node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason           Age                From               Message
  ----     ------           ----               ----               -------
  Normal   Scheduled        6m                 default-scheduler  Successfully assigned kube-system/coredns-86c58d9df4-k847v to minikube
  Warning  NetworkNotReady  1m (x152 over 6m)  kubelet, minikube  network is not ready: [runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized]

This works perfectly:

minikube start --vm-driver=kvm2 --container-runtime=containerd --network-plugin=cni --enable-default-cni

If --container-runtime=containerd (or crio) is specified, minikube should default to --network-plugin=cni --enable-default-cni so that networking works out of the box and the user experience improves.
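A minimal sketch of what such defaulting could look like, assuming a helper that runs after flag parsing. All names here (`defaultNetworkFlags` and its parameters) are illustrative, not minikube's actual code:

```go
package main

import "fmt"

// defaultNetworkFlags is a hypothetical helper: when a non-Docker runtime
// (containerd or CRI-O) is selected and the user did not explicitly set the
// network flags, default to the CNI plugin with the bundled default config.
func defaultNetworkFlags(runtime, networkPlugin string, enableDefaultCNI, userSetCNI bool) (string, bool) {
	if runtime == "containerd" || runtime == "crio" {
		if networkPlugin == "" {
			networkPlugin = "cni"
		}
		if !userSetCNI {
			enableDefaultCNI = true
		}
	}
	return networkPlugin, enableDefaultCNI
}

func main() {
	plugin, cni := defaultNetworkFlags("containerd", "", false, false)
	fmt.Println(plugin, cni) // cni true
}
```

The key design point is only filling in values the user left unset, so explicit flags still win.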

tstromberg added the kind/bug, co/runtime/containerd, priority/backlog, help wanted, and good first issue labels on Jan 22, 2019
tstromberg changed the title from "containerd w/o enable-default-cni: coredns Pending: runtime network not ready: cni plugin not initialized" to "containerd and crio should default to --network-plugin-cni --enable-default-cni" on Jan 25, 2019
tstromberg added the co/runtime/crio label on Jan 25, 2019
@afbjorklund (Collaborator)

We could probably also de-emphasize the socket overrides in the docs (the "extended version")?
With the latest Kubernetes checks, stating which CRI is being used is now mandatory anyway...

https://github.com/kubernetes/minikube/blob/master/docs/alternative_runtimes.md

    --cri-socket=/var/run/crio/crio.sock \
    --extra-config=kubelet.container-runtime=remote \
    --extra-config=kubelet.container-runtime-endpoint=unix:///var/run/crio/crio.sock \
    --extra-config=kubelet.image-service-endpoint=unix:///var/run/crio/crio.sock

Same story there: the defaults are good enough.
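The socket overrides above follow a fixed pattern per runtime, so they could be derived rather than typed by hand. A sketch, assuming a hypothetical `criExtraConfig` helper (the socket path comes from the docs quoted above; the function name is illustrative):

```go
package main

import "fmt"

// criExtraConfig is a hypothetical helper that derives the kubelet
// extra-config flags from a runtime's CRI socket path, matching the
// pattern shown in minikube's alternative_runtimes docs.
func criExtraConfig(socket string) []string {
	return []string{
		"--extra-config=kubelet.container-runtime=remote",
		"--extra-config=kubelet.container-runtime-endpoint=unix://" + socket,
		"--extra-config=kubelet.image-service-endpoint=unix://" + socket,
	}
}

func main() {
	for _, flag := range criExtraConfig("/var/run/crio/crio.sock") {
		fmt.Println(flag)
	}
}
```

With something like this, passing --container-runtime=crio alone would be enough, since the three kubelet flags are a pure function of the socket path.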

@afbjorklund (Collaborator)

@tstromberg : this feature would be nice to have in 1.0
