
Not able to start the minikube even after installing cri-dockerd #15407

Closed
vpuliyal opened this issue Nov 25, 2022 · 14 comments
Labels
  • co/none-driver
  • co/runtime/docker: Issues specific to a docker runtime
  • kind/bug: Categorizes issue or PR as related to a bug.
  • lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
  • priority/awaiting-more-evidence: Lowest priority. Possibly useful, but not yet enough support to actually get it done.

Comments

@vpuliyal

What Happened?

sudo -E /usr/local/bin/minikube start --driver=none

  • minikube v1.28.0 on Redhat 8.6 (ppc64le)
    ! Both driver=none and vm-driver=none have been set.

    Since vm-driver is deprecated, minikube will default to driver=none.

    If vm-driver is set in the global config, please run "minikube config unset vm-driver" to resolve this warning.

  • Using the none driver based on user configuration

  • Starting control plane node minikube in cluster minikube

  • Running on localhost (CPUs=48, Memory=63349MB, Disk=71645MB) ...

  • Exiting due to NOT_FOUND_CRI_DOCKERD:

  • Suggestion:

    The none driver with Kubernetes v1.24+ and the docker container-runtime requires cri-dockerd.

    Please install cri-dockerd using these instructions:

    https://github.com/Mirantis/cri-dockerd#build-and-install

I installed cri-dockerd following the instructions at https://github.com/Mirantis/cri-dockerd.
[root@cloudalplp1 home]# which cri-dockerd
/usr/local/bin/cri-dockerd
########
I'm also attaching the minikube log file:
log.txt

Attach the log file

log.txt

Operating System

Redhat/Fedora

Driver

No response

@afbjorklund
Collaborator

afbjorklund commented Nov 25, 2022

This is a quirk in RHEL7 and RHEL8 (I think it was fixed in RHEL9?): sudo's secure_path does not include /usr/local/bin, so you have to install the binary as /usr/bin/cri-dockerd.

So sudo which cri-dockerd fails. The check needs to take this into account, just as it already does for crictl at the moment...
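A minimal sketch of how to confirm and work around it (the paths and the secure_path detail below are typical RHEL defaults I am assuming, not something taken from this report):

sudo grep secure_path /etc/sudoers   # on RHEL 7/8 this usually omits /usr/local/bin
which cri-dockerd                    # e.g. /usr/local/bin/cri-dockerd
sudo cp /usr/local/bin/cri-dockerd /usr/bin/cri-dockerd
sudo which cri-dockerd               # should now print /usr/bin/cri-dockerd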

afbjorklund added the kind/bug, co/none-driver, co/runtime/docker, and priority/awaiting-more-evidence labels on Nov 25, 2022
@vpuliyal
Author

]# sudo -E /usr/local/bin/minikube start --driver=none --container-runtime=''

  • minikube v1.28.0 on Redhat 8.6 (ppc64le)
    ! Both driver=none and vm-driver=none have been set.

    Since vm-driver is deprecated, minikube will default to driver=none.

    If vm-driver is set in the global config, please run "minikube config unset vm-driver" to resolve this warning.

  • Using the none driver based on user configuration

  • Starting control plane node minikube in cluster minikube

  • Running on localhost (CPUs=48, Memory=63349MB, Disk=71645MB) ...

  • OS release is Red Hat Enterprise Linux 8.6 (Ootpa)

X Exiting due to RUNTIME_ENABLE: Temporary Error: sudo crictl version: exit status 1
stdout:

stderr:
time="2022-11-25T02:29:37-06:00" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix /var/run/cri-dockerd.sock: connect: connection refused""

* If the above advice does not help, please let us know:
  https://github.com/kubernetes/minikube/issues/new/choose
* Please run minikube logs --file=logs.txt and attach logs.txt to the GitHub issue.

[root@cloud cri-dockerd]# sudo crictl version
FATA[0000] unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix /var/run/cri-dockerd.sock: connect: connection refused"
[root@cloud cri-dockerd]#

sudo which cri-dockerd

/bin/cri-dockerd

@afbjorklund
Collaborator

afbjorklund commented Nov 25, 2022

This needs to be working, before minikube can run: sudo crictl version

You might get some more details from sudo systemctl status cri-docker
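If the service turns out to be running but crictl still cannot connect, one more thing worth checking - an assumption on my side, since minikube normally configures this itself - is that crictl is pointed at the cri-dockerd socket, for example in /etc/crictl.yaml:

# /etc/crictl.yaml
runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
timeout: 10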

@vpuliyal
Author

sudo crictl version

FATA[0000] unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix /var/run/cri-dockerd.sock: connect: connection refused"

sudo systemctl status cri-docker

● cri-docker.service - CRI Interface for Docker Application Container Engine
Loaded: loaded (/etc/systemd/system/cri-docker.service; enabled; vendor preset: disabled)
Active: inactive (dead)
Docs: https://docs.mirantis.com

@afbjorklund
Collaborator

afbjorklund commented Nov 25, 2022

Maybe need to look at the "cri-docker.socket" unit, or attempt a manual "sudo systemctl start cri-docker.service" to see the error.

Maybe even run cri-dockerd --log-level debug, but installing it is supposed to be enough - minikube will start cri-dockerd itself.
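For reference, a sketch of the systemd steps from the cri-dockerd install instructions (assuming the cri-docker.service and cri-docker.socket unit files were copied into /etc/systemd/system, as the status output above suggests):

sudo systemctl daemon-reload
sudo systemctl enable cri-docker.service
sudo systemctl enable --now cri-docker.socket

# then verify
sudo systemctl status cri-docker.socket cri-docker.service
sudo crictl version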

@vpuliyal
Author

]# sudo systemctl start cri-docker.service
cri-dockerd]# cri-dockerd --log-level debug
INFO[0000] Connecting to docker on the Endpoint unix:///var/run/docker.sock
INFO[0000] Start docker client with request timeout 0s
INFO[0000] Hairpin mode is set to none
DEBU[0000] Unable to update cni config: no networks found in /etc/cni/net.d
DEBU[0000] Unable to update cni config: no networks found in /etc/cni/net.d
INFO[0000] Loaded network plugin cni
INFO[0000] Docker cri networking managed by network plugin cni
DEBU[0000] Unable to update cni config: no networks found in /etc/cni/net.d
INFO[0000] Docker Info: &{ID:H4CP:YOUO:V267:C4UK:UDBB:NFRJ:OQYP:IFQQ:EP6Q:ZSIL:VVGB:OZYO Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem xfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:37 SystemTime:2022-11-25T03:05:30.248310714-06:00 LoggingDriver:json-file CgroupDriver:cgroupfs CgroupVersion:1 NEventsListener:0 KernelVersion:4.18.0-372.9.1.el8.ppc64le OperatingSystem:Red Hat Enterprise Linux 8.6 (Ootpa) OSVersion:8.6 OSType:linux Architecture:ppc64le IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc00025d340 NCPU:48 MemTotal:66426699776 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:cloudalplp1.aus.stglabs.ibm.com Labels:[] ExperimentalBuild:false ServerVersion:v20.10.21 ClusterStore: ClusterAdvertise: Runtimes:map[io.containerd.runc.v2:{Path:runc Args:[] Shim:} io.containerd.runtime.v1.linux:{Path:runc Args:[] Shim:} runc:{Path:runc Args:[] Shim:}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster: Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:770bd0108c32f3fb5c73ae1264f7e503fe7b2661 Expected:770bd0108c32f3fb5c73ae1264f7e503fe7b2661} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: DefaultAddressPools:[] Warnings:[]}
INFO[0000] Setting cgroupDriver cgroupfs
INFO[0000] Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}
INFO[0000] Starting the GRPC backend for the Docker CRI interface.
INFO[0000] Start cri-dockerd grpc backend
DEBU[0000] init pid ns is "pid:[4026531836]"
DEBU[0000] Pid 59487 pid ns is "pid:[4026531836]"
DEBU[0000] attempting to apply oom_score_adj of -999 to pid 59487
DEBU[0000] init pid ns is "pid:[4026531836]"
DEBU[0000] Pid 4832 pid ns is "pid:[4026531836]"
DEBU[0000] attempting to apply oom_score_adj of -999 to pid 4832
DEBU[0005] Unable to update cni config: no networks found in /etc/cni/net.d
DEBU[0010] Unable to update cni config: no networks found in /etc/cni/net.d
DEBU[0015] Unable to update cni config: no networks found in /etc/cni/net.d
DEBU[0020] Unable to update cni config: no networks found in /etc/cni/net.d

@afbjorklund
Collaborator

afbjorklund commented Nov 25, 2022

Seems happy enough, except that you will also need to configure CNI networks (minikube can do this for you, with the --enable-default-cni flag).

It is normal to get such output about a missing network until the cluster has been configured and is running.
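For illustration, the flag would be added to the start command used earlier, roughly like this (just a sketch; newer minikube releases spell the same thing --cni=bridge, and as it turned out the cluster below also came up without it):

sudo -E /usr/local/bin/minikube start --driver=none --enable-default-cni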

@vpuliyal
Author

@afbjorklund thanks. It's working now

sudo -E /usr/local/bin/minikube start --driver=none --container-runtime=''

  • minikube v1.28.0 on Redhat 8.6 (ppc64le)
  • Using the none driver based on existing profile
  • Starting control plane node minikube in cluster minikube
  • Restarting existing none bare metal machine for "minikube" ...
  • OS release is Red Hat Enterprise Linux 8.6 (Ootpa)

    kubeadm.sha256: 64 B / 64 B [-------------------------] 100.00% ? p/s 0s
    kubectl: 41.94 MiB / 41.94 MiB [-----------] 100.00% 80.59 MiB p/s 700ms
    kubeadm: 40.88 MiB / 40.88 MiB [------------] 100.00% 21.64 MiB p/s 2.1s

    • Generating certificates and keys ...
    • Booting up control plane ...
    • Configuring RBAC rules ...
  • Configuring local host environment ...

! The 'none' driver is designed for experts who need to integrate with an existing VM

! kubectl and minikube configuration will be stored in /root
! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
*

  • sudo mv /root/.kube /root/.minikube $HOME
  • sudo chown -R $USER $HOME/.kube $HOME/.minikube
  • This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
  • Verifying Kubernetes components...
    • Using image gcr.io/k8s-minikube/storage-provisioner:v5
  • Enabled addons: default-storageclass, storage-provisioner
  • Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

@afbjorklund
Collaborator

afbjorklund commented Nov 25, 2022

Ignore all the output about drivers and users and configuration; it is also broken (or at least slightly misleading).

Well, except for the part about not running as root - and note that you can run Kubernetes rootless now.

@afbjorklund
Collaborator

All of this is supposed to work out-of-the-box, but it is not being tested (and especially not on IBM/PPC).

Unfortunately, it is bound to break even worse soon, since Kubernetes 1.26 does not support Docker - yet.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-ci-robot added the lifecycle/stale label on Feb 23, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Mar 25, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot closed this as not planned on Apr 24, 2023
@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
