
X Exiting due to RUNTIME_ENABLE: which crictl: exit status 1 stdout: #15914

Closed
chintan9999 opened this issue Feb 23, 2023 · 14 comments
Labels
co/none-driver kind/support Categorizes issue or PR as a support question. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@chintan9999

What Happened?

Hello,

I have followed this guide completely: https://github.com/Mirantis/cri-dockerd#build-and-install. Then I tried to run the following command:

minikube start --vm-driver=none

It gives me this error:

  • minikube v1.29.0 on Ubuntu 18.04 (xen/amd64)
  • Using the none driver based on existing profile
  • Starting control plane node minikube in cluster minikube
  • Restarting existing none bare metal machine for "minikube" ...
  • OS release is Ubuntu 18.04.6 LTS

X Exiting due to RUNTIME_ENABLE: which crictl: exit status 1
stdout:

stderr:

* If the above advice does not help, please let us know:
  https://github.com/kubernetes/minikube/issues/new/choose

* Please run minikube logs --file=logs.txt and attach logs.txt to the GitHub issue.

Attach the log file

I0223 02:43:47.958349 11702 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0223 02:43:47.958476 11702 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0223 02:43:47.971991 11702 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
W0223 02:43:47.979132 11702 start.go:450] cannot ensure containerd is configured properly and reloaded for docker - cluster might be unstable: update sandbox_image: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml": exit status 2
stdout:

stderr:
sed: can't read /etc/containerd/config.toml: No such file or directory
I0223 02:43:47.979146 11702 start.go:483] detecting cgroup driver to use...
I0223 02:43:47.979166 11702 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0223 02:43:47.979262 11702 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0223 02:43:47.993161 11702 exec_runner.go:51] Run: sudo systemctl unmask docker.service
I0223 02:43:48.145862 11702 exec_runner.go:51] Run: sudo systemctl enable docker.socket
I0223 02:43:48.297816 11702 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
I0223 02:43:48.297840 11702 exec_runner.go:144] found /etc/docker/daemon.json, removing ...
I0223 02:43:48.297846 11702 exec_runner.go:207] rm: /etc/docker/daemon.json
I0223 02:43:48.297899 11702 exec_runner.go:151] cp: memory --> /etc/docker/daemon.json (144 bytes)
I0223 02:43:48.298007 11702 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2740992494 /etc/docker/daemon.json
I0223 02:43:48.304925 11702 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0223 02:43:48.447825 11702 exec_runner.go:51] Run: sudo systemctl restart docker
I0223 02:43:48.714617 11702 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
I0223 02:43:48.870541 11702 exec_runner.go:51] Run: sudo systemctl unmask cri-docker.socket
I0223 02:43:49.034962 11702 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
I0223 02:43:49.161099 11702 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0223 02:43:49.308977 11702 exec_runner.go:51] Run: sudo systemctl restart cri-docker.socket
I0223 02:43:49.323526 11702 start.go:530] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0223 02:43:49.323588 11702 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
I0223 02:43:49.326029 11702 start.go:551] Will wait 60s for crictl version
I0223 02:43:49.326075 11702 exec_runner.go:51] Run: which crictl
I0223 02:43:49.332026 11702 out.go:177]
W0223 02:43:49.334315 11702 out.go:239] X Exiting due to RUNTIME_ENABLE: which crictl: exit status 1
stdout:

stderr:

W0223 02:43:49.334341 11702 out.go:239] *
W0223 02:43:49.335425 11702 out.go:239] * If the above advice does not help, please let us know:
    https://github.com/kubernetes/minikube/issues/new/choose
  * Please run minikube logs --file=logs.txt and attach logs.txt to the GitHub issue.
I0223 02:43:49.338507 11702 out.go:177]

Operating System

Ubuntu

Driver

None

@afbjorklund
Collaborator

afbjorklund commented Feb 25, 2023

If you run crictl version, what does it say? Did you remember to install the cri-tools and the cni-plugins too?

$ sudo crictl version
Version:  0.1.0
RuntimeName:  docker
RuntimeVersion:  20.10.12
RuntimeApiVersion:  v1
$ cri-dockerd --version
cri-dockerd 0.3.1 (9a87d6ae)

Currently the "none" driver in minikube requires the user to set up and configure their container runtime first.

@afbjorklund afbjorklund added co/none-driver kind/support Categorizes issue or PR as a support question. labels Feb 25, 2023
@chintan9999
Author

Thanks for the update. Though I have installed Docker as the container runtime, do I need to explicitly install any other cri-tools?

Is Docker no longer supported as a CRI by Kubernetes?

@afbjorklund
Collaborator

afbjorklund commented Feb 26, 2023

Before Kubernetes 1.24, support for Docker was included with the installation (called "dockershim").

After that, you need to make sure to install all the needed components:

docker, cri-dockerd, cri-tools, cni-plugins

Eventually minikube should be able to "provision" these as well, but right now it does not.
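For reference, a minimal sketch of installing the last two on an amd64 Linux host. The version numbers below are examples only, not pinned requirements; pick releases matching your Kubernetes version from each project's releases page:

$ CRICTL_VERSION="v1.26.0"   # example version
$ curl -LO "https://github.com/kubernetes-sigs/cri-tools/releases/download/${CRICTL_VERSION}/crictl-${CRICTL_VERSION}-linux-amd64.tar.gz"
$ sudo tar zxvf "crictl-${CRICTL_VERSION}-linux-amd64.tar.gz" -C /usr/local/bin

$ CNI_VERSION="v1.2.0"       # example version
$ sudo mkdir -p /opt/cni/bin
$ curl -L "https://github.com/containernetworking/plugins/releases/download/${CNI_VERSION}/cni-plugins-linux-amd64-${CNI_VERSION}.tgz" | sudo tar -C /opt/cni/bin -xz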

@avigupta63

cri-tools and cni-plugins: how can I install these?

@chintan9999
Author

Thanks for your response @afbjorklund.

I understand that Docker and k8s are now managed in different ways.

Though, I would suggest that it would be very convenient if minikube could provision all of these together, rather than requiring separate cri-tools and plugin installation.

For example, with previous k8s versions like 1.21 or before, it was easier to work with Docker as the CRI for k8s compared to the current version.

I hope something is being developed for this on the other end as well.

@chintan9999
Author

@avigupta63 here is the link I used for cri-tools:

https://github.com/Mirantis/cri-dockerd
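A quick way to verify the pieces afterwards (assuming crictl landed on the PATH and the CNI plugins were unpacked to /opt/cni/bin, as in the sketch above); each command should print a version or a file listing rather than "command not found":

$ which crictl
$ crictl --version
$ cri-dockerd --version
$ ls /opt/cni/bin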

@afbjorklund
Collaborator

afbjorklund commented Mar 18, 2023

Though, I would suggest that it would be very convenient if minikube could provision all of these together, rather than requiring separate cri-tools and plugin installation.

For example, with previous k8s versions like 1.21 or before, it was easier to work with Docker as the CRI for k8s compared to the current version.

This changed in 1.24, when upstream stopped supporting it (Docker).

But there is a backlog item to improve the installation experience.

Currently it is mostly affecting the "none" driver, for advanced users.

@afbjorklund
Collaborator

afbjorklund commented Mar 18, 2023

Also, to be fair, it is not harder to install a cluster using docker than, for instance, containerd.

You need to set up CRI and CNI either way; the main difference is that docker is now less of a "special case".
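For context, the crictl configuration that minikube writes for the docker runtime (visible in the log above) just points crictl at the cri-dockerd socket:

# /etc/crictl.yaml
runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock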

@sumitdahatonde

stderr:
find: ‘/etc/cni/net.d’: No such file or directory

X Exiting due to RUNTIME_ENABLE: which crictl: exit status 1
stdout:

stderr:

* If the above advice does not help, please let us know:
  https://github.com/kubernetes/minikube/issues/new/choose

* Please run minikube logs --file=logs.txt and attach logs.txt to the GitHub issue.
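The find error above suggests the CNI plugins and their config directory were never set up. A minimal first step, assuming the install sketch earlier in the thread, is to create the directories that minikube checks:

$ sudo mkdir -p /etc/cni/net.d /opt/cni/bin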

@avigupta63

My minikube started with an error, "unable to disable bridges"... When I check my nodes, they aren't ready.
Can you help me with this @afbjorklund?

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 23, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 19, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot k8s-ci-robot closed this as not planned (Won't fix, can't repro, duplicate, stale) Feb 18, 2024
@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
