
none driver: remove internal calls to "sudo" #10485

Open
pkjasawat opened this issue Feb 15, 2021 · 6 comments
Labels

  • co/none-driver
  • help wanted: Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
  • kind/bug: Categorizes issue or PR as related to a bug.
  • lifecycle/frozen: Indicates that an issue or PR should not be auto-closed due to staleness.
  • os/linux
  • priority/backlog: Higher priority than priority/awaiting-more-evidence.

Comments

@pkjasawat

Steps to reproduce the issue:

  1. Installed Docker EE v19.03.14; the Docker instance is up and running.
  2. Installed minikube v1.2.0 as root.
  3. Tried to start the minikube cluster: minikube start --vm-driver=none

Full output of failed command:

Full output of minikube start command used, if not already included:
[root@myhostname ~]# minikube start --vm-driver=none

  • minikube v1.17.1 on Redhat 7.9
  • Using the none driver based on existing profile
  • Starting control plane node minikube in cluster minikube
  • Restarting existing none bare metal machine for "minikube" ...
  • OS release is Red Hat Enterprise Linux Server 7.9 (Maipo)
  • Found network options:
    • NO_PROXY=my no_proxy url
    • http_proxy=my http_proxy url
    • https_proxy=my https_proxy url
      ! This bare metal machine is having trouble accessing https://k8s.gcr.io
  • To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
  • Preparing Kubernetes v1.20.2 on Docker 19.03.14 ...
    • env NO_PROXY=my no_proxy url
    • env HTTP_PROXY=my http_proxy url
    • env HTTPS_PROXY=my https_proxy url

X Exiting due to GUEST_CERT: certificate symlinks: create symlink for /usr/share/ca-certificates/minikubeCA.pem: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem": exit status 126
stdout:

stderr:
/bin/bash: /bin/ln: Permission denied

Optional: Full output of minikube logs command:
[root@myhostname /]# minikube logs

  • The control plane node must be running for this command
    • To start a cluster, run: "minikube start"

Could you please let me know how to fix this issue? I have just started learning Kubernetes and am not an expert in this area.

@priyawadhwa

Hey @pkjasawat thanks for opening this issue. Looks like you are using the none driver with insufficient permissions:

/bin/bash: /bin/ln: Permission denied

You may need to try running sudo minikube start --driver none for this to work. Please let us know if that helps. Thank you!

@priyawadhwa priyawadhwa added kind/support Categorizes issue or PR as a support question. triage/needs-information Indicates an issue needs more information in order to work on it. labels Feb 24, 2021
@pkjasawat
Author

Hi @priyawadhwa ,

Thanks for your response.

I tried that, but unfortunately it did not work. In my opinion, I am getting "Permission denied" because minikube is trying to run sudo as the root user, which is not allowed in my environment.

Steps I also tried as root:

  1. Copied the failed command from the error logs and ran it manually; I got the same error (sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem").
  2. Removed sudo from the command and ran it manually; it worked, and the symlink was created successfully.
  3. Tried to start the minikube cluster again as root ([root@myhostname ~]# minikube start --vm-driver=none) and got the same error, because minikube does not detect the existing symlink and tries to create it again.
  4. minikube start --vm-driver=none works fine, without any issue, as a non-root user with sudo access.

Conclusion:

  1. When starting a minikube cluster as root, minikube should not internally use sudo to execute any commands, because in many environments root is not allowed to run sudo.

Please let me know if my understanding is correct and whether this can be fixed.

@priyawadhwa

Hey @pkjasawat glad you were able to get that working! And your idea seems reasonable -- if you, or anyone else, would be interested in removing the calls to sudo within the code for the none driver we'd be happy to accept a PR.

@priyawadhwa priyawadhwa changed the title minikube start error-Exiting due to GUEST_CERT: certificate symlinks: create symlink failed none driver: remove internal calls to "sudo" Feb 25, 2021
@priyawadhwa priyawadhwa added kind/bug Categorizes issue or PR as related to a bug. good first issue Denotes an issue ready for a new contributor, according to the "help wanted" guidelines. help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. and removed triage/needs-information Indicates an issue needs more information in order to work on it. good first issue Denotes an issue ready for a new contributor, according to the "help wanted" guidelines. labels Feb 25, 2021
@afbjorklund
Copy link
Collaborator

afbjorklund commented Feb 26, 2021

Normally we don't recommend running minikube as root, but then again Katacoda* is doing it already. It is indeed unnecessary to use sudo when running as root. We also don't want users running "sudo minikube", since it makes the configuration paths strange.

* https://kubernetes.io/docs/tutorials/hello-minikube/

However, on most systems root is allowed to run all commands with sudo:

# User privilege specification
root	ALL=(ALL:ALL) ALL

So this looks specific to this RHEL setup. Either way, it would be nice to abstract the sudo away.

@spowelljr spowelljr removed the kind/support Categorizes issue or PR as a support question. label Apr 7, 2021
@spowelljr spowelljr added the priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. label Apr 19, 2021
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 18, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Aug 17, 2021
@spowelljr spowelljr added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. labels Sep 1, 2021
@spowelljr spowelljr added priority/backlog Higher priority than priority/awaiting-more-evidence. and removed priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. labels Dec 29, 2021
7 participants