
Confusion about registry authentication #1840

Closed
mcandre opened this issue Aug 18, 2017 · 5 comments
Labels
kind/documentation Categorizes issue or PR as related to documentation. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments


mcandre commented Aug 18, 2017

I've tried a few different ways of configuring minikube to authenticate to different Docker registries, but none of them seem to work.

  • minikube addons configure registry-creds -> Fill in credentials for a private registry. However, successive helm install deploys continue to produce ImagePullBackOff status in Kubernetes. It's unclear where this command persists credentials, which makes it difficult to debug. I tried minikube stop; minikube start to force the authentication to reload, but there was no change in behavior.
  • Also ran minikube addons enable registry-creds and minikube addons enable registry just in case, but no change in behavior.
  • minikube ssh -> docker login [<URL>]. This produces a /home/docker/.docker/config.json file in minikube, but helm install deploys continue to produce ImagePullBackOff status in Kubernetes. I tried minikube stop; minikube start, but the resulting minikube VM then deletes the /home/docker/.docker/config.json file.
  • minikube ssh -> mkdir -p /home/docker/.docker -> vi /home/docker/.docker/config.json, inserting Docker registry configuration manually. Same behavior as before.
  • No idea if extra steps are required after the above commands in order to fully complete registry authentication for Kubernetes. The docs are kind of scattered and incomplete.
  • Also tried creating a regsecret according to the Kubernetes docs, but this appears to require each pod to be specifically configured to point to the secret; there's no magic secret name that would get automatically applied.
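Since a couple of the attempts above involve editing /home/docker/.docker/config.json by hand, it may help to know what docker login actually writes there: the "auth" field is just base64 of "username:password". A minimal sketch, using a hypothetical registry and credentials:

```shell
# Hypothetical registry and credentials, for illustration only.
REGISTRY="registry.example.com"
USERNAME="alice"
PASSWORD="s3cr3t"

# The "auth" value that docker login stores is base64("username:password").
AUTH=$(printf '%s:%s' "$USERNAME" "$PASSWORD" | base64)

# Equivalent of what `docker login` writes to ~/.docker/config.json:
cat <<EOF
{
  "auths": {
    "${REGISTRY}": {
      "auth": "${AUTH}"
    }
  }
}
EOF
```

In principle a hand-written config.json built this way is equivalent to one produced by docker login, though as noted above it does not survive a minikube stop; minikube start cycle.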

There are so many different ways to configure registry authentication for minikube, I have no idea which one to use, and no idea how to force Kubernetes to synchronize against the updated credentials. Some tutorials even suggest manipulating Kubernetes secrets as yet another way to configure things. Can someone produce a complete set of steps for getting minikube to authenticate to a private registry? Because there's a lot of deprecated tutorials, alternate configuration methods, and brokenness here.

I'm using minikube v0.21.0 on macOS 10.12.

By the way, is there some minikube configuration for reusing the host's registry credentials? That would dramatically simplify things.

@r2d4 r2d4 added the kind/documentation Categorizes issue or PR as related to documentation. label Aug 24, 2017

shevagit commented Aug 29, 2017

Having the exact same problem (same versions too), although I am able to successfully log in to my private Docker registry (docker login private.registry:443) and pull some images on the host (Mac) after adding the certificate file to the Mac keychain.
Manually inserting the config.json into /home/docker/.docker did not work for me, nor did inserting the certificate file into /var/lib/boot2docker on the minikube VM.

@shevagit

Hello,
After messing around with minikube at length, and trying all possible combinations, I came up with the following workaround:

  1. create a directory named after the FULL name of your private registry under /etc/docker/certs.d in minikube and place your cert inside it as "ca.crt" (note: this dir gets wiped when you stop minikube)
  2. create a secret in your Kubernetes cluster containing the username/password of your private registry
  3. add "imagePullSecrets:" with "- name: <your-secret-name>" to your deployment file, otherwise you get an "image not found" error
    *Important: you have to use the minikube Docker daemon in order to use the private registry: $ eval $(minikube docker-env)

Lastly, though not tested with this configuration, add the secret you created to your service account so that you don't need to add it to every pod file.
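Steps 2 and 3 above can be sketched as follows. This is an illustration only: the secret name "regcred", the registry address, and the image path are hypothetical placeholders, not values from this issue.

```yaml
# Step 2 (sketch): create the secret; "regcred" and the server/credentials
# are placeholders:
#   kubectl create secret docker-registry regcred \
#     --docker-server=private.registry:443 \
#     --docker-username=<user> \
#     --docker-password=<pass>
#
# Step 3 (sketch): reference the secret from the pod template in your
# deployment file:
spec:
  containers:
    - name: myapp
      image: private.registry:443/myapp:latest
  imagePullSecrets:
    - name: regcred
```

For the service-account variant mentioned above, the usual approach is to patch the default service account, e.g. kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "regcred"}]}', so that pods in that namespace pick up the secret without listing it per pod.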

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 4, 2018
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Feb 8, 2018
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
