
crio: Unable to pull images from internal registry #10171

Open
kameshsampath opened this issue Jan 19, 2021 · 15 comments
Labels
area/registry registry related issues co/runtime/crio CRIO related issues kind/bug Categorizes issue or PR as related to a bug. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete.

Comments

@kameshsampath

kameshsampath commented Jan 19, 2021

minikube version: v1.16.0
commit: 9f1e482427589ff8451c4723b6ba53bb9742fbb1

Steps to reproduce the issue:

  1. Start Minikube as
minikube start -p $PROFILE_NAME \
  --memory=$MEMORY --cpus=$CPUS \
  --disk-size=50g \
  --insecure-registry='10.0.0.0/24'
  2. Apply the registry and registry-aliases add-ons
  3. Build and push an image to the Minikube registry as example.com/demo/greeter (see the sketch after this list)
  4. Deploy a pod using the command
kubectl run demo-greeter -n tektontutorial \
 --generator='run-pod/v1' \
 --image='example.com/demo/greeter' && \
kubectl expose pod demo-greeter -n tektontutorial --port 8080 --type=NodePort
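
For step 3, the push presumably looks something like this (a sketch only: the tag comes from the report, but the exact build tooling and how the alias resolves on the push side are assumptions):

# Build the image and push it to the cluster registry via the alias
# configured by the registry-aliases add-on:
docker build -t example.com/demo/greeter .
docker push example.com/demo/greeter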

kubectl get events shows the image pull failing with the following event:

Full output of failed command:

0s          Warning   Failed      pod/demo-greeter                        Failed to pull image "registry.minikube/rhdevelopers/tekton-tutorial-greeter": rpc error: code = Unknown desc = error pinging docker registry registry.minikube: Get https://registry.minikube/v2/: dial tcp 10.101.166.84:443: connect: connection refused

Even with --insecure-registry specified, the runtime still tries to pull over HTTPS rather than HTTP.
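
One way to confirm the registry itself serves plain HTTP is to hit its v2 endpoint from inside the VM (a diagnostic sketch; it assumes curl is available in the minikube VM and that registry.minikube resolves there, as the /etc/hosts output in the next comment shows):

# Expect 200 over plain HTTP (the service maps port 80 to the registry
# container), while https://registry.minikube/v2/ is refused on 443:
minikube ssh -- curl -s -o /dev/null -w '%{http_code}' http://registry.minikube/v2/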

CC: @afbjorklund

@kameshsampath
Author

Some more contextual info:

minikube ssh -- cat /etc/hosts

127.0.0.1       localhost
127.0.1.1 tektontutorial
192.168.64.1    host.minikube.internal
192.168.64.20   control-plane.minikube.internal
10.101.166.84   example.org
10.101.166.84   example.com
10.101.166.84   test.com
10.101.166.84   test.org
10.101.166.84   registry.minikube

kubectl get svc -n kube-system

NAME       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP,9153/TCP   79m
registry   ClusterIP   10.101.166.84   <none>        80/TCP,443/TCP           78m

kubectl get cm coredns -n kube-system -o yaml | yq eval '.data' -

Corefile: |-
  .:53 {
      errors
      rewrite name example.org registry.kube-system.svc.cluster.local
      rewrite name example.com registry.kube-system.svc.cluster.local
      rewrite name test.com registry.kube-system.svc.cluster.local
      rewrite name test.org registry.kube-system.svc.cluster.local
      rewrite name registry.minikube registry.kube-system.svc.cluster.local

      health {
         lameduck 5s
      }
      ready
      kubernetes cluster.local in-addr.arpa ip6.arpa {
         pods insecure
         fallthrough in-addr.arpa ip6.arpa
         ttl 30
      }
      prometheus :9153
      forward . /etc/resolv.conf {
         max_concurrent 1000
      }
      cache 30
      loop
      reload
      loadbalance
  }
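
To confirm the rewrites actually resolve from inside the cluster, a throwaway pod can look the alias up (a sketch; assumes the busybox image and its bundled nslookup):

# Should print the registry Service's ClusterIP, 10.101.166.84 above:
kubectl run dnscheck --rm -it --restart=Never --image=busybox -- \
  nslookup registry.minikube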

@afbjorklund
Collaborator

Worked here.

  Type     Reason     Age              From               Message
  ----     ------     ----             ----               -------
  Normal   Scheduled  4s               default-scheduler  Successfully assigned default/busybox to minikube
  Normal   Pulled     4s               kubelet            Successfully pulled image "registry.minikube/busybox" in 16.245797ms
  Normal   Pulling    3s (x2 over 4s)  kubelet            Pulling image "registry.minikube/busybox"
  Normal   Created    3s (x2 over 4s)  kubelet            Created container busybox
  Normal   Started    3s (x2 over 3s)  kubelet            Started container busybox
  Normal   Pulled     3s               kubelet            Successfully pulled image "registry.minikube/busybox" in 18.747139ms
  Warning  BackOff    1s (x2 over 2s)  kubelet            Back-off restarting failed container

You might want to double-check minikube ssh docker info:

 Insecure Registries:
  10.96.0.0/12
  10.0.0.0/24
  127.0.0.0/8
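
Note that docker info only reflects the Docker daemon; if the cluster runs cri-o, the equivalent settings live in cri-o's own config (a sketch for checking, with the standard cri-o/containers paths assumed):

# Registry settings for cri-o come from its own config files, not dockerd:
minikube ssh -- sudo cat /etc/crio/crio.conf
minikube ssh -- sudo cat /etc/containers/registries.conf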

@afbjorklund afbjorklund added kind/support Categorizes issue or PR as a support question. area/registry registry related issues labels Jan 19, 2021
@kameshsampath
Author

That sounds weird! Was it on macOS?

Let me try again on a fresh instance. Do you see any issue with the minikube start command?

@kameshsampath
Author

After a bit of debugging, @afbjorklund: this happens only with --container-runtime=cri-o; if I use --container-runtime=docker then everything works well. I wonder if it has something to do with how cri-o handles this internally?
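
If cri-o is not picking up the flag, one possible workaround is to mark the registry as plain-HTTP in the containers registry config inside the VM (a sketch under assumptions: the registries.conf v2 TOML format, the file path, stdin forwarding through minikube ssh, and the crio service name are not confirmed minikube behavior):

# Append an insecure-registry entry for cri-o and restart the runtime:
minikube ssh -- sudo tee -a /etc/containers/registries.conf <<'EOF'
[[registry]]
location = "registry.minikube"
insecure = true
EOF
minikube ssh -- sudo systemctl restart crio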

@medyagh
Member

medyagh commented Mar 3, 2021

@kameshsampath Interesting. Do we still have this issue on the latest minikube? I wonder if it happens if we specify a different CNI?

@medyagh medyagh changed the title Unable to pull images from internal registry crio: Unable to pull images from internal registry Mar 3, 2021
@medyagh medyagh added kind/bug Categorizes issue or PR as related to a bug. priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. co/runtime/crio CRIO related issues labels Mar 3, 2021
@kameshsampath
Author

kameshsampath commented Mar 8, 2021 via email

@sharifelgamal sharifelgamal removed the kind/support Categorizes issue or PR as a support question. label Apr 7, 2021
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 6, 2021
@k8s-triage-robot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Aug 5, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue.

In response to the triage message above:

/close


@ricardozanini

ricardozanini commented Apr 2, 2024

Just for your information, I ran into this issue today with the latest minikube version. Same setup: macOS, podman, and cri-o. I also started the cluster with the flag --insecure-registry=10.0.0.0/24. The kubelet still insists on using HTTPS instead of HTTP.

The problem is that the registry service is created exposing the port mappings 80:5000 and 443:443, but the registry pod only exposes 5000. So kubelet, or any other client, won't be able to pull over 443.
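
A quick way to see that mismatch (a sketch; the commented output lines are inferred from the description above, not captured from a live cluster):

# Print each service port and the target port it forwards to on the pod:
kubectl get svc registry -n kube-system \
  -o jsonpath='{range .spec.ports[*]}{.port} -> {.targetPort}{"\n"}{end}'
# 80 -> 5000
# 443 -> 443   <-- nothing in the pod listens on 443, so HTTPS pulls fail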

I think this is a bug @kameshsampath @medyagh

Cheers!

@ricardozanini

/reopen

@k8s-ci-robot
Contributor

@ricardozanini: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen


@medyagh medyagh reopened this Apr 2, 2024
@medyagh
Member

medyagh commented Apr 2, 2024

…the latest minikube version. Same setup: macOS, podman, and cri-o. I also started the cluster with the flag --insecure-registry=10.0.0.0/24. The kubelet still insists on using HTTPS instead of HTTP.

The problem is that the registry service is created exposing the port mappings 80:5000 and 443:443, but the registry pod only exposes 5000. So kubelet, or any other client, won't be able to pull over 443.

@ricardozanini would you be interested in making a PR to fix this?

If the issue is that the container needs to open a non-443 port, you can do that in the kic package.

@ricardozanini

@medyagh you bet I would! :)

I'll take a look later today and send a PR.
