
k8s v1.24.0-alpha.1: The image 'k8s.gcr.io/coredns/coredns:1.8.4' was not found; unable to add it to cache. #13136

Closed
tstromberg opened this issue Dec 9, 2021 · 8 comments · Fixed by #14006
Labels
kind/bug Categorizes issue or PR as related to a bug. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@tstromberg
Contributor

What Happened?

Using minikube from HEAD: 90300a4

./out/minikube start --kubernetes-version=v1.24.0-alpha.1:

πŸ˜„  minikube v1.24.0 on Darwin 11.5.2
✨  Automatically selected the hyperkit driver
...
πŸ‘  Starting control plane node minikube in cluster minikube
πŸ”₯  Creating hyperkit VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
❗  The image 'k8s.gcr.io/coredns/coredns:1.8.4' was not found; unable to add it to cache.
🐳  Preparing Kubernetes v1.24.0-alpha.1 on Docker 20.10.8 ...
❌  Unable to load cached images: loading cached images: stat /Users/tstromberg/.minikube/cache/images/k8s.gcr.io/coredns/coredns_1.8.4: no such file or directory

The start does progress successfully despite these errors.

Attach the log file

Filtered for coredns:

I1209 15:26:17.455637   91527 image.go:134] retrieving image: k8s.gcr.io/coredns/coredns:1.8.4
I1209 15:26:17.496764   91527 image.go:180] daemon lookup for k8s.gcr.io/coredns/coredns:1.8.4: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
W1209 15:26:17.499631   91527 image.go:190] authn lookup for k8s.gcr.io/coredns/coredns:1.8.4 (trying anon): error getting credentials - err: exec: "docker-credential-desktop": executable file not found in $PATH, out: ``
I1209 15:26:17.909258   91527 image.go:194] remote lookup for k8s.gcr.io/coredns/coredns:1.8.4: GET https://k8s.gcr.io/v2/coredns/coredns/manifests/1.8.4: MANIFEST_UNKNOWN: Failed to fetch "1.8.4" from request "/v2/coredns/coredns/manifests/1.8.4".
I1209 15:26:17.909368   91527 cache.go:96] cache image "k8s.gcr.io/coredns/coredns:1.8.4" -> "/Users/tstromberg/.minikube/cache/images/k8s.gcr.io/coredns/coredns_1.8.4" took 454.875267ms
W1209 15:26:17.909472   91527 out.go:241] ❗  The image 'k8s.gcr.io/coredns/coredns:1.8.4' was not found; unable to add it to cache.
I1209 15:26:34.984641   91527 cache_images.go:83] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.24.0-alpha.1 k8s.gcr.io/kube-controller-manager:v1.24.0-alpha.1 k8s.gcr.io/kube-scheduler:v1.24.0-alpha.1 k8s.gcr.io/kube-proxy:v1.24.0-alpha.1 k8s.gcr.io/pause:3.6 k8s.gcr.io/etcd:3.5.0-0 k8s.gcr.io/coredns/coredns:1.8.4 gcr.io/k8s-minikube/storage-provisioner:v5 docker.io/kubernetesui/dashboard:v2.3.1 docker.io/kubernetesui/metrics-scraper:v1.0.7]
I1209 15:26:34.985874   91527 image.go:76] couldn't find image digest k8s.gcr.io/coredns/coredns:1.8.4 from local daemon: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I1209 15:26:34.985921   91527 image.go:134] retrieving image: k8s.gcr.io/coredns/coredns:1.8.4
I1209 15:26:34.986257   91527 image.go:180] daemon lookup for k8s.gcr.io/coredns/coredns:1.8.4: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
W1209 15:26:34.986630   91527 image.go:190] authn lookup for k8s.gcr.io/coredns/coredns:1.8.4 (trying anon): error getting credentials - err: exec: "docker-credential-desktop": executable file not found in $PATH, out: ``
I1209 15:26:35.885809   91527 image.go:194] remote lookup for k8s.gcr.io/coredns/coredns:1.8.4: GET https://k8s.gcr.io/v2/coredns/coredns/manifests/1.8.4: MANIFEST_UNKNOWN: Failed to fetch "1.8.4" from request "/v2/coredns/coredns/manifests/1.8.4".
I1209 15:26:35.885832   91527 image.go:93] error retrieve Image k8s.gcr.io/coredns/coredns:1.8.4 ref Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I1209 15:26:35.885850   91527 cache_images.go:111] "k8s.gcr.io/coredns/coredns:1.8.4" needs transfer: got empty img digest "" for k8s.gcr.io/coredns/coredns:1.8.4
I1209 15:26:35.885877   91527 docker.go:239] Removing image: k8s.gcr.io/coredns/coredns:1.8.4
I1209 15:26:35.886017   91527 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/coredns/coredns:1.8.4
I1209 15:26:35.969676   91527 cache_images.go:281] Loading image from: /Users/tstromberg/.minikube/cache/images/k8s.gcr.io/coredns/coredns_1.8.4
W1209 15:27:05.604679   91527 out.go:241] ❌  Unable to load cached images: loading cached images: stat /Users/tstromberg/.minikube/cache/images/k8s.gcr.io/coredns/coredns_1.8.4: no such file or directory

Operating System

macOS (Default)

Driver

HyperKit

@afbjorklund
Collaborator

afbjorklund commented Dec 10, 2021

Should have been bumped to k8s.gcr.io/coredns/coredns:v1.8.6 (note the leading v in the tag)

So this looks broken; upstream already moved and renamed the image in k8s 1.21?

589eea9

Looks like it was broken by PR #12084, which doesn't have any tests (for k8s 1.24).

There is no error handling, so it just returns a non-existent image tag for unknown k8s versions...
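
A minimal sketch of the kind of guard being suggested here (the helper names and map contents are illustrative, not the actual minikube code): derive the minor-version key from the requested version and return an error when the constants table has no entry, rather than silently building a tag that does not exist in the registry.

package main

import (
	"fmt"
	"strings"
)

// kubeadmImages mirrors the shape of the real constants table; contents here are illustrative.
var kubeadmImages = map[string]map[string]string{
	"v1.23": {"coredns/coredns": "v1.8.6"},
	"v1.22": {"coredns/coredns": "v1.8.4"},
}

// minorKey turns a full version such as "v1.24.0-alpha.1" into "v1.24".
func minorKey(version string) string {
	parts := strings.SplitN(strings.TrimPrefix(version, "v"), ".", 3)
	if len(parts) < 2 {
		return version
	}
	return "v" + parts[0] + "." + parts[1]
}

// corednsImage fails loudly for versions the table does not know about.
func corednsImage(k8sVersion string) (string, error) {
	images, ok := kubeadmImages[minorKey(k8sVersion)]
	if !ok {
		return "", fmt.Errorf("no kubeadm image constants for %s", k8sVersion)
	}
	return "k8s.gcr.io/coredns/coredns:" + images["coredns/coredns"], nil
}

func main() {
	if img, err := corednsImage("v1.24.0-alpha.1"); err != nil {
		// v1.24 is missing from the table, so this errors instead of guessing "1.8.4"
		fmt.Println("warning:", err)
	} else {
		fmt.Println(img)
	}
}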

@afbjorklund afbjorklund added the kind/bug Categorizes issue or PR as related to a bug. label Dec 10, 2021
@medyagh
Member

medyagh commented Dec 14, 2021

The automation has not added the constants for alpha versions of Kubernetes; therefore the preload didn't have those images, so I think that is expected.

Our constants did not have alpha versions: https://github.com/kubernetes/minikube/blob/master/pkg/minikube/constants/constants_kubeadm_images.go#L21

However, I have seen image-cache race conditions happen on slow machines with other versions of k8s.

@medyagh
Member

medyagh commented Dec 14, 2021

@afbjorklund currently we have kubeadm images per k8s minor version, do you think we would need to make a map per patch version too? (A rough fallback sketch follows the snippet below.)

https://github.com/kubernetes/minikube/blob/master/pkg/minikube/constants/constants_kubeadm_images.go

	KubeadmImages = map[string]map[string]string{
		"v1.23": {
			"coredns/coredns":         "v1.8.6",
			"etcd":                    "3.5.1-0",
			"kube-apiserver":          "v1.22.4",
			"kube-controller-manager": "v1.22.4",
			"kube-proxy":              "v1.22.4",
			"kube-scheduler":          "v1.22.4",
			"pause":                   "3.6",
		},
		"v1.22": {
			"coredns/coredns":         "v1.8.4",
			"etcd":                    "3.5.0-0",
			"kube-apiserver":          "v1.22.4",
			"kube-controller-manager": "v1.22.4",
			"kube-proxy":              "v1.22.4",
			"kube-scheduler":          "v1.22.4",
			"pause":                   "3.5",
		},
...
...
...
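
One hedged way to answer that without a full per-patch table (names and map contents below are illustrative, not the real constants file): keep the per-minor map as the default and add an optional exceptions map keyed by patch version, falling back from patch to minor.

package main

import (
	"fmt"
	"strings"
)

// perMinor is the default table, keyed by "v<major>.<minor>" (contents illustrative).
var perMinor = map[string]map[string]string{
	"v1.23": {"coredns/coredns": "v1.8.6", "etcd": "3.5.1-0", "pause": "3.6"},
}

// perPatch holds exceptions only, so most patch releases need no entry at all.
var perPatch = map[string]map[string]string{}

// componentVersion tries the exact patch version first, then falls back to the minor version.
func componentVersion(k8sVersion, component string) (string, bool) {
	if imgs, ok := perPatch[k8sVersion]; ok {
		if v, ok := imgs[component]; ok {
			return v, true
		}
	}
	minor := k8sVersion
	if parts := strings.SplitN(strings.TrimPrefix(k8sVersion, "v"), ".", 3); len(parts) >= 2 {
		minor = "v" + parts[0] + "." + parts[1]
	}
	if imgs, ok := perMinor[minor]; ok {
		v, ok := imgs[component]
		return v, ok
	}
	return "", false
}

func main() {
	v, ok := componentVersion("v1.23.2", "coredns/coredns")
	fmt.Println(v, ok) // "v1.8.6 true": falls back to the v1.23 entry
}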

@afbjorklund
Collaborator

Previously there were two (or more) separate maps, e.g. one for coredns, one for etcd, one for pause.

The version for the main Kubernetes components was just copied, I think; there was no mapping needed?
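
Roughly, the earlier layout described here would have looked something like this (illustrative only, not the historical code): one small map per auxiliary image, with the main control-plane components simply reusing the requested Kubernetes version as their tag.

package main

import "fmt"

// Separate per-image maps, each keyed by minor version (contents illustrative).
var (
	corednsVersions = map[string]string{"v1.22": "v1.8.4", "v1.23": "v1.8.6"}
	etcdVersions    = map[string]string{"v1.22": "3.5.0-0", "v1.23": "3.5.1-0"}
	pauseVersions   = map[string]string{"v1.22": "3.5", "v1.23": "3.6"}
)

func main() {
	k8s := "v1.23.4" // kube-apiserver/controller-manager/scheduler/proxy just use this tag directly
	fmt.Println("k8s.gcr.io/kube-apiserver:" + k8s)
	fmt.Println("k8s.gcr.io/coredns/coredns:" + corednsVersions["v1.23"])
	fmt.Println("k8s.gcr.io/etcd:" + etcdVersions["v1.23"])
	fmt.Println("k8s.gcr.io/pause:" + pauseVersions["v1.23"])
}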

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 15, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Apr 14, 2022
@ckannon
Contributor

ckannon commented Apr 21, 2022

/assign

@klaases
Contributor

klaases commented Apr 27, 2022

Hi @tstromberg, I see that @ckannon is working on this in PR #14006, so it should be resolved soon.
