CRI: try to use "sudo podman load" instead of "docker load" #2757
Conversation
Turns out this is because they have different defaults:
So I guess there is another mandatory config missing ? |
The missing config was called "storage.conf", to change the GraphRoot (and driver):
$ cat /etc/containers/storage.conf
[storage]
driver = "overlay2"
runroot = "/var/run/containers/storage"
graphroot = "/mnt/sda1/var/lib/containers"
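To double-check what a storage.conf ends up selecting, the [storage] table can be inspected programmatically. A minimal illustrative parser (hand-rolled for the sketch; real tooling should use a proper TOML library):

```python
# Minimal sketch: pull key/value pairs out of the [storage] section
# of a containers-storage configuration file.
def parse_storage_conf(text):
    section = None
    values = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        if line.startswith("[") and line.endswith("]"):
            section = line[1:-1]
        elif section == "storage" and "=" in line:
            key, _, val = line.partition("=")
            values[key.strip()] = val.strip().strip('"')
    return values

conf = parse_storage_conf('''
[storage]
driver = "overlay2"
runroot = "/var/run/containers/storage"
graphroot = "/mnt/sda1/var/lib/containers"
''')
print(conf["driver"], conf["graphroot"])
```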
$ sudo podman images
REPOSITORY TAG IMAGE ID CREATED SIZE
docker.io/kubernetes/pause latest f9d5de079539 3 years ago 251kB
k8s.gcr.io/kube-scheduler-amd64 v1.10.0 704ba848e69a 3 weeks ago 50.6MB
k8s.gcr.io/kube-apiserver-amd64 v1.10.0 af20925d51a3 3 weeks ago 225MB
k8s.gcr.io/etcd-amd64 3.1.12 52920ad46f5b 6 weeks ago 193MB
k8s.gcr.io/kube-controller-manager-amd64 v1.10.0 ad86dbed1555 3 weeks ago 148MB
k8s.gcr.io/kube-addon-manager v8.6 9c16409588eb 8 weeks ago 80.6MB
k8s.gcr.io/kube-proxy-amd64 v1.10.0 bfc21aadc7d3 3 weeks ago 98.9MB
k8s.gcr.io/k8s-dns-kube-dns-amd64 1.14.8 80cc5ea4b547 3 months ago 50.7MB
gcr.io/k8s-minikube/storage-provisioner v1.8.1 4689081edb10 5 months ago 80.8MB
k8s.gcr.io/kubernetes-dashboard-amd64 v1.8.1 e94d2f21bc0c 4 months ago 121MB
k8s.gcr.io/heapster-influxdb-amd64 v1.3.3 577260d221db 7 months ago 12.8MB
k8s.gcr.io/heapster-amd64 v1.5.0 86a0ddc3a8c2 4 months ago 75.3MB
k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64 1.14.8 c2ce1ffb51ed 3 months ago 41.2MB
k8s.gcr.io/heapster-grafana-amd64 v4.4.3 8cb3de219af7 7 months ago 155MB
k8s.gcr.io/k8s-dns-sidecar-amd64 1.14.8 6f7f2dc7fab5 3 months ago 42.5MB
$
Unless also setting the driver, you get an error like: |
These locations need to be bind mounted in the iso on startup
… crictl: /mnt/sda1/var/lib/containers
podman: /var/lib/containers/storage
--
Gerard Braad | http://gbraad.nl
[ Doing Open Source Matters ]
|
Probably similar to But it still differs on a "storage"... ? |
Also, the basics seem to be working. But it still cannot load the cached images from minikube ? The error reporting is kinda horrible, though: it will just fall back through docker/oci/dir, and give up silently.
$ sudo podman pull busybox:latest
Trying to pull docker.io/busybox:latest...Getting image source signatures
Copying blob sha256:f70adabe43c0cccffbae8785406d490e26855b8748fc982d14bc2b20c778b929
706.22 KB / 706.22 KB [====================================================] 0s
Copying config sha256:8ac48589692a53a9b8c2d1ceaa6b402665aa7fe667ba51ccc03002300856d8c7
1.46 KB / 1.46 KB [========================================================] 0s
Writing manifest to image destination
Storing signatures
8ac48589692a53a9b8c2d1ceaa6b402665aa7fe667ba51ccc03002300856d8c7
$ sudo podman save busybox:latest > busybox_latest
Getting image source signatures
Copying blob sha256:0314be9edf00a925d59f9b88c9d8ccb34447ab677078874d8c14e7a6816e21e1
1.30 MB / 1.30 MB [========================================================] 0s
Copying config sha256:8ac48589692a53a9b8c2d1ceaa6b402665aa7fe667ba51ccc03002300856d8c7
1.46 KB / 1.46 KB [========================================================] 0s
Writing manifest to image destination
Storing signatures
$ sudo podman rmi busybox:latest
8ac48589692a53a9b8c2d1ceaa6b402665aa7fe667ba51ccc03002300856d8c7
$ sudo podman load < busybox_latest
Getting image source signatures
Copying blob sha256:0314be9edf00a925d59f9b88c9d8ccb34447ab677078874d8c14e7a6816e21e1
1.30 MB / 1.30 MB [========================================================] 0s
Copying config sha256:8ac48589692a53a9b8c2d1ceaa6b402665aa7fe667ba51ccc03002300856d8c7
1.46 KB / 1.46 KB [========================================================] 0s
Writing manifest to image destination
Storing signatures
Loaded image: docker.io/library/busybox:latest |
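The silent fallback complained about above can be pictured with an illustrative sketch (hypothetical names, not podman's actual code): each transport is tried in turn and individual errors are swallowed, so the caller only learns that nothing matched.

```python
# Illustrative "try each transport, give up silently" pattern:
# errors from individual loaders are discarded along the way,
# which makes diagnosing a bad archive needlessly hard.
def load_image(path, loaders):
    errors = []
    for name, loader in loaders:
        try:
            return loader(path)
        except Exception as exc:  # swallowed in the "silent" style
            errors.append((name, str(exc)))
    # A friendlier implementation at least surfaces what was tried:
    raise RuntimeError("no transport could load %r: %s" % (path, errors))

def docker_archive(path):
    raise ValueError("not a docker archive")

def oci_archive(path):
    return "loaded-from-oci"

result = load_image("busybox_latest",
                    [("docker", docker_archive), ("oci", oci_archive)])
print(result)
```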
Seems like the main problem is/was the override in minikube, to use a non-standard directory: So it would add the
|
But the image loading still fails, so I think that's a bug in podman...
That was supposed to be "docker.io/busybox:latest", for starters ? |
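The two names differ by Docker-style short-name normalization: the default registry is prepended when none is given, and single-component names get the implicit "library" namespace, so "busybox" becomes "docker.io/library/busybox:latest". A rough sketch of that rule (illustrative Python, not podman's actual code; registry ports are ignored for simplicity):

```python
def normalize(name, default_registry="docker.io"):
    """Expand a short image name roughly the way Docker does."""
    if ":" in name:
        repo, tag = name.rsplit(":", 1)
    else:
        repo, tag = name, "latest"
    parts = repo.split("/")
    # A first component containing "." (or "localhost") is a registry host.
    if len(parts) == 1:
        repo = default_registry + "/library/" + repo
    elif "." not in parts[0] and parts[0] != "localhost":
        repo = default_registry + "/" + repo
    return repo + ":" + tag

print(normalize("busybox"))                  # docker.io/library/busybox:latest
print(normalize("kubernetes/pause:latest"))  # docker.io/kubernetes/pause:latest
```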
At least you can view the images used by CRI-O, even if you can't load them from cache yet:
$ sudo crictl -r /var/run/crio/crio.sock images
IMAGE TAG IMAGE ID SIZE
docker.io/kubernetes/pause latest f9d5de0795395 251kB
k8s.gcr.io/etcd-amd64 3.1.12 52920ad46f5bf 193MB
k8s.gcr.io/kube-addon-manager v8.6 9c16409588eb1 80.6MB
k8s.gcr.io/kube-apiserver-amd64 v1.10.0 af20925d51a37 225MB
k8s.gcr.io/kube-controller-manager-amd64 v1.10.0 ad86dbed15559 148MB
k8s.gcr.io/kube-scheduler-amd64 v1.10.0 704ba848e69a7 50.6MB
$ sudo podman -s overlay images
WARN[0000] unable to find /etc/containers/registries.conf. some podman (image shortnames) commands may be limited
REPOSITORY TAG IMAGE ID CREATED SIZE
docker.io/kubernetes/pause latest f9d5de079539 3 years ago 251kB
k8s.gcr.io/kube-apiserver-amd64 v1.10.0 af20925d51a3 3 weeks ago 225MB
k8s.gcr.io/kube-scheduler-amd64 v1.10.0 704ba848e69a 3 weeks ago 50.6MB
k8s.gcr.io/kube-addon-manager v8.6 9c16409588eb 8 weeks ago 80.6MB
k8s.gcr.io/kube-controller-manager-amd64 v1.10.0 ad86dbed1555 3 weeks ago 148MB
k8s.gcr.io/etcd-amd64 3.1.12 52920ad46f5b 6 weeks ago 193MB
$
They seem to disagree a bit on the sorting, and it would be nice to avoid the mandatory flags... Probably should add But those files should probably be managed by crio itself, rather than by crictl and podman ? |
It seems like But we should probably update |
Yeah this makes sense. Thanks for spending so much time on this one. |
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
/remove-lifecycle stale |
Tried with podman 0.7.4, but it still doesn't work.
UPDATE: Actually podman does seem to work now. It is minikube image caching that doesn't...
$ sudo vi /etc/containers/registries.conf
$ sudo podman pull busybox:latest
Trying to pull registry.access.redhat.com/busybox:latest...Failed
Trying to pull registry.fedoraproject.org/busybox:latest...Failed
Trying to pull docker.io/busybox:latest...Getting image source signatures
Copying blob sha256:75a0e65efd518b9bcac8a8287e5c7032bc81f8cbfbe03271fd049b81ab26119b
716.01 KB / 716.01 KB [====================================================] 0s
Copying config sha256:22c2dd5ee85dc01136051800684b0bf30016a3082f97093c806152bf43d4e089
1.46 KB / 1.46 KB [========================================================] 0s
Writing manifest to image destination
Storing signatures
22c2dd5ee85dc01136051800684b0bf30016a3082f97093c806152bf43d4e089
$ sudo podman save busybox:latest > busybox_latest
Getting image source signatures
Copying blob sha256:8e9a7d50b12c4249f7473606c9685f4f4be919a3c00e49a7c3a314ae9de52ed5
1.31 MB / 1.31 MB [========================================================] 0s
Copying config sha256:22c2dd5ee85dc01136051800684b0bf30016a3082f97093c806152bf43d4e089
1.46 KB / 1.46 KB [========================================================] 0s
Writing manifest to image destination
Storing signatures
$ sudo podman rmi busybox:latest
22c2dd5ee85dc01136051800684b0bf30016a3082f97093c806152bf43d4e089
$ sudo podman load < busybox_latest
Getting image source signatures
Copying blob sha256:8e9a7d50b12c4249f7473606c9685f4f4be919a3c00e49a7c3a314ae9de52ed5
1.31 MB / 1.31 MB [========================================================] 0s
Copying config sha256:22c2dd5ee85dc01136051800684b0bf30016a3082f97093c806152bf43d4e089
1.46 KB / 1.46 KB [========================================================] 0s
Writing manifest to image destination
Storing signatures
Loaded image(s): docker.io/library/busybox:latest
|
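The pull order above (Red Hat, then Fedora, then Docker Hub) comes from the registry search list in registries.conf; a file along these lines (a sketch of the v1 format used at the time, not the exact file from the VM) would produce it:

```toml
[registries.search]
registries = ['registry.access.redhat.com', 'registry.fedoraproject.org', 'docker.io']
```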
Meant to write that it also works fine loading from a cache file, i.e. docker save | sudo podman load
For some reason the VM runs out of RAM and starts swapping; seems CRI-O needs a gig more ? |
After reverting 4b060b2, all images were loaded successfully from cache (using podman).
Debug log:
Waiting for image caching to complete...
Moving files into cluster...
I0802 15:40:43.282442 5622 cache_images.go:202] Loading image from cache at /home/anders/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1
I0802 15:40:43.282477 5622 cache_images.go:202] Loading image from cache at /home/anders/.minikube/cache/images/k8s.gcr.io/kube-scheduler-amd64_v1.10.0
I0802 15:40:43.282496 5622 ssh_runner.go:57] Run: sudo rm -f /tmp/storage-provisioner_v1.8.1
I0802 15:40:43.282507 5622 ssh_runner.go:57] Run: sudo rm -f /tmp/kube-scheduler-amd64_v1.10.0
I0802 15:40:43.282561 5622 cache_images.go:202] Loading image from cache at /home/anders/.minikube/cache/images/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64_1.14.8
I0802 15:40:43.282590 5622 ssh_runner.go:57] Run: sudo rm -f /tmp/k8s-dns-dnsmasq-nanny-amd64_1.14.8
I0802 15:40:43.282630 5622 cache_images.go:202] Loading image from cache at /home/anders/.minikube/cache/images/k8s.gcr.io/kube-apiserver-amd64_v1.10.0
I0802 15:40:43.282658 5622 ssh_runner.go:57] Run: sudo rm -f /tmp/kube-apiserver-amd64_v1.10.0
I0802 15:40:43.282713 5622 cache_images.go:202] Loading image from cache at /home/anders/.minikube/cache/images/k8s.gcr.io/pause-amd64_3.1
I0802 15:40:43.282742 5622 ssh_runner.go:57] Run: sudo rm -f /tmp/pause-amd64_3.1
I0802 15:40:43.282782 5622 cache_images.go:202] Loading image from cache at /home/anders/.minikube/cache/images/k8s.gcr.io/k8s-dns-kube-dns-amd64_1.14.8
I0802 15:40:43.282811 5622 ssh_runner.go:57] Run: sudo rm -f /tmp/k8s-dns-kube-dns-amd64_1.14.8
I0802 15:40:43.282851 5622 cache_images.go:202] Loading image from cache at /home/anders/.minikube/cache/images/k8s.gcr.io/k8s-dns-sidecar-amd64_1.14.8
I0802 15:40:43.282888 5622 ssh_runner.go:57] Run: sudo rm -f /tmp/k8s-dns-sidecar-amd64_1.14.8
I0802 15:40:43.282921 5622 cache_images.go:202] Loading image from cache at /home/anders/.minikube/cache/images/k8s.gcr.io/kubernetes-dashboard-amd64_v1.8.1
I0802 15:40:43.282952 5622 ssh_runner.go:57] Run: sudo rm -f /tmp/kubernetes-dashboard-amd64_v1.8.1
I0802 15:40:43.282965 5622 cache_images.go:202] Loading image from cache at /home/anders/.minikube/cache/images/k8s.gcr.io/kube-addon-manager_v8.6
I0802 15:40:43.282993 5622 ssh_runner.go:57] Run: sudo rm -f /tmp/kube-addon-manager_v8.6
I0802 15:40:43.283014 5622 cache_images.go:202] Loading image from cache at /home/anders/.minikube/cache/images/k8s.gcr.io/etcd-amd64_3.1.12
I0802 15:40:43.282442 5622 cache_images.go:202] Loading image from cache at /home/anders/.minikube/cache/images/k8s.gcr.io/kube-controller-manager-amd64_v1.10.0
I0802 15:40:43.283070 5622 ssh_runner.go:57] Run: sudo rm -f /tmp/kube-controller-manager-amd64_v1.10.0
I0802 15:40:43.282462 5622 cache_images.go:202] Loading image from cache at /home/anders/.minikube/cache/images/k8s.gcr.io/kube-proxy-amd64_v1.10.0
I0802 15:40:43.283160 5622 ssh_runner.go:57] Run: sudo rm -f /tmp/kube-proxy-amd64_v1.10.0
I0802 15:40:43.283042 5622 ssh_runner.go:57] Run: sudo rm -f /tmp/etcd-amd64_3.1.12
I0802 15:40:43.344699 5622 ssh_runner.go:57] Run: sudo mkdir -p /tmp
I0802 15:40:43.344760 5622 ssh_runner.go:57] Run: sudo mkdir -p /tmp
I0802 15:40:43.344780 5622 ssh_runner.go:57] Run: sudo mkdir -p /tmp
I0802 15:40:43.344792 5622 ssh_runner.go:57] Run: sudo mkdir -p /tmp
I0802 15:40:43.344811 5622 ssh_runner.go:57] Run: sudo mkdir -p /tmp
I0802 15:40:43.344829 5622 ssh_runner.go:57] Run: sudo mkdir -p /tmp
I0802 15:40:43.345011 5622 ssh_runner.go:57] Run: sudo mkdir -p /tmp
I0802 15:40:43.345131 5622 ssh_runner.go:57] Run: sudo mkdir -p /tmp
I0802 15:40:43.345263 5622 ssh_runner.go:57] Run: sudo mkdir -p /tmp
I0802 15:40:43.345280 5622 ssh_runner.go:57] Run: sudo mkdir -p /tmp
I0802 15:40:43.345302 5622 ssh_runner.go:57] Run: sudo mkdir -p /tmp
I0802 15:40:43.345323 5622 ssh_runner.go:57] Run: sudo mkdir -p /tmp
I0802 15:40:43.494925 5622 ssh_runner.go:57] Run: sudo podman load -i /tmp/pause-amd64_3.1
I0802 15:40:43.803774 5622 ssh_runner.go:57] Run: sudo rm -rf /tmp/pause-amd64_3.1
I0802 15:40:43.904898 5622 cache_images.go:233] Successfully loaded image /home/anders/.minikube/cache/images/k8s.gcr.io/pause-amd64_3.1 from cache
I0802 15:40:45.058078 5622 ssh_runner.go:57] Run: sudo podman load -i /tmp/k8s-dns-dnsmasq-nanny-amd64_1.14.8
I0802 15:40:45.151674 5622 ssh_runner.go:57] Run: sudo podman load -i /tmp/k8s-dns-sidecar-amd64_1.14.8
I0802 15:40:45.461909 5622 ssh_runner.go:57] Run: sudo podman load -i /tmp/kube-scheduler-amd64_v1.10.0
I0802 15:40:45.560335 5622 ssh_runner.go:57] Run: sudo podman load -i /tmp/k8s-dns-kube-dns-amd64_1.14.8
I0802 15:40:46.998606 5622 ssh_runner.go:57] Run: sudo podman load -i /tmp/storage-provisioner_v1.8.1
I0802 15:40:47.105197 5622 ssh_runner.go:57] Run: sudo podman load -i /tmp/kube-addon-manager_v8.6
I0802 15:40:48.430117 5622 ssh_runner.go:57] Run: sudo podman load -i /tmp/kube-controller-manager-amd64_v1.10.0
I0802 15:40:48.573310 5622 ssh_runner.go:57] Run: sudo podman load -i /tmp/kube-proxy-amd64_v1.10.0
I0802 15:40:48.808047 5622 ssh_runner.go:57] Run: sudo podman load -i /tmp/kube-apiserver-amd64_v1.10.0
I0802 15:40:49.377031 5622 ssh_runner.go:57] Run: sudo podman load -i /tmp/kubernetes-dashboard-amd64_v1.8.1
I0802 15:40:50.108506 5622 ssh_runner.go:57] Run: sudo podman load -i /tmp/etcd-amd64_3.1.12
I0802 15:40:55.985915 5622 ssh_runner.go:57] Run: sudo rm -rf /tmp/k8s-dns-dnsmasq-nanny-amd64_1.14.8
I0802 15:40:56.040480 5622 cache_images.go:233] Successfully loaded image /home/anders/.minikube/cache/images/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64_1.14.8 from cache
I0802 15:40:56.283312 5622 ssh_runner.go:57] Run: sudo rm -rf /tmp/k8s-dns-sidecar-amd64_1.14.8
I0802 15:40:56.344541 5622 cache_images.go:233] Successfully loaded image /home/anders/.minikube/cache/images/k8s.gcr.io/k8s-dns-sidecar-amd64_1.14.8 from cache
I0802 15:41:01.729975 5622 ssh_runner.go:57] Run: sudo rm -rf /tmp/kube-scheduler-amd64_v1.10.0
I0802 15:41:01.792491 5622 cache_images.go:233] Successfully loaded image /home/anders/.minikube/cache/images/k8s.gcr.io/kube-scheduler-amd64_v1.10.0 from cache
I0802 15:41:01.887907 5622 ssh_runner.go:57] Run: sudo rm -rf /tmp/k8s-dns-kube-dns-amd64_1.14.8
I0802 15:41:01.948482 5622 cache_images.go:233] Successfully loaded image /home/anders/.minikube/cache/images/k8s.gcr.io/k8s-dns-kube-dns-amd64_1.14.8 from cache
I0802 15:41:02.088075 5622 ssh_runner.go:57] Run: sudo rm -rf /tmp/storage-provisioner_v1.8.1
I0802 15:41:02.156475 5622 cache_images.go:233] Successfully loaded image /home/anders/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1 from cache
I0802 15:41:11.748274 5622 ssh_runner.go:57] Run: sudo rm -rf /tmp/kubernetes-dashboard-amd64_v1.8.1
I0802 15:41:11.820496 5622 cache_images.go:233] Successfully loaded image /home/anders/.minikube/cache/images/k8s.gcr.io/kubernetes-dashboard-amd64_v1.8.1 from cache
I0802 15:41:15.078819 5622 ssh_runner.go:57] Run: sudo rm -rf /tmp/kube-controller-manager-amd64_v1.10.0
I0802 15:41:15.132502 5622 cache_images.go:233] Successfully loaded image /home/anders/.minikube/cache/images/k8s.gcr.io/kube-controller-manager-amd64_v1.10.0 from cache
I0802 15:41:18.758481 5622 ssh_runner.go:57] Run: sudo rm -rf /tmp/kube-proxy-amd64_v1.10.0
I0802 15:41:18.812536 5622 cache_images.go:233] Successfully loaded image /home/anders/.minikube/cache/images/k8s.gcr.io/kube-proxy-amd64_v1.10.0 from cache
I0802 15:41:18.857337 5622 ssh_runner.go:57] Run: sudo rm -rf /tmp/kube-apiserver-amd64_v1.10.0
I0802 15:41:18.900535 5622 cache_images.go:233] Successfully loaded image /home/anders/.minikube/cache/images/k8s.gcr.io/kube-apiserver-amd64_v1.10.0 from cache
I0802 15:41:19.897747 5622 ssh_runner.go:57] Run: sudo rm -rf /tmp/kube-addon-manager_v8.6
I0802 15:41:19.944727 5622 cache_images.go:233] Successfully loaded image /home/anders/.minikube/cache/images/k8s.gcr.io/kube-addon-manager_v8.6 from cache
I0802 15:41:19.944727 5622 ssh_runner.go:57] Run: sudo rm -rf /tmp/etcd-amd64_3.1.12
I0802 15:41:19.992492 5622 cache_images.go:233] Successfully loaded image /home/anders/.minikube/cache/images/k8s.gcr.io/etcd-amd64_3.1.12 from cache
I0802 15:41:19.992531 5622 cache_images.go:107] Successfully loaded all cached images.
Now, just have to stop the docker daemon from starting at all, during minikube boot time...
|
Opened #3042 for the broken caching, since it now hangs forever on start. |
It is "normal" for podman to not see the containers from crictl, at least for now (until libpod is merged) https://blog.openshift.com/crictl-vs-podman/
It does see the images, though, if you configure both to use the same endpoint/storage configuration. |
This returned some weird |
After the final podman update, and the fixes for Kubernetes v1.11.0, now CRI seems to be working OK again.
I ran a manual
|
It seems like podman and crictl still argue a bit about images, and crictl.yaml is still missing (#3043)
Different etcd versions ? But at least it is not trying to load the cached images into docker anymore... |
@minikube-bot OK to test |
@minikube-bot OK to test |
Do you mind merging against master? The test results look really old. |
@tstromberg : Will do, it has been lying around for quite some time (i.e. it was against k8s 1.10). Should be smaller now with podman updated, but the config handling will need a rewrite... This one has conflicts with 5d910e8 and ae9f4b2: it moved the loadConfig, but didn't move the saveConfig.
is now instead |
To avoid running out of memory when loading the images, since podman does not have a daemon to serialize things (like Docker does)
[APPROVALNOTIFIER] This PR is APPROVED This pull-request has been approved by: afbjorklund The full list of commands accepted by this bot can be found here. The pull request process is described here
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing |
}

if crio {
	podmanLoad.Lock()
What type of failure is this lock trying to prevent?
Unless it's something specifically terrible, I'd prefer as little state as possible here.
As per the commit message, it was running out of memory in the VM when running all the "load" commands in parallel. With docker, things are serialized / queued in the docker daemon. But with podman, it will actually try to run all of them at once. So I had to introduce a lock, for the command to succeed. It's mostly I/O-bound anyway, so you don't lose too much time by it.
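The effect of that lock can be sketched in a few lines of illustrative Python (the actual change is in minikube's Go code): the worker goroutines/threads still start in parallel, but the lock caps the number of in-flight loads at one, bounding peak memory.

```python
import threading

# Serializes the memory-hungry part, like podmanLoad.Lock() does in the PR.
podman_load_lock = threading.Lock()
state_lock = threading.Lock()  # only protects the bookkeeping below
in_flight = 0
peak = 0

def load_image(name):
    global in_flight, peak
    with podman_load_lock:
        with state_lock:
            in_flight += 1
            peak = max(peak, in_flight)
        # ... "sudo podman load -i <name>" would run here ...
        with state_lock:
            in_flight -= 1

threads = [threading.Thread(target=load_image, args=("img%d" % i,))
           for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("peak concurrent loads:", peak)  # 1, because the lock serializes them
```

Without podman_load_lock, all eight loads would run at once and peak would climb toward 8; with it, wall-clock time is traded for a bounded peak, which matters little here since the work is mostly I/O-bound.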
@tstromberg : is something more needed here ? |
@minikube-bot OK to test |
For some reason crictl still doesn't see the images loaded (so it will just pull them again),
but at least it is not trying to load the cached images into the docker daemon any longer...
Still have to set up /etc/crictl.yaml configuration manually, but that is a different story.
runtime-endpoint: /var/run/crio/crio.sock
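A complete /etc/crictl.yaml along those lines might look like this (the image-endpoint line is an assumption; CRI-O serves both the runtime and image services from the same socket):

```yaml
runtime-endpoint: /var/run/crio/crio.sock
image-endpoint: /var/run/crio/crio.sock
```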