
CRI: try to use "sudo podman load" instead of "docker load" #2757

Merged
merged 2 commits into kubernetes:master on Jan 30, 2019

Conversation

afbjorklund
Collaborator

For some reason crictl still doesn't see the images loaded (so it will just pull them again),
but at least it is not trying to load the cached images into the docker daemon any longer...

Still have to set up the /etc/crictl.yaml configuration manually, but that is a different story:
runtime-endpoint: /var/run/crio/crio.sock
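
For reference, a complete /etc/crictl.yaml might look like the sketch below. The runtime-endpoint is taken from above; the image-endpoint, timeout, and debug fields are assumptions based on crictl's documented options:

```yaml
# /etc/crictl.yaml -- sketch; everything beyond runtime-endpoint is an assumption
runtime-endpoint: unix:///var/run/crio/crio.sock
image-endpoint: unix:///var/run/crio/crio.sock
timeout: 10
debug: false
```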


$ sudo crictl images
IMAGE                                      TAG                 IMAGE ID            SIZE
docker.io/kubernetes/pause                 latest              f9d5de0795395       251kB
gcr.io/k8s-minikube/storage-provisioner    v1.8.1              4689081edb103       80.8MB
k8s.gcr.io/etcd-amd64                      3.1.12              52920ad46f5bf       193MB
k8s.gcr.io/heapster-amd64                  v1.5.0              86a0ddc3a8c25       75.3MB
k8s.gcr.io/heapster-grafana-amd64          v4.4.3              8cb3de219af7b       155MB
k8s.gcr.io/heapster-influxdb-amd64         v1.3.3              577260d221dbb       12.8MB
k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64     1.14.8              c2ce1ffb51ed6       41.2MB
k8s.gcr.io/k8s-dns-kube-dns-amd64          1.14.8              80cc5ea4b547a       50.7MB
k8s.gcr.io/k8s-dns-sidecar-amd64           1.14.8              6f7f2dc7fab5d       42.5MB
k8s.gcr.io/kube-addon-manager              v8.6                9c16409588eb1       80.6MB
k8s.gcr.io/kube-apiserver-amd64            v1.10.0             af20925d51a37       225MB
k8s.gcr.io/kube-controller-manager-amd64   v1.10.0             ad86dbed15559       148MB
k8s.gcr.io/kube-proxy-amd64                v1.10.0             bfc21aadc7d3e       98.9MB
k8s.gcr.io/kube-scheduler-amd64            v1.10.0             704ba848e69a7       50.6MB
k8s.gcr.io/kubernetes-dashboard-amd64      v1.8.1              e94d2f21bc0c2       121MB
$ sudo podman images
$ 

@k8s-ci-robot k8s-ci-robot added do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Apr 21, 2018
@afbjorklund
Collaborator Author

Turns out this is because they have different defaults:

crictl: /mnt/sda1/var/lib/containers
podman: /var/lib/containers/storage

So I guess there is another mandatory config missing?

@afbjorklund
Collaborator Author

The missing config was called "storage.conf", to change the GraphRoot (and driver):

$ cat /etc/containers/storage.conf 
[storage]
driver = "overlay2"
runroot = "/var/run/containers/storage"
graphroot = "/mnt/sda1/var/lib/containers"
$ sudo podman images
REPOSITORY                                 TAG       IMAGE ID       CREATED        SIZE
docker.io/kubernetes/pause                 latest    f9d5de079539   3 years ago    251kB
k8s.gcr.io/kube-scheduler-amd64            v1.10.0   704ba848e69a   3 weeks ago    50.6MB
k8s.gcr.io/kube-apiserver-amd64            v1.10.0   af20925d51a3   3 weeks ago    225MB
k8s.gcr.io/etcd-amd64                      3.1.12    52920ad46f5b   6 weeks ago    193MB
k8s.gcr.io/kube-controller-manager-amd64   v1.10.0   ad86dbed1555   3 weeks ago    148MB
k8s.gcr.io/kube-addon-manager              v8.6      9c16409588eb   8 weeks ago    80.6MB
k8s.gcr.io/kube-proxy-amd64                v1.10.0   bfc21aadc7d3   3 weeks ago    98.9MB
k8s.gcr.io/k8s-dns-kube-dns-amd64          1.14.8    80cc5ea4b547   3 months ago   50.7MB
gcr.io/k8s-minikube/storage-provisioner    v1.8.1    4689081edb10   5 months ago   80.8MB
k8s.gcr.io/kubernetes-dashboard-amd64      v1.8.1    e94d2f21bc0c   4 months ago   121MB
k8s.gcr.io/heapster-influxdb-amd64         v1.3.3    577260d221db   7 months ago   12.8MB
k8s.gcr.io/heapster-amd64                  v1.5.0    86a0ddc3a8c2   4 months ago   75.3MB
k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64     1.14.8    c2ce1ffb51ed   3 months ago   41.2MB
k8s.gcr.io/heapster-grafana-amd64          v4.4.3    8cb3de219af7   7 months ago   155MB
k8s.gcr.io/k8s-dns-sidecar-amd64           1.14.8    6f7f2dc7fab5   3 months ago   42.5MB
$ 

Unless you also set the driver, you get an error like:
Could not get runtime: /mnt/sda1/var/lib/containers contains several valid graphdrivers: overlay, overlay2; Please cleanup or explicitly choose storage driver (-s <DRIVER>

@gbraad
Contributor

gbraad commented Apr 21, 2018 via email

@afbjorklund
Collaborator Author

Probably similar to /var/lib/docker:

https://github.com/kubernetes/minikube/blob/master/deploy/iso/minikube-iso/package/automount/minikube-automount#L123

But the path still differs by a trailing "storage"...?

@afbjorklund
Collaborator Author

Also, the basics seem to be working. But it still cannot load the cached images from minikube?

The error reporting is kinda horrible, though. It will just fall back through docker/oci/dir, and give up silently.

$ sudo podman pull busybox:latest
Trying to pull docker.io/busybox:latest...Getting image source signatures
Copying blob sha256:f70adabe43c0cccffbae8785406d490e26855b8748fc982d14bc2b20c778b929
 706.22 KB / 706.22 KB [====================================================] 0s
Copying config sha256:8ac48589692a53a9b8c2d1ceaa6b402665aa7fe667ba51ccc03002300856d8c7
 1.46 KB / 1.46 KB [========================================================] 0s
Writing manifest to image destination
Storing signatures
8ac48589692a53a9b8c2d1ceaa6b402665aa7fe667ba51ccc03002300856d8c7
$ sudo podman save busybox:latest > busybox_latest
Getting image source signatures
Copying blob sha256:0314be9edf00a925d59f9b88c9d8ccb34447ab677078874d8c14e7a6816e21e1
 1.30 MB / 1.30 MB [========================================================] 0s
Copying config sha256:8ac48589692a53a9b8c2d1ceaa6b402665aa7fe667ba51ccc03002300856d8c7
 1.46 KB / 1.46 KB [========================================================] 0s
Writing manifest to image destination
Storing signatures
$ sudo podman rmi busybox:latest 
8ac48589692a53a9b8c2d1ceaa6b402665aa7fe667ba51ccc03002300856d8c7
$ sudo podman load < busybox_latest
Getting image source signatures
Copying blob sha256:0314be9edf00a925d59f9b88c9d8ccb34447ab677078874d8c14e7a6816e21e1
 1.30 MB / 1.30 MB [========================================================] 0s
Copying config sha256:8ac48589692a53a9b8c2d1ceaa6b402665aa7fe667ba51ccc03002300856d8c7
 1.46 KB / 1.46 KB [========================================================] 0s
Writing manifest to image destination
Storing signatures
Loaded image:  docker.io/library/busybox:latest

@afbjorklund
Collaborator Author

Seems like the main problem is/was the override in minikube, to use a non-standard directory:

https://github.com/kubernetes/minikube/blob/master/deploy/iso/minikube-iso/package/crio-bin/crio.service#L17

So it would add the /mnt/sda1 (instead of fixing the bindmount), and use the wrong directory...

--root="": The crio root dir (default: "/var/lib/containers/storage")
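
One way to point crio back at the standard directory without patching the unit file would be a systemd drop-in. This is only a sketch; the drop-in path and the ExecStart line are assumptions, not the actual minikube fix:

```ini
# /etc/systemd/system/crio.service.d/10-root.conf (hypothetical drop-in)
[Service]
ExecStart=
ExecStart=/usr/bin/crio --root=/var/lib/containers/storage
```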

@afbjorklund
Collaborator Author

But the image loading still fails, so I think that's a bug in podman...

Copying blob sha256:4febd3792a1fb2153108b4fa50161c6ee5e3d16aa483a63215f936a113a88e9a
DEBU[0000] Detected compression format gzip             
DEBU[0000] No compression detected                      
 0 B / 706.12 KB [-------------------------------------------------------------]DEBU[0000] Using original blob without modification     
 1.30 MB / 706.12 KB [======================================================] 0s
Failed
DEBU[0000] parsed reference to refname into "[overlay@/var/lib/containers/storage+/var/run/containers/storage]docker.io/tmp/busybox_latest:latest" 
Failed
ERRO[0000] error pulling "dir:/tmp/busybox_latest": unable to pull dir:/tmp/busybox_latest 

That was supposed to be "docker.io/busybox:latest", for starters?

@afbjorklund
Collaborator Author

afbjorklund commented Apr 21, 2018

At least you can view the images used by CRI-O, even if you can't load them from cache yet:

$ sudo crictl -r /var/run/crio/crio.sock images
IMAGE                                      TAG                 IMAGE ID            SIZE
docker.io/kubernetes/pause                 latest              f9d5de0795395       251kB
k8s.gcr.io/etcd-amd64                      3.1.12              52920ad46f5bf       193MB
k8s.gcr.io/kube-addon-manager              v8.6                9c16409588eb1       80.6MB
k8s.gcr.io/kube-apiserver-amd64            v1.10.0             af20925d51a37       225MB
k8s.gcr.io/kube-controller-manager-amd64   v1.10.0             ad86dbed15559       148MB
k8s.gcr.io/kube-scheduler-amd64            v1.10.0             704ba848e69a7       50.6MB
$ sudo podman -s overlay images
WARN[0000] unable to find /etc/containers/registries.conf. some podman (image shortnames) commands may be limited 
REPOSITORY                                 TAG       IMAGE ID       CREATED       SIZE
docker.io/kubernetes/pause                 latest    f9d5de079539   3 years ago   251kB
k8s.gcr.io/kube-apiserver-amd64            v1.10.0   af20925d51a3   3 weeks ago   225MB
k8s.gcr.io/kube-scheduler-amd64            v1.10.0   704ba848e69a   3 weeks ago   50.6MB
k8s.gcr.io/kube-addon-manager              v8.6      9c16409588eb   8 weeks ago   80.6MB
k8s.gcr.io/kube-controller-manager-amd64   v1.10.0   ad86dbed1555   3 weeks ago   148MB
k8s.gcr.io/etcd-amd64                      3.1.12    52920ad46f5b   6 weeks ago   193MB
$ 

They seem to disagree a bit on the sorting, and it would be nice to avoid the mandatory flags...

Probably should add /etc/containers/registries.conf and /etc/containers/storage.conf

But those files should probably be managed by crio itself, rather than by crictl and podman?
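
A minimal /etc/containers/registries.conf could look like the sketch below (the v1 TOML format; the actual registry lists to use are an assumption):

```toml
# /etc/containers/registries.conf -- sketch; registry lists are assumptions
[registries.search]
registries = ['docker.io']

[registries.insecure]
registries = []
```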

@afbjorklund
Collaborator Author

It seems like podman is not yet stable enough to serve as a replacement/alternative. containers/image#427

But we should probably update kpod anyway, and stop loading the cached images (for crio) into dockerd?

@dlorenc
Contributor

dlorenc commented Apr 30, 2018

But we should probably update kpod anyway, and stop loading the cached images (for crio) into dockerd?

Yeah this makes sense. Thanks for spending so much time on this one.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 29, 2018
@afbjorklund
Collaborator Author

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 30, 2018
@afbjorklund
Collaborator Author

afbjorklund commented Jul 30, 2018

Tried with podman 0.7.4, but it still doesn't work

UPDATE: Actually podman does seem to work now. It is minikube image caching that doesn't...

$ sudo vi /etc/containers/registries.conf
$ sudo podman pull busybox:latest
Trying to pull registry.access.redhat.com/busybox:latest...Failed
Trying to pull registry.fedoraproject.org/busybox:latest...Failed
Trying to pull docker.io/busybox:latest...Getting image source signatures
Copying blob sha256:75a0e65efd518b9bcac8a8287e5c7032bc81f8cbfbe03271fd049b81ab26119b
 716.01 KB / 716.01 KB [====================================================] 0s
Copying config sha256:22c2dd5ee85dc01136051800684b0bf30016a3082f97093c806152bf43d4e089
 1.46 KB / 1.46 KB [========================================================] 0s
Writing manifest to image destination
Storing signatures
22c2dd5ee85dc01136051800684b0bf30016a3082f97093c806152bf43d4e089
$ sudo podman save busybox:latest > busybox_latest
Getting image source signatures
Copying blob sha256:8e9a7d50b12c4249f7473606c9685f4f4be919a3c00e49a7c3a314ae9de52ed5
 1.31 MB / 1.31 MB [========================================================] 0s
Copying config sha256:22c2dd5ee85dc01136051800684b0bf30016a3082f97093c806152bf43d4e089
 1.46 KB / 1.46 KB [========================================================] 0s
Writing manifest to image destination
Storing signatures
$ sudo podman rmi busybox:latest 
22c2dd5ee85dc01136051800684b0bf30016a3082f97093c806152bf43d4e089
$ sudo podman load < busybox_latest
Getting image source signatures
Copying blob sha256:8e9a7d50b12c4249f7473606c9685f4f4be919a3c00e49a7c3a314ae9de52ed5
 1.31 MB / 1.31 MB [========================================================] 0s
Copying config sha256:22c2dd5ee85dc01136051800684b0bf30016a3082f97093c806152bf43d4e089
 1.46 KB / 1.46 KB [========================================================] 0s
Writing manifest to image destination
Storing signatures
Loaded image(s): docker.io/library/busybox:latest

@afbjorklund
Collaborator Author

Meant to write that it also works fine loading from a cache file, i.e. docker save | sudo podman load

For some reason the VM runs out of RAM and starts swapping; it seems CRI-O needs a gig more?

@afbjorklund
Collaborator Author

After reverting 4b060b2, all images were loaded successfully from cache (using podman).

$ sudo podman images
REPOSITORY                                 TAG       IMAGE ID       CREATED        SIZE
docker.io/library/busybox                  latest    e1ddd7948a1c   39 hours ago   1.38MB
k8s.gcr.io/kube-proxy-amd64                v1.10.0   bfc21aadc7d3   4 months ago   98.9MB
k8s.gcr.io/kube-controller-manager-amd64   v1.10.0   ad86dbed1555   4 months ago   148MB
k8s.gcr.io/kube-apiserver-amd64            v1.10.0   af20925d51a3   4 months ago   225MB
k8s.gcr.io/kube-scheduler-amd64            v1.10.0   704ba848e69a   4 months ago   50.6MB
k8s.gcr.io/etcd-amd64                      3.1.12    52920ad46f5b   4 months ago   193MB
k8s.gcr.io/kube-addon-manager              v8.6      9c16409588eb   5 months ago   80.6MB
k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64     1.14.8    c2ce1ffb51ed   6 months ago   41.2MB
k8s.gcr.io/k8s-dns-sidecar-amd64           1.14.8    6f7f2dc7fab5   6 months ago   42.5MB
k8s.gcr.io/k8s-dns-kube-dns-amd64          1.14.8    80cc5ea4b547   6 months ago   50.7MB
k8s.gcr.io/pause-amd64                     3.1       da86e6ba6ca1   7 months ago   746kB
k8s.gcr.io/kubernetes-dashboard-amd64      v1.8.1    e94d2f21bc0c   7 months ago   121MB
gcr.io/k8s-minikube/storage-provisioner    v1.8.1    4689081edb10   8 months ago   80.8MB
Debug log
Waiting for image caching to complete...
Moving files into cluster...
I0802 15:40:43.282442    5622 cache_images.go:202] Loading image from cache at  /home/anders/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1
I0802 15:40:43.282477    5622 cache_images.go:202] Loading image from cache at  /home/anders/.minikube/cache/images/k8s.gcr.io/kube-scheduler-amd64_v1.10.0
I0802 15:40:43.282496    5622 ssh_runner.go:57] Run: sudo rm -f /tmp/storage-provisioner_v1.8.1
I0802 15:40:43.282507    5622 ssh_runner.go:57] Run: sudo rm -f /tmp/kube-scheduler-amd64_v1.10.0
I0802 15:40:43.282561    5622 cache_images.go:202] Loading image from cache at  /home/anders/.minikube/cache/images/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64_1.14.8
I0802 15:40:43.282590    5622 ssh_runner.go:57] Run: sudo rm -f /tmp/k8s-dns-dnsmasq-nanny-amd64_1.14.8
I0802 15:40:43.282630    5622 cache_images.go:202] Loading image from cache at  /home/anders/.minikube/cache/images/k8s.gcr.io/kube-apiserver-amd64_v1.10.0
I0802 15:40:43.282658    5622 ssh_runner.go:57] Run: sudo rm -f /tmp/kube-apiserver-amd64_v1.10.0
I0802 15:40:43.282713    5622 cache_images.go:202] Loading image from cache at  /home/anders/.minikube/cache/images/k8s.gcr.io/pause-amd64_3.1
I0802 15:40:43.282742    5622 ssh_runner.go:57] Run: sudo rm -f /tmp/pause-amd64_3.1
I0802 15:40:43.282782    5622 cache_images.go:202] Loading image from cache at  /home/anders/.minikube/cache/images/k8s.gcr.io/k8s-dns-kube-dns-amd64_1.14.8
I0802 15:40:43.282811    5622 ssh_runner.go:57] Run: sudo rm -f /tmp/k8s-dns-kube-dns-amd64_1.14.8
I0802 15:40:43.282851    5622 cache_images.go:202] Loading image from cache at  /home/anders/.minikube/cache/images/k8s.gcr.io/k8s-dns-sidecar-amd64_1.14.8
I0802 15:40:43.282888    5622 ssh_runner.go:57] Run: sudo rm -f /tmp/k8s-dns-sidecar-amd64_1.14.8
I0802 15:40:43.282921    5622 cache_images.go:202] Loading image from cache at  /home/anders/.minikube/cache/images/k8s.gcr.io/kubernetes-dashboard-amd64_v1.8.1
I0802 15:40:43.282952    5622 ssh_runner.go:57] Run: sudo rm -f /tmp/kubernetes-dashboard-amd64_v1.8.1
I0802 15:40:43.282965    5622 cache_images.go:202] Loading image from cache at  /home/anders/.minikube/cache/images/k8s.gcr.io/kube-addon-manager_v8.6
I0802 15:40:43.282993    5622 ssh_runner.go:57] Run: sudo rm -f /tmp/kube-addon-manager_v8.6
I0802 15:40:43.283014    5622 cache_images.go:202] Loading image from cache at  /home/anders/.minikube/cache/images/k8s.gcr.io/etcd-amd64_3.1.12
I0802 15:40:43.282442    5622 cache_images.go:202] Loading image from cache at  /home/anders/.minikube/cache/images/k8s.gcr.io/kube-controller-manager-amd64_v1.10.0
I0802 15:40:43.283070    5622 ssh_runner.go:57] Run: sudo rm -f /tmp/kube-controller-manager-amd64_v1.10.0
I0802 15:40:43.282462    5622 cache_images.go:202] Loading image from cache at  /home/anders/.minikube/cache/images/k8s.gcr.io/kube-proxy-amd64_v1.10.0
I0802 15:40:43.283160    5622 ssh_runner.go:57] Run: sudo rm -f /tmp/kube-proxy-amd64_v1.10.0
I0802 15:40:43.283042    5622 ssh_runner.go:57] Run: sudo rm -f /tmp/etcd-amd64_3.1.12
I0802 15:40:43.344699    5622 ssh_runner.go:57] Run: sudo mkdir -p /tmp
I0802 15:40:43.344760    5622 ssh_runner.go:57] Run: sudo mkdir -p /tmp
I0802 15:40:43.344780    5622 ssh_runner.go:57] Run: sudo mkdir -p /tmp
I0802 15:40:43.344792    5622 ssh_runner.go:57] Run: sudo mkdir -p /tmp
I0802 15:40:43.344811    5622 ssh_runner.go:57] Run: sudo mkdir -p /tmp
I0802 15:40:43.344829    5622 ssh_runner.go:57] Run: sudo mkdir -p /tmp
I0802 15:40:43.345011    5622 ssh_runner.go:57] Run: sudo mkdir -p /tmp
I0802 15:40:43.345131    5622 ssh_runner.go:57] Run: sudo mkdir -p /tmp
I0802 15:40:43.345263    5622 ssh_runner.go:57] Run: sudo mkdir -p /tmp
I0802 15:40:43.345280    5622 ssh_runner.go:57] Run: sudo mkdir -p /tmp
I0802 15:40:43.345302    5622 ssh_runner.go:57] Run: sudo mkdir -p /tmp
I0802 15:40:43.345323    5622 ssh_runner.go:57] Run: sudo mkdir -p /tmp
I0802 15:40:43.494925    5622 ssh_runner.go:57] Run: sudo podman load -i /tmp/pause-amd64_3.1
I0802 15:40:43.803774    5622 ssh_runner.go:57] Run: sudo rm -rf /tmp/pause-amd64_3.1
I0802 15:40:43.904898    5622 cache_images.go:233] Successfully loaded image /home/anders/.minikube/cache/images/k8s.gcr.io/pause-amd64_3.1 from cache
I0802 15:40:45.058078    5622 ssh_runner.go:57] Run: sudo podman load -i /tmp/k8s-dns-dnsmasq-nanny-amd64_1.14.8
I0802 15:40:45.151674    5622 ssh_runner.go:57] Run: sudo podman load -i /tmp/k8s-dns-sidecar-amd64_1.14.8
I0802 15:40:45.461909    5622 ssh_runner.go:57] Run: sudo podman load -i /tmp/kube-scheduler-amd64_v1.10.0
I0802 15:40:45.560335    5622 ssh_runner.go:57] Run: sudo podman load -i /tmp/k8s-dns-kube-dns-amd64_1.14.8
I0802 15:40:46.998606    5622 ssh_runner.go:57] Run: sudo podman load -i /tmp/storage-provisioner_v1.8.1
I0802 15:40:47.105197    5622 ssh_runner.go:57] Run: sudo podman load -i /tmp/kube-addon-manager_v8.6
I0802 15:40:48.430117    5622 ssh_runner.go:57] Run: sudo podman load -i /tmp/kube-controller-manager-amd64_v1.10.0
I0802 15:40:48.573310    5622 ssh_runner.go:57] Run: sudo podman load -i /tmp/kube-proxy-amd64_v1.10.0
I0802 15:40:48.808047    5622 ssh_runner.go:57] Run: sudo podman load -i /tmp/kube-apiserver-amd64_v1.10.0
I0802 15:40:49.377031    5622 ssh_runner.go:57] Run: sudo podman load -i /tmp/kubernetes-dashboard-amd64_v1.8.1
I0802 15:40:50.108506    5622 ssh_runner.go:57] Run: sudo podman load -i /tmp/etcd-amd64_3.1.12
I0802 15:40:55.985915    5622 ssh_runner.go:57] Run: sudo rm -rf /tmp/k8s-dns-dnsmasq-nanny-amd64_1.14.8
I0802 15:40:56.040480    5622 cache_images.go:233] Successfully loaded image /home/anders/.minikube/cache/images/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64_1.14.8 from cache
I0802 15:40:56.283312    5622 ssh_runner.go:57] Run: sudo rm -rf /tmp/k8s-dns-sidecar-amd64_1.14.8
I0802 15:40:56.344541    5622 cache_images.go:233] Successfully loaded image /home/anders/.minikube/cache/images/k8s.gcr.io/k8s-dns-sidecar-amd64_1.14.8 from cache
I0802 15:41:01.729975    5622 ssh_runner.go:57] Run: sudo rm -rf /tmp/kube-scheduler-amd64_v1.10.0
I0802 15:41:01.792491    5622 cache_images.go:233] Successfully loaded image /home/anders/.minikube/cache/images/k8s.gcr.io/kube-scheduler-amd64_v1.10.0 from cache
I0802 15:41:01.887907    5622 ssh_runner.go:57] Run: sudo rm -rf /tmp/k8s-dns-kube-dns-amd64_1.14.8
I0802 15:41:01.948482    5622 cache_images.go:233] Successfully loaded image /home/anders/.minikube/cache/images/k8s.gcr.io/k8s-dns-kube-dns-amd64_1.14.8 from cache
I0802 15:41:02.088075    5622 ssh_runner.go:57] Run: sudo rm -rf /tmp/storage-provisioner_v1.8.1
I0802 15:41:02.156475    5622 cache_images.go:233] Successfully loaded image /home/anders/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1 from cache
I0802 15:41:11.748274    5622 ssh_runner.go:57] Run: sudo rm -rf /tmp/kubernetes-dashboard-amd64_v1.8.1
I0802 15:41:11.820496    5622 cache_images.go:233] Successfully loaded image /home/anders/.minikube/cache/images/k8s.gcr.io/kubernetes-dashboard-amd64_v1.8.1 from cache
I0802 15:41:15.078819    5622 ssh_runner.go:57] Run: sudo rm -rf /tmp/kube-controller-manager-amd64_v1.10.0
I0802 15:41:15.132502    5622 cache_images.go:233] Successfully loaded image /home/anders/.minikube/cache/images/k8s.gcr.io/kube-controller-manager-amd64_v1.10.0 from cache
I0802 15:41:18.758481    5622 ssh_runner.go:57] Run: sudo rm -rf /tmp/kube-proxy-amd64_v1.10.0
I0802 15:41:18.812536    5622 cache_images.go:233] Successfully loaded image /home/anders/.minikube/cache/images/k8s.gcr.io/kube-proxy-amd64_v1.10.0 from cache
I0802 15:41:18.857337    5622 ssh_runner.go:57] Run: sudo rm -rf /tmp/kube-apiserver-amd64_v1.10.0
I0802 15:41:18.900535    5622 cache_images.go:233] Successfully loaded image /home/anders/.minikube/cache/images/k8s.gcr.io/kube-apiserver-amd64_v1.10.0 from cache
I0802 15:41:19.897747    5622 ssh_runner.go:57] Run: sudo rm -rf /tmp/kube-addon-manager_v8.6
I0802 15:41:19.944727    5622 cache_images.go:233] Successfully loaded image /home/anders/.minikube/cache/images/k8s.gcr.io/kube-addon-manager_v8.6 from cache
I0802 15:41:19.944727    5622 ssh_runner.go:57] Run: sudo rm -rf /tmp/etcd-amd64_3.1.12
I0802 15:41:19.992492    5622 cache_images.go:233] Successfully loaded image /home/anders/.minikube/cache/images/k8s.gcr.io/etcd-amd64_3.1.12 from cache
I0802 15:41:19.992531    5622 cache_images.go:107] Successfully loaded all cached images.

Loading cached images from config file.
I0802 15:43:30.112379 5622 cache_images.go:305] Attempting to cache image: busybox:latest at /home/anders/.minikube/cache/images/busybox_latest
I0802 15:43:30.112569 5622 cache_images.go:82] Successfully cached all images.
I0802 15:43:30.136011 5622 cache_images.go:202] Loading image from cache at /home/anders/.minikube/cache/images/busybox_latest
I0802 15:43:30.136258 5622 ssh_runner.go:57] Run: sudo rm -f /tmp/busybox_latest
I0802 15:43:30.220601 5622 ssh_runner.go:57] Run: sudo mkdir -p /tmp
I0802 15:43:30.324473 5622 ssh_runner.go:57] Run: sudo podman load -i /tmp/busybox_latest
I0802 15:43:30.584480 5622 ssh_runner.go:57] Run: sudo rm -rf /tmp/busybox_latest
I0802 15:43:30.628517 5622 cache_images.go:233] Successfully loaded image /home/anders/.minikube/cache/images/busybox_latest from cache
I0802 15:43:30.628550 5622 cache_images.go:107] Successfully loaded all cached images.

Now, just have to stop the docker daemon from starting at all, during minikube boot time...

$ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE

@afbjorklund
Collaborator Author

Opened #3042 for the broken caching, since it now hangs forever on start.
Waiting for image caching to complete...

@afbjorklund afbjorklund changed the title [WIP] CRI: try to use "podman load" instead of "docker load" CRI: try to use "podman load" instead of "docker load" Aug 3, 2018
@k8s-ci-robot k8s-ci-robot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Aug 3, 2018
@afbjorklund afbjorklund changed the title CRI: try to use "podman load" instead of "docker load" CRI: try to use "sudo podman load" instead of "docker load" Aug 3, 2018
@afbjorklund
Collaborator Author

It is "normal" for podman to not see the containers from crictl, at least for now (until libpod is merged)

https://blog.openshift.com/crictl-vs-podman/

Note: Currently Podman and CRI-O do NOT share the same library for identifying containers, yet. This means, Podman cannot list containers created by CRI-O and CRI-O/Crictl does not know about containers created by Podman. We plan on fixing this in the future when we merge libpod (Podman’s container management library) into CRI-O.

It does see the images, though, if you configure both to use the same endpoint/storage configuration.

@afbjorklund
Collaborator Author

This returned some weird podman errors, due to running out of memory for doing all-at-once.

@afbjorklund
Collaborator Author

After the final podman update, and the fixes for Kubernetes v1.11.0, CRI now seems to be working OK again.

System Info:
 Machine ID:                 7827995d9f4a4163bb6b3467c4e13031
 System UUID:                7827995D-9F4A-4163-BB6B-3467C4E13031
 Boot ID:                    8a2aae5e-6f34-466f-900e-3eb7afab2775
 Kernel Version:             4.15.0
 OS Image:                   Buildroot 2018.05
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  cri-o://1.10.0
 Kubelet Version:            v1.11.0
 Kube-Proxy Version:         v1.11.0

I ran a manual sudo systemctl stop docker, until the ISO can be convinced not to start it (#3068):

                         _             _            
            _         _ ( )           ( )           
  ___ ___  (_)  ___  (_)| |/')  _   _ | |_      __  
/' _ ` _ `\| |/' _ `\| || , <  ( ) ( )| '_`\  /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )(  ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)

$ sudo systemctl stop docker
$ sudo systemctl stop rkt-api
$ sudo systemctl stop rkt-metadata

@afbjorklund
Collaborator Author

It seems like podman and crictl still argue a bit about images, and crictl.yaml is still missing (#3043)

$ sudo podman images
REPOSITORY                                 TAG       IMAGE ID       CREATED         SIZE
docker.io/library/busybox                  latest    e1ddd7948a1c   7 weeks ago     1.38MB
k8s.gcr.io/kube-proxy-amd64                v1.11.0   1d3d7afd77d1   2 months ago    99.6MB
k8s.gcr.io/kube-controller-manager-amd64   v1.11.0   55b70b420785   2 months ago    155MB
k8s.gcr.io/kube-apiserver-amd64            v1.11.0   214c48e87f58   2 months ago    187MB
k8s.gcr.io/kube-scheduler-amd64            v1.11.0   0e4a34a3b0e6   2 months ago    57MB
k8s.gcr.io/etcd-amd64                      3.1.12    52920ad46f5b   6 months ago    193MB
k8s.gcr.io/kube-addon-manager              v8.6      9c16409588eb   7 months ago    80.6MB
k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64     1.14.8    c2ce1ffb51ed   8 months ago    41.2MB
k8s.gcr.io/k8s-dns-sidecar-amd64           1.14.8    6f7f2dc7fab5   8 months ago    42.5MB
k8s.gcr.io/k8s-dns-kube-dns-amd64          1.14.8    80cc5ea4b547   8 months ago    50.7MB
k8s.gcr.io/pause-amd64                     3.1       da86e6ba6ca1   9 months ago    746kB
k8s.gcr.io/kubernetes-dashboard-amd64      v1.8.1    e94d2f21bc0c   9 months ago    121MB
gcr.io/k8s-minikube/storage-provisioner    v1.8.1    4689081edb10   10 months ago   80.8MB
$ sudo crictl -r unix:///var/run/crio/crio.sock images
IMAGE                                      TAG                 IMAGE ID            SIZE
docker.io/kubernetes/pause                 latest              f9d5de0795395       251kB
gcr.io/k8s-minikube/storage-provisioner    v1.8.1              4689081edb103       80.8MB
k8s.gcr.io/coredns                         1.1.3               b3b94275d97cb       46.2MB
k8s.gcr.io/coredns                         1.2.2               367cdc8433a45       39.5MB
k8s.gcr.io/etcd-amd64                      3.2.18              b8df3b177be23       219MB
k8s.gcr.io/heapster-amd64                  v1.5.3              f57c75cd7b0aa       75.3MB
k8s.gcr.io/heapster-grafana-amd64          v4.4.3              8cb3de219af7b       155MB
k8s.gcr.io/heapster-influxdb-amd64         v1.3.3              577260d221dbb       12.8MB
k8s.gcr.io/kube-addon-manager              v8.6                9c16409588eb1       80.6MB
k8s.gcr.io/kube-apiserver-amd64            v1.11.0             214c48e87f58f       187MB
k8s.gcr.io/kube-controller-manager-amd64   v1.11.0             55b70b420785d       155MB
k8s.gcr.io/kube-proxy-amd64                v1.11.0             1d3d7afd77d13       99.6MB
k8s.gcr.io/kube-scheduler-amd64            v1.11.0             0e4a34a3b0e6f       57MB
k8s.gcr.io/kubernetes-dashboard-amd64      v1.8.1              e94d2f21bc0c2       121MB
k8s.gcr.io/pause-amd64                     3.1                 da86e6ba6ca19       747kB

Different etcd versions? But at least it is not trying to load the cached images into docker anymore...

@tstromberg
Contributor

@minikube-bot OK to test

@tstromberg
Contributor

@minikube-bot OK to test

@tstromberg
Contributor

Do you mind merging against master? The test results look really old.

@afbjorklund
Collaborator Author

@tstromberg : Will do, it has been lying around for quite some time (i.e. it was against k8s 1.10).

Should be smaller now with podman updated, but the config handling will need a rewrite...

This one has conflicts with 5d910e8 and ae9f4b2

It moved the loadConfig, but didn't move the saveConfig.

cluster.LoadConfigFromFile(viper.GetString(config.MachineProfile))

is now instead cfg.Load() - probably should move save there too?

@afbjorklund afbjorklund self-assigned this Jan 25, 2019
To not run out of memory due to loading the images, since it
does not have a daemon to serialize things (like Docker does)
@k8s-ci-robot k8s-ci-robot added size/S Denotes a PR that changes 10-29 lines, ignoring generated files. and removed size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Jan 25, 2019
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: afbjorklund

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Jan 25, 2019
}

if crio {
podmanLoad.Lock()
Contributor

What type of failure is this lock trying to prevent?

Unless it's something specifically terrible, I'd prefer as little state as possible here.

Collaborator Author

As per the commit message, it was running out of memory in the VM when running all the "load" commands in parallel. With docker, things are serialized / queued in the docker daemon. But with podman, it will actually try to run all of them at once. So I had to introduce a lock, for the command to succeed. It's mostly I/O-bound anyway, so you don't lose too much time by it.
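
The locking described above can be sketched in Go. The names here (loadImage, the runner callback) are illustrative stand-ins, not minikube's actual code; only the package-level mutex around the cri-o path mirrors the change in this PR:

```go
package main

import (
	"fmt"
	"sync"
)

// podman has no daemon to queue work (unlike dockerd), so running every
// "podman load" at once can exhaust the VM's memory. A package-level mutex
// serializes the loads on the client side.
var podmanLoad sync.Mutex

// loadImage runs one image load, taking the lock only for the cri-o/podman path.
func loadImage(file string, crio bool, run func(cmd string) error) error {
	if crio {
		podmanLoad.Lock()
		defer podmanLoad.Unlock()
		return run("sudo podman load -i " + file)
	}
	// dockerd serializes concurrent loads itself, so no client-side lock is needed.
	return run("docker load -i " + file)
}

func main() {
	var wg sync.WaitGroup
	for _, f := range []string{"/tmp/pause-amd64_3.1", "/tmp/etcd-amd64_3.1.12"} {
		wg.Add(1)
		go func(f string) {
			defer wg.Done()
			// Stand-in runner that just prints the command instead of ssh-ing.
			loadImage(f, true, func(cmd string) error {
				fmt.Println("run:", cmd)
				return nil
			})
		}(f)
	}
	wg.Wait()
}
```

The goroutines still start concurrently (as minikube's loader does), but the mutex ensures only one podman load command is in flight at a time.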

@afbjorklund
Collaborator Author

afbjorklund commented Jan 28, 2019

@tstromberg : is something more needed here?
(for --cache-images with --container-runtime=cri-o)

@tstromberg
Contributor

@minikube-bot OK to test

@tstromberg tstromberg merged commit 8d304ee into kubernetes:master Jan 30, 2019