optimize memory usage in image load with Docker client integration #21103
Conversation
Can one of the admins verify this patch?
Hi @medyagh
@yankay So with this we don't need to save the image before loading? This will not help the case when we need to load into multiple clusters. Where is the image saved? How is it cleaned up? Regardless, loading the entire image into memory does not make sense in any case, so this looks like a good change.
Thank you @yankay, do you mind putting a before/after comparison for this PR in the description, to prove its effectiveness?
/ok-to-test
Hi @nirs, the image is saved at
Thanks @medyagh. Memory usage has indeed been reduced, but the execution time increased significantly. I'll investigate the cause. This resulted in multiple repeated operations, so the PR has been changed to another solution that uses the Docker client directly.
Signed-off-by: Kay Yan <kay.yan@daocloud.io>
kvm2 driver with docker runtime
Times for minikube start: 54.2s 53.0s 52.8s 52.7s 53.2s
Times for minikube ingress: 15.6s 16.1s 14.6s 15.2s 16.1s

docker driver with docker runtime
Times for minikube (PR 21103) start: 24.6s 23.6s 23.0s 24.5s 26.9s
Times for minikube ingress: 13.3s 13.8s 12.3s 12.3s 13.8s

docker driver with containerd runtime
Times for minikube start: 20.9s 22.6s 21.3s 24.4s 22.6s
Times for minikube ingress: 22.8s 22.8s 22.8s 22.9s 38.8s
@yankay Results look very good! Can you compare with the split process? We still have the issue of cleanup: keeping the image in the cache directory is not good, since the user does not know that the image was added and nobody will clean it up. But I think this issue is not from your change; it was already there.
Thanks, the performance information has been added to the PR description :-)
Thank you very much @yankay for providing the before/after benchmarks for this PR; sounds amazing. I look forward to seeing more contributions from you.
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: medyagh, yankay

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing
@medyagh this broke image load on macOS:

% time minikube image load docker.io/library/alpine:latest -p c1
❌ Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "/Users/nir/.minikube/cache/images/arm64/docker.io/library/alpine_latest": docker save: inspect image docker.io/library/alpine:latest via docker client: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ 😿 If the above advice does not help, please let us know: │
│ 👉 https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ Please also attach the following file to the GitHub issue: │
│ - /var/folders/v1/lcvpk6v567x43cfsjqg_97t40000gn/T/minikube_image_dda93f623f5c27f0f7578776de8bc3f360c78630_0.log │
│ │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
minikube image load docker.io/library/alpine:latest -p c1 0.06s user 0.04s system 4% cpu 2.054 total

I guess this change introduces a dependency on docker. Before this change we did not assume a docker or podman client. We don't have automated tests on macOS, so this is not easy to test. To ensure we don't have regressions, all changes must be tested on macOS.
Before this change:
I'm sorry to trouble everyone. 😥 |
…ubernetes#21103) Signed-off-by: Kay Yan <kay.yan@daocloud.io>
Fix: #17785
This issue occurs because bufferedOpener attempts to load the entire layer into memory. See https://github.com/google/go-containerregistry/blob/main/pkg/v1/daemon/image.go#L71
This PR addresses high memory consumption during minikube image load operations by using the Docker client directly.
Comparative Analysis: Minikube Image Load Performance
Performance Metrics Summary
Scenarios compared: minikube image load, ./out/minikube-linux-amd64 (this PR), docker save + minikube image load.