Kaniko seems to create full file system snapshots after each stage, leading to failed Gitlab CI pipeline. #2444
Comments
Have you tried with …?
Yes, that was one of the first things I tried. For the original build it did not make a difference. I have not tried it again with the dummy build, since I assumed it would only reduce the impact but not solve the underlying problem.
I just tested the …
The gitlab shared runners run on a VM with only 4GB of memory. This is the cause of your crash. Try a larger runner VM: https://docs.gitlab.com/ee/ci/runners/saas/linux_saas_runner.html#machine-types-available-for-private-projects-x86-64
If you are using private runners, please post the memory limit configured for the gitlab build pod. It should be in your gitlab runner config.toml file (hopefully in your helm values.yaml file).
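For anyone looking for it: with the Kubernetes executor that limit typically lives in the runner's config.toml under the runners.kubernetes section. The values below are placeholders, not taken from this thread:

```toml
[[runners]]
  executor = "kubernetes"
  [runners.kubernetes]
    # Resources given to the build pod that runs kaniko; the job is
    # OOMKilled once kaniko's memory use exceeds memory_limit.
    memory_request = "1Gi"
    memory_limit   = "4Gi"
```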
The test uses the python:3.11 image, which has a size of 340.88 MB. When run locally with docker, the resulting image has the same size. I would expect the test to work with 4GB of memory.
I've faced the same issue with my build on Kubernetes. I'm using a git context, and kaniko uses so much memory that the build job gets OOMKilled. I tried adding … The logs are below:
Hello everyone! I found a solution: add the --single-snapshot option to kaniko. See https://stackoverflow.com/questions/67748472/can-kaniko-take-snapshots-by-each-stage-not-each-run-or-copy-operation
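A minimal sketch of how that flag can be passed to the executor in a CI job script; the context, dockerfile, and destination values are illustrative placeholders, not taken from this thread:

```sh
/kaniko/executor \
  --context "${CI_PROJECT_DIR}" \
  --dockerfile "${CI_PROJECT_DIR}/Dockerfile" \
  --destination "${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHA}" \
  --single-snapshot
```

Per the flag's documentation, --single-snapshot makes kaniko take a single filesystem snapshot at the end of the build instead of snapshotting repeatedly during it, which is what avoids the memory blow-up described above.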
Yep, indeed.
I can confirm that too! Additionally, I found that while the conda environment is retrieved from cache, the pip environment is not, even when it's unchanged:
I am trying to use Kaniko to build a multi-stage image within a Gitlab-CI pipeline. The pipeline crashes with the following, rather unhelpful message:
ERROR: Job failed: pod "runner-<id>" status is "Failed"
right after Kaniko logs "Taking snapshot of full filesystem...".
The reason seems to be that the memory limit of the gitlab-pod is reached at some point and kubernetes auto-kills it. However, when built locally using regular docker, the resulting image is ~4GB, and the gitlab-pod's memory limit should be much higher than that. This got me thinking, and I created the following dockerfile to debug the problem:
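A minimal sketch of such a dummy Dockerfile, assuming stages named dummy_stage_N chained from python:3.11 as described below (the actual test appears to have used roughly 100 of them):

```dockerfile
# Each stage adds no files, so the final image should be no larger than the
# base image; the only purpose is to force one snapshot per stage.
# Shortened here to 5 stages; the test described below chains ~100 of them.
FROM python:3.11 AS dummy_stage_0
FROM dummy_stage_0 AS dummy_stage_1
FROM dummy_stage_1 AS dummy_stage_2
FROM dummy_stage_2 AS dummy_stage_3
FROM dummy_stage_3 AS dummy_stage_4
FROM dummy_stage_4 AS dummy_stage_5
```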
When I build the image locally, the resulting size is exactly that of the base image. Docker takes roughly a minute to reach level 100. Using kaniko, the build fails after ~11 minutes with the aforementioned error while taking the snapshot of dummy_stage_47.
The following parameters were used for the test:
I guess Kaniko really creates snapshots of the full filesystem after each stage, which results in huge memory consumption. Is this the expected behavior?