Running Kaniko in another Container #1757
From playing around with Kaniko, I can confirm that adding the binaries into another container does work, at least for a simple Docker build. Here is the Dockerfile I am working with:
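(The Dockerfile itself was not captured in this thread; a minimal sketch of the approach being described, with the base image and env values as assumptions, would be:)

```dockerfile
# Hypothetical sketch: copy the kaniko binaries and support files
# from the official executor image into another base image.
FROM gcr.io/kaniko-project/executor:latest AS kaniko

FROM debian:bullseye-slim
# /kaniko holds the executor binary, credential helpers, and docker config
COPY --from=kaniko /kaniko /kaniko
ENV PATH="/kaniko:${PATH}" \
    DOCKER_CONFIG=/kaniko/.docker/
```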
Using that Dockerfile, I was able to build and publish an image to DockerHub. @tejal29 Any idea why this isn't a supported use case of Kaniko?
Hey @james-crowley, glad it worked for you. The reason we say that is that kaniko needs additional files to work, so on its own the binary will not work. Since you copied those in, it seems fine. There is also a slight risk that files in the new image end up in the image you are trying to build, where they aren't supposed to be. kaniko knows to exclude volume mounts and anything in the
@priyawadhwa Thanks for the quick response. Seems like we got past the first blocker in terms of getting all the files kaniko needs to work.
This seems concerning. Are there any open bugs/issues for this? Can kaniko ignore the
I believe the answer to both of these questions is no. Is there a reason you need to use the other base image? Would it be feasible for you to move the files you need from that image into the
CircleCI has a self-hosted runner, which can be run in a couple of different ways. Both the Docker and Kubernetes offerings need the ability to build Docker images. I wanted to extend the runner image with Kaniko so that users can build Docker images while having the runner installed on Kubernetes. I could use Kaniko's base image, and I can shift the runner agent config files to be inside of the

Why does kaniko not exclude files outside the build context in which the image is being built? Normally, if we use Docker to build an image, we can define a build context and limit the scope of what the build can see/utilize.
My understanding is that this (running kaniko in another docker container) is not supported because kaniko unpacks the base image into /, so files and directories in / of the image that runs kaniko get overwritten with the unpacked files and directories. This can lead to unexpected results.
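A toy illustration of that failure mode, using plain tar in a scratch directory as a stand-in for kaniko's layer extraction (all paths and file names here are invented for illustration):

```shell
# Toy model: $root stands in for the container's /, already holding a tool
# from the "outer" image; unpacking a base-image tarball into it merges with
# (and on collisions, overwrites) those files.
set -eu

root=$(mktemp -d)                        # pretend this is /
mkdir -p "$root/usr/bin"
echo tool > "$root/usr/bin/runner-tool"  # file belonging to the outer image

base=$(mktemp -d)                        # pretend base-image rootfs
mkdir -p "$base/usr/bin"
echo sh > "$base/usr/bin/sh"

tarball=$(mktemp)
tar -C "$base" -cf "$tarball" .          # "pull" the base image as a tarball
tar -C "$root" -xf "$tarball"            # "unpack" it into the fake /

ls "$root/usr/bin"                       # outer-image and base-image files now mixed
```

Any path the base image shares with the outer image is clobbered, which is why tools like curl or git can vanish from the running container after a build.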
@james-crowley Have you tried to build a multi-stage Dockerfile with your image?
@meskill
@vladaurosh Thanks for pointing that out. Is there any workaround for this? As I understand it, we can put every required tool in the protected directory
@meskill I have not tried a multi-stage Dockerfile. Do you have an example you want me to try? As for files being included in the built Docker image, I am not seeing that happen. When I built my simple nginx test Dockerfile, none of the additional files I added to the base container were included. As @priyawadhwa mentioned, this might have been the case at some point, but my testing shows that, at least for my example, no additional files were added to the built image.
@james-crowley A simple build like this:

```dockerfile
FROM node:16 as build

FROM build
WORKDIR /app
RUN npm --version
CMD [ "node" ]
```

After a successful build, try to call any command inside your container: curl, git, etc.
@meskill On the other hand, I was able to build a docker image based on alpine, but I guess that was because the running container was based on alpine as well. Personally, I think this is not reliable even with single-stage Dockerfiles. There have been a couple of similar discussions here, and a suggestion to make kaniko do its work in some temp directory instead of /. But I guess that would require a lot of changes, and given that development of kaniko has been slow lately, who knows when and if that will happen.

As for a workaround: for me it works to use proot (a chroot alternative) to create a chroot-ed environment where kaniko will unpack the base image(s). So far it works well even with multi-stage Dockerfiles, but it requires the container to be started with the SYS_PTRACE capability.
I've used a custom Kaniko image for years, built off the

@priyawadhwa I'm interested in whether Kaniko could be updated to work correctly on CI/CD systems where the runners are containers expected to run the whole workflow (e.g. GitHub Actions), without DinD?
@stevehipwell You can already use kaniko in GitLab CI with the Docker executor without DinD. I'm not sure what you mean by "runners are containers expected to run the whole workflow"?
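For reference, the usual DinD-free GitLab CI setup looks roughly like this (a sketch following the kaniko documentation; the destination tag variable is an assumption):

```yaml
# .gitlab-ci.yml sketch: run the kaniko executor directly as the job image,
# no Docker daemon required.
build:
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - /kaniko/executor
      --context "${CI_PROJECT_DIR}"
      --dockerfile "${CI_PROJECT_DIR}/Dockerfile"
      --destination "${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHORT_SHA}"
```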
@dHannasch I know it works fine in GitLab; that was my example of a working cloud-native solution (I've been using it for years). What I want is to be able to use Kaniko in a GitHub Actions self-hosted runner running on Kubernetes. Unlike the GitLab runner, which is an orchestrator that creates containers to run stages, the GitHub self-hosted runner is a container that is expected to run a whole workflow; as it stands, without DinD this can't run Kaniko.
I want the same thing: to add kaniko to another docker image. For now I can successfully run multi-stage builds inside another docker container using chroot plus the kaniko binaries:

```
$ tree kaniko -a
kaniko
├── .docker
│   └── config.json
├── docker-credential-ecr-login
├── docker-credential-gcr
└── executor
```

and I created this bash script (`kaniko-build`):

```bash
#!/bin/bash
# Dockerfile path, relative to the current directory
dockerfile="$1"
# destination image reference
destination="$2"
context="$(dirname "${dockerfile}")"
# prepare the chroot
mkdir workdir
cp -r kaniko workdir
# assuming you have .docker/config.json inside the kaniko directory
export DOCKER_CONFIG=/kaniko/.docker/
mkdir -p workdir/kaniko/workspace
cd workdir
mkdir dev
mknod -m 666 dev/null c 1 3
mknod -m 666 dev/zero c 1 5
mkdir -p proc/self
cp /proc/self/mountinfo proc/self/
mkdir etc
cp /etc/resolv.conf etc/
cp /etc/nsswitch.conf etc/
mkdir -p etc/ssl/certs
cat /etc/ssl/certs/* > etc/ssl/certs/ca-certificates.crt
# copy the build context into the chroot
cp -r "../${context}/." kaniko/workspace
# (or hardlink each file, mirroring the tree, if on the same fs, to speed this up)
chroot . ./kaniko/executor -f /kaniko/workspace/Dockerfile --context=/kaniko/workspace/ --force --destination="$destination" --cleanup
```

Usage: let's assume you have a directory with a lot of projects with Dockerfiles inside it, and the kaniko binaries next to them. You have already built the projects using some external tool like sbt and just want to assemble the docker images:

```
$ ls
project1 project2 project3 kaniko kaniko-build
```

Build with:

```
./kaniko-build project1/Dockerfile repo/name:0.0.1
```

For me this works, which raises the question from #107: why not chroot? It looks like it doesn't need the full contents of the proc, dev, and sys directories.
I can confirm this; it seems that a lot of files contained in my custom builder image also get copied into the target image, which is very concerning and shouldn't happen in my opinion. However, I doubt we'll receive any official support on this, because it's an edge case and not recommended.
Can this be done when using Kaniko? That is, can Kaniko be used to build a Kaniko-based image? I think not. I just tried it, and it fails at the first copy command with the following error:
It doesn't matter which version of the image is used, but rather the names of the executables. This means that one needs to use something other than Kaniko to build an image that uses the Kaniko executor (probably the other

The reason I need a custom Kaniko image is the poor integration with GitLab. Currently I am unable to find a way to use Kaniko to build an image from a repo that has submodules requiring credentials. For that I need to do the credentials setup as well as the cloning manually. All of this requires calling
Through trial and error, I found that the error occurs when I copy from the kaniko image to the same path that is

This is the failing version:
I got this error in GitLab:
I tried to build the image (the Dockerfile above) with the kaniko image on my laptop, and after the second build I noticed that the

I tried with this command and ran

So my workaround is to copy the
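The workaround being described appears to be copying the kaniko files to a path other than the one kaniko reserves for itself; a sketch of that idea (the alternative path and base image are my assumptions):

```dockerfile
FROM gcr.io/kaniko-project/executor:debug AS kaniko

FROM alpine:3.18
# Copy to /opt/kaniko instead of /kaniko: kaniko treats /kaniko as its own
# working directory, so a COPY targeting that same path can misbehave.
COPY --from=kaniko /kaniko /opt/kaniko
ENV PATH="/opt/kaniko:${PATH}" \
    DOCKER_CONFIG=/opt/kaniko/.docker/
```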
The job in the
Using this same path for my

Any suggestion on how to circumvent that issue would be much appreciated.
All I can recommend is to use the

One last thing to check is whether you are pushing to the right repo. Using a group or personal token, you can also push to other repos. But you cannot PUSH to a group container registry, as far as I know; all the images there are accumulated from the projects in that group. I do believe you can do a PULL.
In the README (kaniko/README.md, line 14 in 7e3954a) it states:

Is there any reason for this statement? I am not sure why copying the compiled binaries and the correct folders would cause Kaniko not to work.
@priyawadhwa I saw you made this addition a while back. Do you have any insights on why this might not work?
I was hoping to use Kaniko in another Docker container without having to extend gcr.io/kaniko-project/executor.