
Kaniko - Reduces requirements for CI privileges to build images and may give a performance boost #1225

Closed
consideRatio opened this issue Apr 9, 2019 · 5 comments

Comments

@consideRatio
Member

This may be of relevance to us: we could build Docker images using Kaniko instead of Docker itself. Using Docker comes with a lot of complexity, especially when done from within a Docker container. With Kaniko, we avoid some performance penalties and the need for escalated privileges.
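For context, Kaniko runs as an ordinary unprivileged pod and is driven entirely by command-line flags. Below is a minimal sketch of such a pod; every name, the git context URL, the registry destination, and the secret name are placeholders for illustration, not BinderHub's actual configuration.

```yaml
# Hypothetical Kaniko build pod. Kaniko reads registry credentials from
# /kaniko/.docker/config.json, so a registry secret is mounted there.
apiVersion: v1
kind: Pod
metadata:
  name: kaniko-build
spec:
  restartPolicy: Never
  containers:
    - name: kaniko
      image: gcr.io/kaniko-project/executor:latest
      args:
        - --context=git://github.com/example/repo.git
        - --dockerfile=Dockerfile
        - --destination=registry.example.com/example/repo:latest
      volumeMounts:
        - name: docker-config
          mountPath: /kaniko/.docker
  volumes:
    - name: docker-config
      secret:
        secretName: registry-credentials
```

The key point is that nothing here requires `privileged: true` or a mounted Docker socket; the pod only needs network access to the git context and the registry.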

Reference posts

@manics
Member

manics commented Apr 9, 2019

Would this mean binderhub no longer needs to mount /var/run/docker.sock and could do everything through the k8s API?

@consideRatio
Member Author

consideRatio commented Apr 9, 2019

@manics yes, maybe - excellent consideration! I hadn't realized how much BinderHub uses Docker internally and how this could impact that.

Could you clarify what you meant by "do everything through the k8s API"?

What we could change mostly relates to the use of Docker within our own code: replacing Docker for building and publishing images, which currently escalates permission requirements, and perhaps also gaining a performance improvement by using Kaniko instead of Docker to pull dependent images, cache layers, and build and publish images.

So, Kaniko would only replace the use of Docker within our own code; it would not change how the k8s API pulls images for use by pods etc.

/cc: @jupyterhub/binder-team

@manics
Member

manics commented Apr 9, 2019

I should've worded it better... do all container work through an API (k8s for managing containers, the Docker registry API for pushing images).

I think it could lead to a cleaner BinderHub deployment. K8s provides a remote API which is, in principle, agnostic to the backend implementation. Directly using /var/run/docker.sock breaks that abstraction: you become dependent on how Kubernetes and Docker happen to be deployed. But this is all long-term "nice to have" stuff 😃.
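As an illustration of the registry-API half of this idea, the check "does this image:tag already exist?" needs no Docker socket at all, only an HTTP request against the registry's v2 endpoint. A minimal sketch, assuming a registry that speaks the standard v2 protocol; the registry URL and image name below are placeholders:

```python
# Sketch: querying a Docker registry over its HTTP API (v2) instead of
# going through the local Docker daemon socket.
from urllib.request import Request, urlopen
from urllib.error import HTTPError

def manifest_url(registry: str, name: str, tag: str) -> str:
    """Build the v2 manifest endpoint URL for an image tag."""
    return f"{registry}/v2/{name}/manifests/{tag}"

def image_exists(registry: str, name: str, tag: str) -> bool:
    """Return True if the registry reports a manifest for name:tag.

    A HEAD request avoids downloading the manifest body; 200 means the
    tag exists, 404 means it does not. Auth headers are omitted here.
    """
    req = Request(
        manifest_url(registry, name, tag),
        method="HEAD",
        headers={"Accept": "application/vnd.docker.distribution.manifest.v2+json"},
    )
    try:
        with urlopen(req) as resp:
            return resp.status == 200
    except HTTPError as err:
        if err.code == 404:
            return False
        raise
```

Pushing images works the same way in principle (the registry API also covers blob and manifest uploads), which is what would let BinderHub drop the socket mount entirely.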

@clkao
Contributor

clkao commented Apr 9, 2019

Another option is to use buildah. `buildah --storage-driver vfs bud` should be compatible with `docker build`, even when run within Docker.

@consideRatio
Member Author

I've not been very happy with Kaniko so far. I tried using it for a k8s-based GitLab CI worker to avoid privilege escalation, but it came with downsides such as very long build times, and certain Dockerfiles could not be built with Kaniko because not all Dockerfile statements are supported in the same way as with Docker.

ref: GoogleContainerTools/kaniko#875

I'll go ahead and close this at this point.

@clkao, what is your experience of buildah at this point btw?
