build fails with filesystem permission error: failed to write "security.capability" attribute #2201
Got the Dockerfile:
Got a reproduction in plain Kubernetes (i.e. without the Strimzi install, I think):
If it helps, we experienced the same thing. We added the following config to the runner toml. This has resolved the issue, although we still have to run kaniko as root :(
oh, oh, OH YEAH!
@thurcombe thanks! I guess I should try those one at a time to get a minimal set.
@andreas-ibm glad it helped. We started on this journey after having a hard time with a customer who migrated to OKD4, which by default does not allow containers to run as root. Step by step we got there, first with an additional SCC to permit UID 0, and then we quickly came across the caps issue. This issue also talks about the required caps: #778. If you want to slim that list, you might get away with FOWNER and DAC_OVERRIDE only (see the sketch below). For anyone who does need to do this in an OCP4/OKD4 cluster, it's a combination of an additional SCC to permit UID 0 plus the required caps; then grant your service account access to the SCC and update your toml/securityContext. In an ideal world we would not have to run this as root, but that's another story for another day :)
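For illustration, a minimal sketch of what the slimmed-down securityContext side of that might look like on the build pod, assuming a plain Kubernetes-style pod spec. The pod name and image tag are placeholders, and whether FOWNER plus DAC_OVERRIDE alone suffice will depend on your base image:

```yaml
# Hypothetical sketch: kaniko build pod with the trimmed capability set
# discussed above (FOWNER + DAC_OVERRIDE). Names and tags are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: kaniko-build            # placeholder
spec:
  restartPolicy: Never
  containers:
    - name: kaniko
      image: gcr.io/kaniko-project/executor:debug
      securityContext:
        runAsUser: 0            # still root, as the comment laments
        capabilities:
          drop: ["ALL"]
          add: ["FOWNER", "DAC_OVERRIDE"]
```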
Been stuck on this issue for a while myself; I was able to get around it by adding the flags:
Came here exactly for this. We're trying to build containers via GitLab Runners on OpenShift 4, without using Docker-in-Docker and root. Eventually we managed using the instructions on https://docs.gitlab.com/ee/ci/docker/using_kaniko.html#building-a-docker-image-with-kaniko and @DerrickKnighton's solution, and settled on the setup below, which is pretty reliable for us. Oddly enough, this did not make any change, so we ended up not using it:

```toml
[runners.kubernetes.build_container_security_context.capabilities]
  add = ["CHOWN", "SETUID", "SETGID", "FOWNER", "DAC_OVERRIDE"]
```

Current setup looks like:

```yaml
stages:
  - build

build:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:v1.9.0-debug
    entrypoint: [""]
  script:
    - /kaniko/executor
      --context "${CI_PROJECT_DIR}"
      --dockerfile "${CI_PROJECT_DIR}/Dockerfile"
      --destination "${CI_REGISTRY_IMAGE}:${CI_COMMIT_TAG}"
      --ignore-path=/usr/bin/newuidmap
      --ignore-path=/usr/bin/newgidmap
  tags:
    - openshift
```

OpenShift config.toml:

```toml
[[runners]]
  executor = "kubernetes"
  [runners.kubernetes]
    [[runners.kubernetes.volumes.empty_dir]]
      name = "empty-dir"
      mount_path = "/"
      medium = "Memory"
```

Sadly enough, as @thurcombe mentions, we did have to add the anyuid SCC to the gitlab runner service account. The container does not seem to run under root, though, but without that SCC we still got the same error.
We have temporarily fixed this by locking our Kaniko version. I opened an issue to document this:
This is happening with v1.20.0-debug as well. However, v1.7.0-debug does work.
Observed with:
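If you need the same workaround, here is a minimal sketch of pinning the executor image in .gitlab-ci.yml, using the v1.7.0-debug tag that a comment above reports as working; substitute whatever tag is known-good for you:

```yaml
# Pin kaniko to a known-good release instead of a floating tag.
build:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:v1.7.0-debug  # reported working above
    entrypoint: [""]
```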
Actual behavior
Error during kaniko executor:
Expected behavior
Unpacking to succeed
To Reproduce
Steps to reproduce the behavior:
1. kubectl create -f 'https://strimzi.io/install/latest?namespace=kafka' -n kafka
2. kubectl apply -f https://strimzi.io/examples/latest/kafka/kafka-persistent-single.yaml -n kafka
3. Apply a KafkaConnect resource with a build section, as sketched below (yes, you'd technically need an Oracle container in there too, but it didn't get that far so it's not really needed to reproduce)
4. Get logs of the build for the above output: kc logs debezium-oracle-cluster-connect-build -f
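A hypothetical sketch of the kind of KafkaConnect resource step 3 refers to, based on the Strimzi build API; the cluster name matches the build pod above, but the bootstrap address, output image, and plugin artifact URL are placeholders:

```yaml
# Hypothetical: a Strimzi KafkaConnect with a build section, which creates
# the <name>-connect-build pod that runs kaniko. Registry and artifact
# URLs are placeholders.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: debezium-oracle-cluster
spec:
  replicas: 1
  bootstrapServers: my-cluster-kafka-bootstrap:9092        # placeholder
  build:
    output:
      type: docker
      image: registry.example.com/debezium-connect:latest  # placeholder
    plugins:
      - name: debezium-oracle-connector
        artifacts:
          - type: tgz
            url: https://example.com/debezium-connector-oracle.tar.gz  # placeholder
```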
Additional Information
I'd love to, but I don't know enough to provide it. Sorry; I'll continue digging and see if I can append something.
Again, this is all I've got at the moment.
I suspect this is down to me using CRI-O, but I'm happy to be proven wrong. I was able to build an equivalent image (I think) using podman. I hadn't heard of Kaniko until I hit this issue, so I'm still getting up to speed; I suspect that to create a standalone reproduction I'd need to replicate the Dockerfile and then find a way to invoke Kaniko from plain Kubernetes YAML, as sketched below.
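For what it's worth, here is a minimal sketch of invoking kaniko directly from plain Kubernetes, adapted from the pattern in the kaniko README; the context URL, destination, and registry-credentials secret name are placeholders:

```yaml
# Hypothetical standalone kaniko pod. Context, destination, and the
# registry-credentials secret are placeholders to be replaced.
apiVersion: v1
kind: Pod
metadata:
  name: kaniko-repro
spec:
  restartPolicy: Never
  containers:
    - name: kaniko
      image: gcr.io/kaniko-project/executor:latest
      args:
        - "--dockerfile=Dockerfile"
        - "--context=git://github.com/example/repo.git"    # placeholder
        - "--destination=registry.example.com/test:latest" # placeholder
      volumeMounts:
        - name: docker-config
          mountPath: /kaniko/.docker
  volumes:
    - name: docker-config
      secret:
        secretName: regcred                                # placeholder
        items:
          - key: .dockerconfigjson
            path: config.json
```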
This is related to an issue I raised with Strimzi earlier today: strimzi/strimzi-kafka-operator#7179 (I wasn't sure whether the behaviour was down to how Strimzi was invoking Kaniko or not).
Triage Notes for the Maintainers
(issue-template triage table; the only legible item references the --cache flag)