Directories provisioned by hostPath provisioner are only writeable by root #1990
Perhaps an annotation for the PVC that sets ownership? Or mode?
I'm convinced now this is because the process runs by default with umask 022, and so a requested mode of 0777 gets applied as 0755 instead. We could drop the umask to 0000 just before this call and then restore it afterwards.
What about doing something like this, where we'd set the permissions via a call to chmod after creation, rather than setting and resetting the process-level umask?
Thanks for figuring this out, by the way!
That works too, and might be better than fiddling with umask (since umask is process-wide, afaik)! I'll amend the patch later today.
@dlorenc np, and thanks for reviewing the PR so quickly!
This was supposed to fix the issue, but with the following setup and the following PVC template, the host directories are still created with the following mode.
I'm seeing the same issue.
@dlorenc @yuvipanda would it be possible to reopen this issue?
I suspect I've bumped into the same issue seen by @yuvipanda. The process I followed is slightly different, but the end results are the same: a hostPath volume is created under /tmp/hostpath-provisioner that is writable only by root.
What happened:
Ultimately the Redis pod failed to start up. The pod logs contained something like this:
The Docker image used by the Redis Helm chart launches the Redis daemon as a non-root uid. The Redis pod uses a persistent volume that ultimately maps to a directory on the minikube VM, and that directory is created with permissions that allow only root to write.
I don't know what the best option for a fix would be - although I'm not sure this is a bug. There has been a fair amount of debate in other issues (see kubernetes/kubernetes#2630, kubernetes/charts#976, and others) that makes me hesitant to advocate for a particular approach. Allowing some customisation of the provisioned directory's ownership or mode might be one option.
The issue is that the volume provisioner isn't really responsible for the mounted volume's permissions; the kubelet is. The same problem exists for basically all external volume provisioners that don't have a mounter implementation in core. Local volumes are, I think, the only supported volume type with a provisioner outside of core but a mounter implemented in core. I don't know what the best option is, but if local volumes get better support, perhaps minikube should switch to the local volume provisioner instead of the hostpath-provisioner, and that may resolve most of these issues. No matter what, even if the hostpath provisioner can set proper permissions (777 by default, or by allowing the storageClass to specify them), the user/group of the volume will still not match fsGroup, which can still break things that assume a particular user.
Yup, thank you @chancez. Your summary confirms what I've gleaned from the K8s docs here. I'm thinking of submitting a PR for the Redis Helm chart that would allow consumers to override the runAsUser and fsGroup settings - but that feels like a hack. I don't have enough experience with this sort of thing to have a feel for the right approach to this scenario.
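For reference, the settings in question live under the pod-level securityContext. A minimal sketch with illustrative names, image, and uid values (these are assumptions, not the Redis chart's actual defaults):

apiVersion: v1
kind: Pod
metadata:
  name: redis-example          # hypothetical name
spec:
  securityContext:
    runAsUser: 1001            # illustrative non-root uid
    fsGroup: 1001              # per the discussion above, not honored for hostPath-provisioned volumes
  containers:
  - name: redis
    image: redis:7             # illustrative image
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: redis-data    # hypothetical claim name

As noted above, the kubelet applies fsGroup only to volume types it mounts itself, so even with these overrides the hostPath directory's ownership stays root:root.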
I think being able to set those values will help in many cases. I use that to keep Jenkins from failing on minikube when using PVCs, but I also have serverspec tests to validate that Jenkins comes up correctly, and currently, while things work, my tests fail in minikube because the owner/group on the files is root - so it's not a silver bullet.
It is required to work around an issue with PVCs on minikube where mounted folders are not writable for non-root users (kubernetes/minikube#1990).
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
I found a workaround for Rancher Kubernetes on this same issue, and I found my way here through a Google search. In case it helps others, here is the workaround I used. Create an init container that mounts the volume one level above the directory you want to write to (I want /data/myapp/submission, but I create the volume at /data/myapp); then, in that container's command, create the submission directory and chown it to the user's numeric uid. The account and uid do not need to exist in the init container. When the main container(s) come up, the directory you wish to write in will have the correct ownership, and you can use it as expected. A sketch of such an init container is shown below.
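A minimal sketch of the init container described above; the container name, image, volume name, and uid 1000 are assumptions:

initContainers:
- name: fix-ownership          # hypothetical name
  image: busybox:1.36          # assumption: any small image with chown works
  command:
  - sh
  - -c
  - mkdir -p /data/myapp/submission && chown 1000:1000 /data/myapp/submission
  volumeMounts:
  - name: data                 # assumes the pod's volume is named "data"
    mountPath: /data/myapp     # mounted one level above the target directory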
Originally I had tried chown-ing the mount itself, and not a directory below - the behavior in that instance was odd: it acted as if it could write files, but they silently disappeared after creation.
Observed this issue today; there doesn't seem to be any workaround other than init containers.
Also bumped into this "Permission denied" error when mounting a hostPath PersistentVolume into a container that runs as a non-root user. This isn't an issue with vanilla Docker and a named volume on my local host.
Should this be reopened?
I think so, @AkihiroSuda - the only workaround I found was to grant my USER sudo privileges in order to chown the mount at runtime, which pretty much negates the point of using a non-root user.
I still have this issue with minikube. The fsGroup configuration does not apply, and the volume I mounted using hostPath is still owned by root:root. I have no choice but to go through initContainers to change the owner with a plain old chown.
Hi, I'm also running into this issue with some Helm charts that explicitly forbid running their containers as root (e.g. bitnami/kube-prometheus). Could this be reopened?
Also having this issue on a multi-node minikube cluster. Funnily enough, I found that the problem does not occur if the pod gets scheduled on the control-plane node.
Please reopen.
Everything else works fine, but the pod won't start. Does anyone know a fix? The pod reports the following error:
As a workaround I am deploying a DaemonSet which mounts the hostpath-provisioner directory and sets all subdirectories to 777 every second.

apiVersion: v1
kind: Namespace
metadata:
  name: minikube-pv-hack
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: minikube-pv-hack
  namespace: minikube-pv-hack
spec:
  selector:
    matchLabels:
      name: minikube-pv-hack
  template:
    metadata:
      labels:
        name: minikube-pv-hack
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: minikube-pv-hack
        image: registry.access.redhat.com/ubi8:latest
        command:
        - bash
        - -c
        - |
          while : ; do
            chmod 777 /target/*
            sleep 1
          done
        volumeMounts:
        - name: host-vol
          mountPath: /target
      volumes:
      - name: host-vol
        hostPath:
          path: /tmp/hostpath-provisioner/default
Is there any solution here? I am encountering the same problems.
Can this be reopened and properly fixed, please? After almost 7 years, there's still no resolution, and I can still see this happening when I deploy Helm charts on a minikube cluster with 3 nodes (control plane and workers). Interestingly, this does not happen if I deploy my Helm charts on a minikube cluster with 1 node (just the control plane).
Is this a BUG REPORT or FEATURE REQUEST? (choose one): bug report
Please provide the following details:
Environment:
- Minikube version (use minikube version): v0.21.0
- VM driver (use cat ~/.minikube/machines/minikube/config.json | grep DriverName): virtualbox
- ISO version (use cat ~/.minikube/machines/minikube/config.json | grep -i ISO or minikube ssh cat /etc/VERSION): minikube-v0.20.0.iso

What happened:
Since we don't want to allow escalating privileges in the pod, we can't use the PVC mount at all.
What you expected to happen:
Some way of specifying in the PVC what uid / gid the hostPath should be owned by, so we can write to it.
How to reproduce it (as minimally and precisely as possible):
kubectl apply -f the following file:
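A minimal manifest of the shape described - a PVC plus a pod running with a non-root runAsUser and fsGroup - might look like this sketch; all names and uid values are illustrative assumptions:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc               # hypothetical name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: test-pod               # hypothetical name
spec:
  securityContext:
    runAsUser: 1000            # illustrative non-root uid
    fsGroup: 1000
  containers:
  - name: writer
    image: busybox:1.36
    command: ["sh", "-c", "touch /data/probe && sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: test-pvc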
It fails with the following output:
If you set the fsGroup and runAsUser to 0, it succeeds.