Using "local-path" in persistent volume requires sudo to edit files on host node? #1823

Closed
leeallen337 opened this issue May 23, 2020 · 11 comments

@leeallen337

Version:
k3s version v1.18.2+k3s1 (698e444)

K3s arguments:
Installed with curl -sfL https://get.k3s.io | sh -

Describe the bug
When running a persistent volume with a persistent volume claim using local-path, the files on the host node are read only and require sudo to edit

To Reproduce

// persistent-volume.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: foo
  labels:
    directory: some-storage
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 1Gi
  storageClassName: local-path
  local:
    path: /home/<username>/configuration
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - ubuntu
  persistentVolumeReclaimPolicy: Retain
// persistent-volume-claim.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: foo-claim
spec:
  storageClassName: local-path
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      directory: some-storage
// service.yaml

apiVersion: v1
kind: Service
metadata:
  name: foo-service
  labels:
    foo: baz
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  selector:
    foo: baz
  ports:
    - port: 8000
      targetPort: 8000
      protocol: TCP

// deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: foo-deployment
  labels:
    foo: baz
spec:
  replicas: 1
  selector:
    matchLabels:
      foo: baz
  template:
    metadata:
      labels:
        foo: baz
    spec:
      volumes:
        - name: foo-config
          persistentVolumeClaim:
            claimName: foo-claim
      containers:
        - name: foo
          image: <some image>
          ports:
            - containerPort: 8000
          volumeMounts:
            - name: foo-config
              mountPath: /config

Expected behavior
When I have files in the host node's /configuration directory and the deployment and persistent volume are not running, I can edit the files fine without sudo. I expected it to behave more like sharing files via volumes in Docker, where editing files on the host machine requires no extra privileges.

Actual behavior
Once I run the manifests and the /configuration directory is shared as a persistent volume, editing the files reports that they are read only, and writing and saving them requires sudo.

Additional context / logs
Let me know if this is actually the expected behavior. I also sanitized my manifests a little, so if anything is confusing let me know.

@leeallen337
Author

When I use kubectl exec -it to get into the actual container, I can modify the files without sudo.

@brandond
Member

Depending on how the containers you're running are configured, your pods will likely run either as root or as a user with a different UID than your account. Any files created by those users will not be editable by your user without sudo.
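If the goal is to have the container create files that your host account can edit directly, one option (a sketch only, not tested here) is to run the pod as that account's UID/GID via a pod-level securityContext. The 1000/1000 values below are an assumption (check with id -u / id -g), and whether the image still runs correctly as a non-root user depends on the image.

// deployment.yaml (pod template excerpt; UID/GID 1000 is an assumed example)
    spec:
      securityContext:
        runAsUser: 1000    # assumed host UID; check with `id -u`
        runAsGroup: 1000   # assumed host GID; check with `id -g`
        fsGroup: 1000      # group ownership applied to files on the mounted volume
      volumes:
        - name: foo-config
          persistentVolumeClaim:
            claimName: foo-claim
      containers:
        - name: foo
          image: <some image>
          volumeMounts:
            - name: foo-config
              mountPath: /config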

@leeallen337
Author

@brandond That makes a lot of sense. I did a kubectl exec -it into the running container and whoami returned root. After doing some research, it looks like there are only somewhat-okay workarounds for running the container as a non-root user, so I'm looking at other alternatives at the moment.

I installed the acl package and tried to give my user rwx access to the configuration directory, but that didn't work.

Are there some ways to give my user write access to the persistent volume on the host without having to use sudo? I did some research and a few options look like:

  1. chown -R $USER:$USER configuration/
  2. chmod -R 777 configuration/

I haven't yet tried the above but any help would be greatly appreciated 🙏

@brandond
Member

  1. If you chown it, then the files are writable by you but not the container user (if it runs as non-root)
  2. If you chmod 777, the files are now writable by everyone on the system (bad security practice).

The best answer is to exec into the pod to edit the files, but this is an anti-pattern as well. You don't normally edit files within pods or volumes by hand; you use some sort of automation to push changes out to your cluster.
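That said, the ACL route mentioned above can be a middle ground between chown and chmod 777, but it only helps for files created later if a default ACL is also set on the directory; without the default entry, files the container creates afterwards still won't carry your user's ACL entry. A rough sketch (untested here; same path as the PV above, requires the acl package):

# grant the host user rwx on the existing files (what was tried above)
sudo setfacl -R -m u:$USER:rwx /home/<username>/configuration
# also set a *default* ACL so files created later by the container user inherit it
sudo setfacl -R -d -m u:$USER:rwx /home/<username>/configuration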

@usrbinkat

usrbinkat commented Jun 8, 2020

I encountered this as well on a clean install of CentOS 8. I can work around it by either elevating privileges manually in the busybox create-pvc pod or intervening manually via chmod at the host level.

Reproducible by installing v1.18.3+k3s1 (96653e8d) via:
curl -sfL https://get.k3s.io | sh -
Then following the docs to test pvc creation:

https://rancher.com/docs/k3s/latest/en/storage/

Result:

root@one instance$ kubectl logs -n kube-system pod/create-pvc-f63f6e61-13ca-42a9-9e0f-3d3e37ed4919
mkdir: can't create directory '/data/pvc-f63f6e61-13ca-42a9-9e0f-3d3e37ed4919': Permission denied

Troubleshooting Reference

root@one instance$ kubectl get pvc
NAME             STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
local-path-pvc   Pending                                      local-path     36m
root@one instance$ kubectl describe persistentvolumeclaim/local-path-pvc
Name:          local-path-pvc
Namespace:     default
StorageClass:  local-path
Status:        Pending
Volume:
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-provisioner: rancher.io/local-path
               volume.kubernetes.io/selected-node: ensign
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Mounted By:    volume-test
Events:
  Type     Reason                Age                  From                                                                                               Message                                                                              
  ----     ------                ----                 ----                                                                                               -------                                                                              
  Normal   WaitForFirstConsumer  35m (x5 over 36m)    persistentvolume-controller                                                                        waiting for first consumer to be created before binding                              
  Normal   Provisioning          7m15s (x7 over 35m)  rancher.io/local-path_local-path-provisioner-6d59f47c7-xckpt_00cebef8-a948-11ea-8f14-aaa8b9d60bb9  External provisioner is provisioning volume for claim "default/local-path-pvc"       
  Warning  ProvisioningFailed    5m14s (x7 over 33m)  rancher.io/local-path_local-path-provisioner-6d59f47c7-xckpt_00cebef8-a948-11ea-8f14-aaa8b9d60bb9  failed to provision volume with StorageClass "local-path": failed to create volume pvc-f63f6e61-13ca-42a9-9e0f-3d3e37ed4919: create process timeout after 120 seconds
  Normal   ExternalProvisioning  59s (x139 over 35m)  persistentvolume-controller                                                                        waiting for a volume to be created, either by external provisioner "rancher.io/local-path" or manually created by system administrator
root@one instance$ kubectl logs -n kube-system pod/local-path-provisioner-6d59f47c7-xckpt
ERROR: logging before flag.Parse: I0608 06:00:06.245965       1 controller.go:927] provision "default/local-path-pvc" class "local-path": started
time="2020-06-08T06:00:06Z" level=info msg="Creating volume pvc-f63f6e61-13ca-42a9-9e0f-3d3e37ed4919 at ensign:/var/lib/rancher/k3s/storage/pvc-f63f6e61-13ca-42a9-9e0f-3d3e37ed4919" 
ERROR: logging before flag.Parse: I0608 06:00:06.254082       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"local-path-pvc", UID:"f63f6e61-13ca-42a9-9e0f-3d3e37ed4919", APIVersion:"v1", ResourceVersion:"1422", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/local-path-pvc"
ERROR: logging before flag.Parse: W0608 06:02:06.698958       1 controller.go:686] Retrying syncing claim "default/local-path-pvc" because failures 5 < threshold 15
ERROR: logging before flag.Parse: E0608 06:02:06.699004       1 controller.go:701] error syncing claim "default/local-path-pvc": failed to provision volume with StorageClass "local-path": failed to create volume pvc-f63f6e61-13ca-42a9-9e0f-3d3e37ed4919: create process timeout after 120 seconds
ERROR: logging before flag.Parse: I0608 06:02:06.699439       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"local-path-pvc", UID:"f63f6e61-13ca-42a9-9e0f-3d3e37ed4919", APIVersion:"v1", ResourceVersion:"1422", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "local-path": failed to create volume pvc-f63f6e61-13ca-42a9-9e0f-3d3e37ed4919: create process timeout after 120 seconds
root@one instance$ kubectl describe configmap local-path-config -n kube-system                                                                                                                                                                
Name:         local-path-config
Namespace:    kube-system
Labels:       objectset.rio.cattle.io/hash=183f35c65ffbc3064603f43f1580d8c68a2dabd4
Annotations:  objectset.rio.cattle.io/applied:
                {"apiVersion":"v1","data":{"config.json":"{\n        \"nodePathMap\":[\n        {\n                \"node\":\"DEFAULT_PATH_FOR_NON_LISTED_...                                                                                 
              objectset.rio.cattle.io/id:
              objectset.rio.cattle.io/owner-gvk: k3s.cattle.io/v1, Kind=Addon
              objectset.rio.cattle.io/owner-name: local-storage
              objectset.rio.cattle.io/owner-namespace: kube-system

Data
====
config.json:
----
{
        "nodePathMap":[
        {
                "node":"DEFAULT_PATH_FOR_NON_LISTED_NODES",
                "paths":["/var/lib/rancher/k3s/storage"]
        }
        ]
}
Events:  <none>
root@one pvc$ kubectl describe pod/create-pvc-657e8959-445e-497e-bee9-c7ac3a8e2a69 -n kube-system
Name:         create-pvc-657e8959-445e-497e-bee9-c7ac3a8e2a69
Namespace:    kube-system
Priority:     0
Node:         ensign/10.180.0.43
Start Time:   Mon, 08 Jun 2020 06:32:58 +0000
Labels:       <none>
Annotations:  <none>
Status:       Failed
IP:           10.42.0.16
IPs:
  IP:  10.42.0.16
Containers:
  local-path-create:
    Container ID:  containerd://b9cb63cee62815a078b7774d68f578a312e3de4e9c8f3c31a1c47d2a48851e21
    Image:         busybox
    Image ID:      docker.io/library/busybox@sha256:95cf004f559831017cdf4628aaf1bb30133677be8702a8c5f2994629f637a209
    Port:          <none>
    Host Port:     <none>
    Command:
      mkdir
      -m
      0777
      -p
      /data/pvc-657e8959-445e-497e-bee9-c7ac3a8e2a69
    State:          Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Mon, 08 Jun 2020 06:32:59 +0000
      Finished:     Mon, 08 Jun 2020 06:32:59 +0000
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /data/ from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-54k9x (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  data:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/rancher/k3s/storage
    HostPathType:  DirectoryOrCreate
  default-token-54k9x:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-54k9x
    Optional:    false
QoS Class:       BestEffort

@usrbinkat

usrbinkat commented Jun 8, 2020

Resolved with the following edits to /var/lib/rancher/k3s/server/manifests/local-storage.yaml

Deployment - local-path-provisioner

62c62
--         image: rancher/local-path-provisioner:v0.0.11
++         image: rancher/local-path-provisioner:v0.0.14

ConfigMap - local-path-config

103c104
--                       "paths":["/var/lib/rancher/k3s/storage"]
++                       "paths":["/opt/local-path-provisioner/"]

Apply

kubectl apply -f /var/lib/rancher/k3s/server/manifests/local-storage.yaml
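To confirm the edits were actually picked up, something like the following should work (a sketch; the deployment and ConfigMap names are taken from the output above):

# verify the provisioner rolled out with the new image and path
kubectl -n kube-system rollout status deployment/local-path-provisioner
kubectl -n kube-system get deployment local-path-provisioner \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
kubectl -n kube-system get configmap local-path-config \
  -o jsonpath='{.data.config\.json}'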

@identitymonk

Hi @usrbinkat

It was working for me two weeks ago on CentOS 7 and no longer works on a fresh minimal CentOS 7 install. I got the same error as you.

Unfortunately your workaround does not work for me. Also, I find it strange to redirect the k3s storage to /opt, as it should really belong under /var. Any additional info you can provide on why you chose /opt here?
Thanks

@identitymonk

My resolution was disabling SELinux. I will follow up on the message related to the SELinux denial of the PVC creation.
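For anyone hitting the same thing, this is roughly what that looks like (a sketch; permissive mode is enough to confirm SELinux is the culprit, and the config edit makes it persist across reboots):

# switch SELinux to permissive immediately (reverts on reboot)
sudo setenforce 0
sudo getenforce
# make the change persistent
sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config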

@usrbinkat

usrbinkat commented Jun 10, 2020

@identitymonk It was not an opinionated choice on my part: I diffed the YAML ConfigMap from the official docs against the manifest built into the default k3s local-path-provisioner deployment and applied the differences I found. I could troubleshoot it further on my side, given enough cycles, to be sure.

Reference: 1 2

@usrbinkat

usrbinkat commented Jun 10, 2020

I also confirmed @identitymonk's conclusion:
1. Re-deployed CentOS 8
2. Deployed k3s from scratch
3. Disabled SELinux
4. Deployed the example PVC-backed nginx demo per the docs
5. Observed that the issue is resolved

root@one .ccio$ ssh ccio@ensign k3s --version
k3s version v1.18.3+k3s1 (96653e8d)
root@one .ccio$ kubectl get node -o wide
NAME     STATUS   ROLES    AGE   VERSION        INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME                                                                                           
ensign   Ready    master   37m   v1.18.3+k3s1   10.180.0.44   <none>        CentOS Linux 8 (Core)   4.18.0-147.8.1.el8_1.x86_64   containerd://1.3.3-k3s2
root@one .ccio$ ssh ccio@ensign                                                          
Last login: Wed Jun 10 19:56:54 2020                                                            
[ccio@ensign ~]$ sudo setenforce 0                                                                                    
[ccio@ensign ~]$ sudo getenforce                                                                                      
Permissive                                                                                                            
[ccio@ensign ~]$ exit                                                                                                 
logout                                                                                                                
Connection to ensign closed. 
root@one .ccio$ cat <<EOF | kubectl apply -f -                                                                       
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-path-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 2Gi
EOF
persistentvolumeclaim/local-path-pvc created
root@one .ccio$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: volume-test
  namespace: default
spec:
  containers:
  - name: volume-test
    image: nginx:stable-alpine
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: volv
      mountPath: /data
    ports:
    - containerPort: 80
  volumes:
  - name: volv
    persistentVolumeClaim:
      claimName: local-path-pvc
EOF
pod/volume-test created
root@one .ccio$ kubectl get pvc
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE    
local-path-pvc   Bound    pvc-e4f32a10-b858-4edb-8713-df64ff4cbdfe   2Gi        RWO            local-path     36s
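As a final check that the mount is writable from inside the pod (a sketch; /data is the mountPath from the pod spec above, and nginx:stable-alpine ships a shell):

# write and read back a file through the mounted volume
kubectl exec volume-test -- sh -c 'echo ok > /data/test && cat /data/test'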

@identitymonk

refer to #1821
