
Creation of additional storage classes with dynamic volume provisioning: impossible or documentation lacking? #11947

Open
victor-sudakov opened this issue Jul 9, 2021 · 27 comments
Labels
addon/storage-provisioner Issues relating to storage provisioner addon help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. kind/feature Categorizes issue or PR as related to a new feature. lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. priority/backlog Higher priority than priority/awaiting-more-evidence.

Comments

@victor-sudakov

victor-sudakov commented Jul 9, 2021

Running minikube version: v1.20.0, commit: c61663e

Only one storage class is created when starting minikube:

$ kubectl get sc
NAME                 PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
standard (default)   k8s.io/minikube-hostpath   Delete          Immediate           false                  18h

Dynamic Provisioning in this storage class works fine.

I would like to add more storage classes with dynamic volume provisioning, so that I can test my PersistentVolumeClaim manifests (written for EKS) on Minikube without modification. These storage classes need not differ physically or programmatically from the standard Minikube SC; I just need the manifests, which reference different "storageClassName" attributes like "gp3", "sc1" etc., to apply cleanly.

If the creation of additional storage classes with dynamic volume provisioning is possible, I cannot find it documented anywhere for Minikube.
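For illustration, the kind of manifest I want to apply unmodified looks roughly like this (the claim name and size here are invented; only the storageClassName matters):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim            # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp3       # an EKS class that does not exist on a stock minikube
  resources:
    requests:
      storage: 1Gi
```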

@ilya-zuyev ilya-zuyev added the kind/support Categorizes issue or PR as a support question. label Jul 9, 2021
@ilya-zuyev
Contributor

ilya-zuyev commented Jul 9, 2021

@victor-sudakov in general, adding a new storage class to minikube is the same process as for a regular kubernetes cluster.
https://kubernetes.io/docs/concepts/storage/storage-classes/

There may be some specifics for certain SCs related to minikube's internal environment. If you run into any issues, please feel free to report them here.

➜  minikube git:(master) ✗ kubectl get sc
NAME                 PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
standard (default)   k8s.io/minikube-hostpath   Delete          Immediate           false                  19h

➜  minikube git:(master) ✗ kubectl apply -f - <<EOF 
heredoc> apiVersion: storage.k8s.io/v1
heredoc> kind: StorageClass
heredoc> metadata:
heredoc>   name: local-storage
heredoc> provisioner: kubernetes.io/no-provisioner
heredoc> volumeBindingMode: WaitForFirstConsumer
heredoc> EOF
storageclass.storage.k8s.io/local-storage created

➜  minikube git:(master) ✗ kubectl get sc
NAME                 PROVISIONER                    RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-storage        kubernetes.io/no-provisioner   Delete          WaitForFirstConsumer   false                  4s
standard (default)   k8s.io/minikube-hostpath       Delete          Immediate              false                  19h
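Note that kubernetes.io/no-provisioner means no dynamic provisioning at all: PVs for such a class have to be created manually. A manual PV for the class above could look something like this (the path is just an example and must exist on the node):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-1              # example name
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage
  local:
    path: /data/local-pv-1      # must already exist on the node
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - minikube
```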

@victor-sudakov
Author

@ilya-zuyev I have a question: what is this provisioner: kubernetes.io/no-provisioner parameter in your example? Shouldn't it be provisioner: k8s.io/minikube-hostpath? Will your storage class even work (allocate storage on the host) that way?

@victor-sudakov
Author

OK, I have created an SC as you advised:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2
provisioner: k8s.io/no-provisioner
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer

The storage class is there

$ kubectl get sc
NAME                 PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
gp2                  k8s.io/no-provisioner      Delete          WaitForFirstConsumer   false                  2m21s

but my PVCs using that storage class are in the Pending state with the message "waiting for a volume to be created, either by external provisioner "k8s.io/no-provisioner" or manually created by system administrator"

Obviously I'm doing something wrong, or your advice is incomplete?

@victor-sudakov
Author

Maybe it's useful to provide the complete output:

$ kubectl describe persistentvolumeclaim/ebspod1
Name:          ebspod1
Namespace:     default
StorageClass:  gp2
Status:        Pending
Volume:        
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-provisioner: k8s.io/no-provisioner
               volume.kubernetes.io/selected-node: minikube
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
VolumeMode:    Filesystem
Used By:       diskopod1
Events:
  Type    Reason                Age                  From                         Message
  ----    ------                ----                 ----                         -------
  Normal  WaitForFirstConsumer  2m15s                persistentvolume-controller  waiting for first consumer to be created before binding
  Normal  ExternalProvisioning  2s (x11 over 2m15s)  persistentvolume-controller  waiting for a volume to be created, either by external provisioner "k8s.io/no-provisioner" or manually created by system administrator
$ 

@victor-sudakov victor-sudakov changed the title Creation of additional storage classes: impossible or documentation lacking? Creation of additional storage classes with dynamic volume provisioning: impossible or documentation lacking? Jul 12, 2021
@victor-sudakov
Author

If I create a storage class with provisioner: k8s.io/minikube-hostpath hoping for dynamic volume provisioning to work:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: io1
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
provisioner: k8s.io/minikube-hostpath
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer

Then PVCs fail with the following error: failed to get target node: nodes "minikube" is forbidden: User "system:serviceaccount:kube-system:storage-provisioner" cannot get resource "nodes" in API group "" at the cluster scope:

Events:
  Type     Reason                Age                  From                                                                    Message
  ----     ------                ----                 ----                                                                    -------
  Normal   WaitForFirstConsumer  2m15s                persistentvolume-controller                                             waiting for first consumer to be created before binding
  Warning  ProvisioningFailed    30s (x4 over 2m15s)  k8s.io/minikube-hostpath_minikube_40f19f28-8735-4784-ab6d-ccca0a4d856a  failed to get target node: nodes "minikube" is forbidden: User "system:serviceaccount:kube-system:storage-provisioner" cannot get resource "nodes" in API group "" at the cluster scope
  Normal   ExternalProvisioning  7s (x11 over 2m15s)  persistentvolume-controller                                             waiting for a volume to be created, either by external provisioner "k8s.io/minikube-hostpath" or manually created by system administrator

@victor-sudakov
Author

I have actually found a workaround: I've created a number of static PVs, plus several SCs named "gp2", "gp3" etc. that refer to those static PVs, so now I can use those SCs for testing purposes. But why can't I have those PVs created dynamically for me by Minikube's dynamic storage controller?
stupid_workaround.yaml.txt
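Roughly, the workaround consists of pairs like the following, with one or more pre-created static PVs per fake storage class (the names and sizes here are illustrative, not the exact contents of the attachment):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gp2-pv-1              # illustrative name
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: gp2
  hostPath:
    path: /tmp/gp2-pv-1       # backing directory inside the minikube VM
```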

@victor-sudakov
Author

Attached (testcase.yaml.txt) is a complete test case for the problem. Apply the manifest and watch the "diskopod1" pod never start.

@victor-sudakov
Author

So, how can I make non-default SCs use the minikube-hostpath provisioner (in order to allocate PVs dynamically)?

@medyagh
Member

medyagh commented Sep 1, 2021

@victor-sudakov thanks for sharing your experience and the workaround! This could be a tutorial on our website; I would accept a PR to add it.

And if you want, you could make the minikube storage provisioner configurable.

@victor-sudakov the storage provisioner is implemented as an addon itself so you should be able to disable the default one

$ minikube addons disable storage-provisioner

does that help ?

@medyagh
Member

medyagh commented Sep 1, 2021

/triage needs-information

@k8s-ci-robot k8s-ci-robot added the triage/needs-information Indicates an issue needs more information in order to work on it. label Sep 1, 2021
@medyagh medyagh added the addon/storage-provisioner Issues relating to storage provisioner addon label Sep 1, 2021
@victor-sudakov
Author

the storage provisioner is implemented as an addon itself so you should be able to disable the default one

$ minikube addons disable storage-provisioner

does that help ?

Actually, the bundled storage provisioner is a very nice thing to have, I don't want to disable it, I would prefer for it to support arbitrary storage classes transparently.

@sharifelgamal
Collaborator

Yeah this just isn't something our storage-provisioner/default-storageclass addons support currently, but there is no theoretical reason why we couldn't. We would happily accept a PR that does this.

@sharifelgamal sharifelgamal added help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. kind/feature Categorizes issue or PR as related to a new feature. priority/backlog Higher priority than priority/awaiting-more-evidence. and removed kind/support Categorizes issue or PR as a support question. triage/needs-information Indicates an issue needs more information in order to work on it. labels Sep 15, 2021
@victor-sudakov
Author

Yeah this just isn't something our storage-provisioner/default-storageclass addons support currently, but there is no theoretical reason why we couldn't. We would happily accept a PR that does this.

I'm afraid I'm not qualified enough to offer a PR for this. Thanks for confirming that this is currently not technically possible, rather than a misconfiguration on my part. Shall we close the issue?

@yayaha
Contributor

yayaha commented Oct 1, 2021

I'd like to contribute, but I need someone with enough knowledge to point me in the right direction. Thanks!

@medyagh
Member

medyagh commented Oct 27, 2021

@victor-sudakov there is a PR that might fix this: #12797. Here are links to the binaries built from that PR; do you mind trying one out and seeing if it fixes the issue?

http://storage.googleapis.com/minikube-builds/12797/minikube-linux-amd64
http://storage.googleapis.com/minikube-builds/12797/minikube-darwin-amd64
http://storage.googleapis.com/minikube-builds/12797/minikube-windows-amd64.exe

@victor-sudakov
Author

@victor-sudakov there is a PR that might fix this: #12797. Here are links to the binaries built from that PR; do you mind trying one out and seeing if it fixes the issue?

@medyagh Yes, with that minikube binary my test case (see above) is now working and the pod is running:

$ kubectl get sc,pv,pvc
NAME                                             PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
storageclass.storage.k8s.io/gp5                  k8s.io/minikube-hostpath   Delete          WaitForFirstConsumer   false                  3m25s
storageclass.storage.k8s.io/standard (default)   k8s.io/minikube-hostpath   Delete          Immediate              false                  5m8s

NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM           STORAGECLASS   REASON   AGE
persistentvolume/pvc-0cd25d47-1129-4cf6-81a0-0cdbdbf765a1   1Gi        RWO            Delete           Bound    test1/ebspod1   gp5                     3m24s

NAME                            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/ebspod1   Bound    pvc-0cd25d47-1129-4cf6-81a0-0cdbdbf765a1   1Gi        RWO            gp5            3m25s
$ 
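The gp5 class follows the same pattern as the io1 example earlier in the thread, using the minikube-hostpath provisioner (fields reconstructed from the kubectl get sc output above):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp5
provisioner: k8s.io/minikube-hostpath
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```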

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 26, 2022
@victor-sudakov
Author

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 26, 2022
@cvetomir-todorov

+1 from me.

@k8s-triage-robot


/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 18, 2022
@victor-sudakov
Author

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 18, 2022
@k8s-triage-robot


/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 16, 2022
@k8s-triage-robot


/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Sep 15, 2022
@victor-sudakov
Author

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Sep 16, 2022
@k8s-triage-robot


/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 8, 2023
@DerekTBrown

DerekTBrown commented May 2, 2023

I had the same issue/experience. Ultimately, here is what ended up working:

# tilt-resources.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: storage-provisioner-cluster-role-binding
  namespace: kube-system
subjects:
- kind: ServiceAccount
  name: storage-provisioner
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-storage-class
provisioner: k8s.io/minikube-hostpath
volumeBindingMode: WaitForFirstConsumer
# Tiltfile
k8s_yaml("./tilt-resources.yaml")

It seems like the minikube storage provisioner addon should grant the provisioner the appropriate permissions from the get-go.
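A quick way to check whether the service account has the permission in question (this should print "no" on a stock minikube that exhibits the issue, and "yes" after the ClusterRoleBinding above is applied):

```shell
kubectl auth can-i get nodes \
  --as=system:serviceaccount:kube-system:storage-provisioner
```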

@mindthecap

To fix the cannot get resource "nodes" in API group "" at the cluster scope error, a better approach is to grant the minimal required rights to the service account:

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: storage-provisioner-minikube-volume-provisioner
  namespace: kube-system
subjects:
  - kind: ServiceAccount
    name: storage-provisioner
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: minikube-volume-provisioner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: minikube-volume-provisioner
rules:
  - verbs:
      - get
      - list
      - watch
    apiGroups:
      - ""
    resources:
      - nodes

This adds a new ClusterRole for the minikube volume provisioner and binds it to the storage-provisioner service account. I don't know if the account names are stable between Minikube versions; I'm stuck with v1.31.2 right now.
