
--feature-gates=DynamicVolumeProvisioning=false not turning off dynamic volume provisioning? #1240

Closed
shufflingB opened this issue Mar 14, 2017 · 9 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@shufflingB

shufflingB commented Mar 14, 2017

Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT

Minikube version (use minikube version): minikube version: v0.17.1

Environment:

  • OS (e.g. from /etc/os-release): macOS Sierra 10.12.3
  • VM Driver (e.g. cat ~/.minikube/machines/minikube/config.json | grep DriverName): "DriverName": "xhyve"
  • ISO version (e.g. cat ~/.minikube/machines/minikube/config.json | grep -i ISO or minikube ssh cat /etc/VERSION): v1.0.7
  • Install tools:
  • Others:

What happened:

I'm trying to characterise why the "creating and using a persistent volume" example from here:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/

which previously worked for me, now doesn't; for more, see:

kubernetes/website#2803.

I thought that if I explicitly turned off dynamic volume provisioning when starting minikube, it might stop the dynamic provisioning and get the example working again without having to resort to an annotation as a fix/workaround. It doesn't make any difference, though.
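For reference, the annotation workaround mentioned above amounts to giving the claim an empty storage class so no dynamic provisioner picks it up. A sketch based on the claim from the linked example (the pre-1.6 beta annotation form is assumed here, as is the claim's name and size):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
  annotations:
    # An empty storage class opts this claim out of dynamic provisioning,
    # so it can only bind to a matching pre-created PersistentVolume.
    volume.beta.kubernetes.io/storage-class: ""
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
```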

What you expected to happen:

The claim should not have triggered the creation of the dynamic (pvc-... ) volume, at least if the example is correct.

How to reproduce it (as minimally and precisely as possible):

  1. minikube start --feature-gates=DynamicVolumeProvisioning=false
  2. Follow steps in https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/ to create pv and pvc.
  3. kubectl get pvc and kubectl get pv should show something similar to what I see, e.g.
foobar$ kubectl get pvc
NAME            STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
task-pv-claim   Bound     pvc-ebd839f2-08a5-11e7-9258-1af29ba302ad   3Gi        RWO           21s

foobar$ kubectl get pv 
NAME                                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM                   REASON    AGE
pvc-ebd839f2-08a5-11e7-9258-1af29ba302ad   3Gi        RWO           Delete          Bound       default/task-pv-claim             3m
task-pv-volume                             10Gi       RWO           Retain          Available                                     3m

As can be seen, the manually created volume remains unbound and the PVC has triggered the creation of a dynamically provisioned volume, which is unexpected.

Repeating the experiment without --feature-gates=DynamicVolumeProvisioning=false shows no change in behaviour, i.e. the example remains broken in both cases.

Anything else we need to know:

@r2d4 r2d4 added the kind/bug Categorizes issue or PR as related to a bug. label Mar 14, 2017
@aaron-prindle
Contributor

Can you post the output of minikube logs? There should be a log message stating that feature gates have been enabled:
https://github.com/kubernetes/minikube/blob/master/cmd/localkube/cmd/start.go#L77

@shufflingB
Author

Hi Aaron,

Yep, got that ...

-- Logs begin at Fri 2017-03-17 07:41:31 UTC, end at Fri 2017-03-17 07:45:09 UTC. --
Mar 17 07:41:46 minikube systemd[1]: Starting Localkube...
Mar 17 07:41:46 minikube localkube[3282]: I0317 07:41:46.579317    3282 start.go:77] Feature gates:%!(EXTRA string=DynamicVolumeProvisioning=false)
Mar 17 07:41:46 minikube localkube[3282]: I0317 07:41:46.579533    3282 feature_gate.go:189] feature gates: map[DynamicVolumeProvisioning:false]
Mar 17 07:41:46 minikube localkube[3282]: localkube host ip address: 192.168.64.5

The rest of the log from running the example is attached below.

Kind regards

Jon

Log from minikube start --feature-gates=DynamicVolumeProvisioning=false.txt

@fschuh

fschuh commented Mar 28, 2017

I'm having this same issue with Minikube 0.17.1 on Windows 7.
No matter what I do, I can't get my PersistentVolumeClaims to bind to existing PersistentVolumes. It always ends up dynamically creating a new volume, just as described by @shufflingB .

I'm not using the --feature-gates=DynamicVolumeProvisioning option, but regardless I would expect my PersistentVolumes to be bound and not ignored by the claims.

Is this a bug or expected behavior in Minikube?

@r2d4
Contributor

r2d4 commented Mar 30, 2017

I'm not sure exactly why the option is being ignored, but in the next version of minikube you will be able to disable dynamic hostpath provisioning by running minikube addons disable default-storageclass

#1289
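A sketch of that workaround as commands, assuming the addon name as given above (requires a running minikube cluster; output will vary):

```shell
# Disable the addon that installs the default hostpath StorageClass.
minikube addons disable default-storageclass

# Verify that no default StorageClass remains. With none present,
# new PVCs should bind to matching pre-created PVs instead of
# triggering dynamic provisioning.
kubectl get storageclass
```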

@alexef

alexef commented Apr 6, 2017

It's also happening to me.

@dlorenc
Contributor

dlorenc commented Oct 19, 2017

Does disabling the addon work?

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 17, 2018
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Feb 16, 2018
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
