Out-of-tree CSI Volume Plugins #178
We aim to have a design doc for this by 1.6.
@saad-ali any update on this feature? Docs and release notes are required (please provide them in the features spreadsheet).
This does not need to be tracked for 1.6. It's design only. No bits shipping as part of 1.6. Removing 1.6 label.
@saad-ali thank you for clarifying.
@saad-ali is the idea behind the 3rd bullet
@saad-ali Also, it's not clear: how is this different from out-of-tree dynamic provisioners?
@saad-ali How does this compare with your CSI work in the CNCF?
Yep!
This feature (CSI) will enable completely out-of-tree volume plugins. Out-of-tree dynamic provisioners exist today as an alpha mechanism to enable volume provisioning (one functionality of volume plugins) to 1) be implemented out-of-tree, and 2) be decoupled from the rest of the volume plugin (potentially enabling multiple provisioners per volume type). Once Kubernetes supports CSI volume plugins, the only remaining benefit of out-of-tree dynamic provisioners will be (2) (decoupled from the rest of the volume plugin, potentially enabling multiple provisioners per volume type). We can decide at that point if it makes sense to continue to support them or deprecate them.
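(To illustrate point 2, here is a minimal sketch of the decoupling idea. All names here are hypothetical, not the actual external-storage library API: the point is that provisioning is one separable slice of a volume plugin's functionality, so several implementations can coexist for a single volume type.)

```go
// Hypothetical sketch: out-of-tree dynamic provisioning covers only the
// "provision/delete" slice of a volume plugin; attach/mount stay in-tree.
package main

import "fmt"

// VolumeSpec is a stand-in for the PersistentVolume source a provisioner returns.
type VolumeSpec struct {
	Server string
	Path   string
}

// Provisioner is the single decoupled responsibility: create and delete volumes.
// Multiple implementations can coexist for one volume type.
type Provisioner interface {
	Provision(name string, sizeGiB int) (*VolumeSpec, error)
	Delete(spec *VolumeSpec) error
}

// nfsProvisioner is one of potentially many provisioners for NFS volumes.
type nfsProvisioner struct{ server string }

func (p *nfsProvisioner) Provision(name string, sizeGiB int) (*VolumeSpec, error) {
	// A real implementation would carve out an export on the NFS server here.
	return &VolumeSpec{Server: p.server, Path: "/exports/" + name}, nil
}

func (p *nfsProvisioner) Delete(spec *VolumeSpec) error {
	// A real implementation would remove the export.
	return nil
}

func main() {
	var prov Provisioner = &nfsProvisioner{server: "nfs.example.com"}
	spec, _ := prov.Provision("pvc-1234", 10)
	fmt.Printf("provisioned %s:%s\n", spec.Server, spec.Path)
}
```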
CSI is the proposed protocol spec. The intention (or at least one of the intentions) of this feature is to implement that interface in Kubernetes.
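(For a concrete picture of what implementing that interface means for a driver author: a CSI plugin is a gRPC server, typically listening on a Unix socket, that implements the spec's services. Below is a minimal sketch of just the Identity service, assuming the v1.0 Go bindings published in the container-storage-interface/spec repo; the driver name and socket path are placeholders.)

```go
package main

import (
	"context"
	"log"
	"net"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
)

// identityServer implements the CSI Identity service, the minimal
// service every CSI plugin must expose.
type identityServer struct{}

func (s *identityServer) GetPluginInfo(ctx context.Context, req *csi.GetPluginInfoRequest) (*csi.GetPluginInfoResponse, error) {
	return &csi.GetPluginInfoResponse{
		Name:          "example.csi.vendor.com", // placeholder driver name
		VendorVersion: "0.1.0",
	}, nil
}

func (s *identityServer) GetPluginCapabilities(ctx context.Context, req *csi.GetPluginCapabilitiesRequest) (*csi.GetPluginCapabilitiesResponse, error) {
	return &csi.GetPluginCapabilitiesResponse{}, nil
}

func (s *identityServer) Probe(ctx context.Context, req *csi.ProbeRequest) (*csi.ProbeResponse, error) {
	return &csi.ProbeResponse{}, nil
}

func main() {
	// Placeholder socket path; kubelet discovers the socket via plugin registration.
	lis, err := net.Listen("unix", "/var/lib/kubelet/plugins/example/csi.sock")
	if err != nil {
		log.Fatal(err)
	}
	srv := grpc.NewServer()
	csi.RegisterIdentityServer(srv, &identityServer{})
	log.Fatal(srv.Serve(lis))
}
```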
@saad-ali Are you targeting alpha for v1.9? Is that information up-to-date?
Yes! All info is up to date.
An update on this: we've incorporated external provisioners into the CSI design!
@saad-ali can you point to the CSI design that has the external provisioners incorporated? Is it kubernetes/community#1258?
@saad-ali Will this work be in v1.9? I assume as alpha as the initial step.
@luxas @saad-ali does "alpha" mean that CSI would need to be enabled via a feature gate like all the other alpha features? Also, the CSI plugin depends on other features which are currently in alpha, like mount-propagation and perhaps containerized-mount. Will these be promoted to beta, or does this mean that in order to consume the CSI plugin, those features have to be enabled via their feature gates?
@saad-ali what's left of this work to go GA in 1.13? Are kubernetes/kubernetes#69690 and kubernetes/kubernetes#69688 the only two pending PRs for this feature?
@saad-ali when this goes GA, will there need to be any doc changes (1.13)?
@saad-ali will there be any updates to the docs necessary for 1.13? The deadline for placeholder PRs for the 1.13 release is November 8, so it's important to make a docs PR as soon as possible if needed. Thanks! cc @idvoretskyi @AishSundar @tfogo
Hi @saad-ali, I'm an enhancements shadow checking in on how this issue is tracking. Code slush is on 11/9 and code freeze is coming up on 11/15. Do you have a status update on the likelihood that this will make the code freeze date?
Tasks for 1.13:
We are still on track for 1.13, although the timeline will be very tight. The long-pole item is moving the CSI spec to 1.0. All other tasks are close to being merged.
@saad-ali @msau42 I understand you are working actively on getting this to GA. I see the 2 open PRs are close, but is there a tracking issue for the 1.0 spec? Can either of you attend the release burndown meeting (Zoom: https://zoom.us/j/611312756) this Wednesday (11/14)? We would like to discuss the latest Go/No-Go update for 1.13. Thanks
CSI 1.0.0-rc1 spec was cut on Monday (https://github.com/container-storage-interface/spec/releases/tag/v1.0.0-rc1). We are working to pick it up in Kubernetes (kubernetes/kubernetes#71020). @msau42 will attend the burndown.
/reopen |
@kacole2: Reopened this issue. In response to this:
We have some post GA work, like updating the in-tree controllers to use the new v1 objects. |
@msau42 but is that "enhancement"-related, or is that more routine updating?
It's cleanup/refactoring work that should not have any user-visible effects, so I guess this can be closed.
We can close this and open issues for the remaining cleanup work. There are already individual enhancement issues open for CSI alpha features.
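(For context on the "new v1 objects" mentioned above: with CSI GA in 1.13, VolumeAttachment is served from storage.k8s.io/v1. A minimal sketch of listing the v1 objects with a recent client-go, assuming a default kubeconfig; this is an illustration, not the in-tree controller code.)

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a kubeconfig at the default path; error handling kept minimal.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// VolumeAttachment is served from storage.k8s.io/v1 once CSI is GA.
	vas, err := cs.StorageV1().VolumeAttachments().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, va := range vas.Items {
		fmt.Printf("%s attached=%v\n", va.Name, va.Status.Attached)
	}
}
```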
@msau42: Closing this issue. In response to this:
@childsb
@jsafrane
@thockin
Background
Kubernetes volume plugins are currently all "in-tree", meaning their source code is included in the main Kubernetes repo. All volume plugins are compiled into and shipped with the Kubernetes binaries.
The drawback of this approach is that it requires third-party storage vendors wanting to support Kubernetes to commit code to the Kubernetes repo, and thus be locked into the Kubernetes release schedule. It also requires them to make their source code public/open-source.
While the Flex volume plugin already provides a mechanism for plugin developers to experiment with out-of-tree plugins, it provides no guarantees of backwards compatibility (since it is alpha), and is completely exec based (driver installation requires the ability to deploy files to specific locations on node and master machines).
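(To make "completely exec based" concrete: a Flex driver is simply an executable that the kubelet invokes with subcommands such as init, mount, and unmount, and that replies with JSON on stdout. A minimal sketch of that call convention, assuming the standard Flex status fields:)

```go
// Sketch of the Flex volume exec contract: the kubelet shells out to this
// binary and parses JSON from stdout. Field names follow the documented
// Flex status format.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type result struct {
	Status       string          `json:"status"` // "Success", "Failure", or "Not supported"
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func reply(r result) {
	out, _ := json.Marshal(r)
	fmt.Println(string(out))
}

func main() {
	if len(os.Args) < 2 {
		reply(result{Status: "Failure", Message: "no subcommand"})
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		// Report that this driver does not implement attach/detach.
		reply(result{Status: "Success", Capabilities: map[string]bool{"attach": false}})
	case "mount", "unmount":
		// A real driver would perform the (un)mount using os.Args[2:].
		reply(result{Status: "Not supported"})
	default:
		reply(result{Status: "Not supported"})
	}
}
```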
This feature aims to create a new (or adopt an existing) API for Volume Plugins in Kubernetes that:
CC @kubernetes/sig-storage-feature-requests @kubernetes/sig-storage-proposals