Snapshot / Restore Volume Support for Kubernetes (CRD + External Controller) #177
/cc @skriss |
@jingxu97 I think we're going to have something in alpha for 1.7, can we please set the milestone to 1.7? |
@kubernetes/sig-storage-feature-requests could someone please update the issue description to the new template. Thanks! |
I will work on it. Thanks!
|
Is this based on the proposal here: https://github.com/kubernetes/community/blob/master/contributors/design-proposals/volume-snapshotting.md ? |
No, the new proposal doc is here
https://docs.google.com/document/d/17WS4Wk4MXRH24i-BpMpIFo5F-SNoRkm_KtkBMZEEoAo
Please let me know if you cannot open it. Thanks!
|
@jingxu97 thanks! I can't access it but I just requested access. |
@jingxu97 I wanted to add an important note regarding the overall approach. I'm not sure this is the right place for it, but I would be happy to be pointed to a better forum.

Coming from a long background in oVirt and OSP, I feel there is an important aspect that needs to be discussed: ownership of snapshot state (and volume metadata in general), and discoverability of that metadata. Both in the cloud and on premise, users (developers and admins) may prefer the native storage APIs over the Kube APIs. Kube is also not the component that actually creates the snapshot; the cloud service or the storage system does, so Kube is not the owner of this metadata. OSP already made this mistake with Cinder, which forces users who want snapshots to go through the Cinder API for them to be visible in the OSP environment. Cinder does not know about snapshots created directly on the storage, and if you lose Cinder you lose everything, because the metadata that counts lives in its stateful DB.

What I'm trying to say is that Kube should not try to own the volume metadata. Concretely, that means things like periodically checking whether a new snapshot was created directly via the storage service API, and using the storage service's snapshot ID as the snapshot's ID. Kube clearly should expose snapshotting, since the container use case needs it, but it must not become a storage abstraction service like Cinder. We do not want to chase storage service features or limit users to the Kube API; it should be an option. We also want to allow discovery of volumes no matter where they were created. As we get to more complex features such as QoS and oversubscription, we want to expose and reuse the storage service's capabilities, not replace them or block users from reaching them via the cloud service or storage management APIs. |
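For context on how this concern was eventually accommodated: the out-of-tree design separates the namespaced, user-facing `VolumeSnapshot` object from a cluster-scoped `VolumeSnapshotContent` object that records the storage system's own snapshot handle, so a snapshot created directly on the storage backend can be imported into Kubernetes rather than owned by it. A minimal sketch using the v1 API as it later shipped in external-snapshotter (the object names, driver, and handle below are placeholders):

```yaml
# Import a pre-existing, storage-created snapshot into Kubernetes.
# The snapshotHandle is the storage system's own ID; Kubernetes only
# references it, it does not create or own the snapshot.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotContent
metadata:
  name: imported-snapshot-content        # placeholder name
spec:
  deletionPolicy: Retain                 # deleting the K8s object keeps the storage snapshot
  driver: example.csi.vendor.com         # placeholder CSI driver name
  source:
    snapshotHandle: 7bdd0de3-aaeb-11e8   # placeholder storage-side snapshot ID
  volumeSnapshotRef:                     # binds to the namespaced user-facing object
    name: imported-snapshot
    namespace: default
```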
@jingxu97 any progress on the feature description? @kubernetes/sig-storage-feature-requests |
@mdelio @jingxu97 please, update the feature description with the new template - https://github.com/kubernetes/features/blob/master/ISSUE_TEMPLATE.md |
@idvoretskyi I updated the original comment with the new template. This feature is not actually shipping any bits in the Kubernetes core for v1.7, so I moved it to the |
@saad-ali any updates for 1.8? Is this feature still on track for the release? |
cc @tsmetana |
@idvoretskyi this is still on track: kubernetes-retired/external-storage#331 |
Thanks @eagleusb ! I'll submit a placeholder doc soon. |
Hi @xing-yang 👋 Thanks for your update. In the meantime, the docs placeholder deadline is almost here. Please make sure to create a placeholder PR against the kubernetes/website repository. Also, please keep in mind the important upcoming dates:
|
Hi @eagleusb , Thanks for the reminder! Doc PR is submitted here: kubernetes/website#24849 |
Hi @xing-yang Looks like kubernetes/kubernetes#95282 is still open but being actively worked on. Just a reminder that Code Freeze is coming up in 2 days on Thursday, November 12th. All PRs must be merged by that date, otherwise an Exception is required. Best, |
Thanks for the reminder! We are trying to get reviewers to finish reviewing and approving the PR by the 11/12 deadline. |
This PR that updates snapshot CRDs to v1 for cluster addon is merged: kubernetes/kubernetes#96383 |
Great! just waiting on kubernetes/kubernetes#95282 |
kubernetes/kubernetes#95282 is approved. Just waiting for it to be merged. :) |
Yay! It's merged! Updating tracking sheet. Congrats! 🎆 |
Thanks @kikisdeliveryservice! |
Hi @xing-yang Can you update the kep.yaml to reflect a status of implemented:
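As a sketch of what that change typically looks like (field names follow the enhancements KEP template; the milestone value here is an assumption based on the GA release discussed in this thread, and surrounding fields are elided):

```yaml
# kep.yaml (excerpt): mark the enhancement as implemented
status: implemented
latest-milestone: "v1.20"
```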
Once that merges we can then close this issue. Thanks! |
Will submit a PR soon. Thanks! |
Thanks @xing-yang it's merged! Feel free to close this issue 😄 |
Hello, 1.21 Enhancement lead here. /close |
@annajung: Closing this issue. |
Hi @jingxu97 👋 1.24 RT Comms lead here. I saw a note that the VolumeSnapshot v1beta1 CRD will be removed in 1.24. Would this be appropriate to include in our 1.24 Removals and Deprecations blog post? |
Yes, I think so @mickeyboxell |
Thanks for confirming! @jingxu97 What information would you like communicated in the blog? I read that the functionality entered beta in 1.20 and was a little confused about the v1beta1 CRD now being removed. Did the project graduate to stable or was it replaced with an alternative API? |
VolumeSnapshot went GA in 1.20 |
@mickeyboxell I added that entry in the spreadsheet. VolumeSnapshot went GA in 1.20. Following the K8s 1.21 release, we deprecated VolumeSnapshot v1beta1. Since VolumeSnapshot is an out-of-tree CRD, the deprecation message is in the release note here: Now we are ready to remove the VolumeSnapshot v1beta1 CRD in our next external-snapshotter release, which will be v6.0, shortly after the K8s 1.24 release. We want to add a message in the deprecation/removal blog indicating that the VolumeSnapshot v1beta1 CRD will be removed in K8s 1.24. Hope this helps. |
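For users still applying v1beta1 manifests, the migration is typically just the `apiVersion` bump, since the v1 schema kept the same core fields. A minimal example (the snapshot, class, and PVC names below are placeholders):

```yaml
# Before removal: apiVersion: snapshot.storage.k8s.io/v1beta1
# After:          apiVersion: snapshot.storage.k8s.io/v1
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: example-snapshot
spec:
  volumeSnapshotClassName: example-snapclass   # placeholder class name
  source:
    persistentVolumeClaimName: example-pvc     # placeholder PVC name
```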
@mickeyboxell In addition to the deprecation/removal blog, I'd also like a release note in the K8s v1.24 release notes indicating that the VolumeSnapshot v1beta1 CRD will be removed. Where can I add that release note? |
I'm not sure how their process works. You may want to reach out to the #release-notes channel for more information. |
Feature Description
Old description: