Velero backup should capture all cluster scoped CRDs associated with an application running in a namespace #4876
@narendermehra27 Velero currently pulls in any CRDs for which there are associated CRs included in the backup. As you mentioned, though, if there are short-lived CRs (or, more generally, there just don't happen to be any CRs existing at backup time for the CRD), those aren't included. Regarding the request for "all CRDs associated with the application" -- how would you go about getting this list? Currently we're basing "associated with the namespace" on "actually existing CRs in that namespace". While this is insufficient, what alternate means of identifying CRDs, other than "grab every CRD in the cluster", would meet your needs?
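For reference, one way to check which CRDs actually landed in a given backup is to inspect its resource list with the standard Velero CLI; a minimal sketch, assuming a backup named myapp-backup (the name is illustrative):

```sh
# Show everything captured by an existing backup, including any
# customresourcedefinitions that were pulled in because live CRs
# existed in the backed-up namespace at backup time.
velero backup describe myapp-backup --details

# Alternatively, download the backup tarball and inspect its contents.
velero backup download myapp-backup
```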
@sseago thanks for your reply. Yes, we wanted to understand exactly that: whether there are any foolproof means of identifying all CRDs associated with an application, short of explicitly labeling them beforehand. In the absence of that, we wanted your opinion on capturing all cluster-scoped CRDs as a fallback. Is it safe to back up all CRDs on the cluster where an application is backed up and apply them on a second cluster (where we would try to restore that application)? In my understanding, the current behavior is that a resource is not overwritten if it already exists, and this would likely become tunable with the future enhancement in #4842. But during restore we would want to overwrite/patch selectively, only for the relevant cluster-scoped resources, which again we have no way of identifying -- so we are back to square one. Any thoughts are welcome.
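To make the labeling idea above concrete, here is a rough sketch of the "label the CRDs beforehand" workaround, assuming the application's CRDs can be identified once and labeled up front; the CRD name, label, and backup names are hypothetical:

```sh
# One-time step: label each CRD that belongs to the application.
kubectl label crd widgets.example.com app.kubernetes.io/part-of=myapp

# Back up the application namespace as usual.
velero backup create myapp-ns-backup --include-namespaces myapp

# Back up the labeled CRDs separately, so they are captured even when
# no matching CRs exist in the namespace at backup time.
velero backup create myapp-crds-backup \
  --include-resources customresourcedefinitions.apiextensions.k8s.io \
  --include-cluster-resources=true \
  --selector app.kubernetes.io/part-of=myapp
```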
@narendermehra27 Currently, the only way to pull in all cluster-scoped resources is to set include-cluster-resources to true on the backup.
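A minimal sketch of that option, with illustrative namespace and backup names; note that this pulls in every cluster-scoped resource in the cluster, not just the CRDs belonging to the application:

```sh
# Force all cluster-scoped resources (including every CRD) into the backup.
velero backup create myapp-backup \
  --include-namespaces myapp \
  --include-cluster-resources=true
```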
Describe the problem/challenge you have
We want to understand why all CRDs associated with an application are not backed up under the default include-cluster-resources (nil) setting when we take a namespace-level backup. Some CRs may be short-lived and not present in the namespace at backup time; as a result, Velero treats their cluster-scoped CRDs as not relevant and skips them, yet the application may require all of its CRDs at restore time in order to start successfully. We have observed application startup failing after a Velero restore because some cluster-scoped CRDs were absent.
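For illustration, the default namespace-scoped backup that exhibits this behavior looks roughly like the following (names are placeholders); with include-cluster-resources left unset, a CRD is included only if at least one of its CRs exists in the namespace when the backup runs:

```sh
# Namespace-scoped backup with include-cluster-resources left at its
# default (nil): CRDs are included only when matching CRs are present.
velero backup create myapp-backup --include-namespaces myapp
```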
Describe the solution you'd like
For a namespace-scoped backup of an application with the default include-cluster-resources (nil) setting, all cluster-scoped CRDs associated with that application should be backed up as well, irrespective of whether any of their CRs are instantiated in the application namespace at backup time.
Anything else you would like to add:
Environment:
- Velero version (use velero version): 1.6.0
- Kubernetes version (use kubectl version):
- Kubernetes installer & version:
    oc version
    Client Version: 4.8.2
    Server Version: 4.9.21
    Kubernetes Version: v1.22.3+fdba464
- Cloud provider or hardware configuration: OpenShift Platform 4.9
- OS (e.g. from /etc/os-release): RHEL CoreOS 8.1
Vote on this issue!
This is an invitation to the Velero community to vote on issues; you can see the project's top-voted issues listed here.
Use the "reaction smiley face" up to the right of this comment to vote.
👍 for "The project would be better with this feature added"
👎 for "This feature will not enhance the project in a meaningful way"