Backup Azure Managed Disks from multiple Resource Groups #3157
Comments
@cholm321 If I am following this correctly, the issue you are facing is an error during disk-information lookup, which uses the resource group from the environment variables derived from the configured cloud credentials. This is currently an unsupported capability in Velero. We have plans to add support for multiple cloud credentials, and I expect this to be addressed as a side effect of that.
This is the design for supporting multiple credentials.
@ashish-amarnath I would think that when Azure Managed Disks are placed across several Azure Resource Groups, Velero should just look up the resource group from the disk itself, since the PV object already carries the full disk URI.
Why does Velero not just do that lookup?
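For context, an Azure managed-disk URI already embeds the resource group, and a dynamically provisioned azureDisk PV carries it in `spec.azureDisk.diskURI`. The helper below is a minimal, hypothetical sketch of the lookup being described here; it is not Velero code:

```go
package main

import (
	"fmt"
	"strings"
)

// resourceGroupFromDiskURI extracts the resource group segment from a URI like
// /subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Compute/disks/<name>.
func resourceGroupFromDiskURI(diskURI string) (string, error) {
	parts := strings.Split(diskURI, "/")
	for i, p := range parts {
		if strings.EqualFold(p, "resourceGroups") && i+1 < len(parts) {
			return parts[i+1], nil
		}
	}
	return "", fmt.Errorf("no resource group found in disk URI %q", diskURI)
}

func main() {
	uri := "/subscriptions/0000/resourceGroups/MC_dev_myk8scluster_europewest/providers/Microsoft.Compute/disks/pvc-1234"
	rg, err := resourceGroupFromDiskURI(uri)
	fmt.Println(rg, err) // MC_dev_myk8scluster_europewest <nil>
}
```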
That metadata is specific to Azure.
The volumesnapshotter interface is provider-agnostic: it is given only the generic information needed to locate the volume and call the volume provider's snapshot API, essentially a volume ID and availability zone, so there is no place to pass an Azure resource group through it. Changing the interface to carry that information would, unfortunately, be a breaking change across all providers.
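For reference, the provider-agnostic plugin interface being discussed looks roughly like the sketch below (abridged and paraphrased from the Velero v1 plugin API; see the Velero repository for the authoritative definition). The point is that `CreateSnapshot` only receives a volume ID, an availability zone, and tags, so any Azure resource group has to come from the `Init` config rather than from the PV:

```go
package velero

import "k8s.io/apimachinery/pkg/runtime"

// VolumeSnapshotter is the provider-agnostic snapshot plugin interface (abridged).
type VolumeSnapshotter interface {
	// Init configures the plugin from the VolumeSnapshotLocation / credentials
	// config; for Azure this is where the single resource group comes from.
	Init(config map[string]string) error

	// GetVolumeID extracts the provider-specific volume identifier from the PV object.
	GetVolumeID(pv runtime.Unstructured) (string, error)

	// CreateSnapshot snapshots a volume identified only by its ID, availability
	// zone, and tags; there is no field for a resource group.
	CreateSnapshot(volumeID, volumeAZ string, tags map[string]string) (snapshotID string, err error)

	// DeleteSnapshot removes a snapshot by its provider-specific ID.
	DeleteSnapshot(snapshotID string) error

	// ...plus CreateVolumeFromSnapshot, SetVolumeID, and GetVolumeInfo.
}
```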
I have the same issue: my AKS cluster is in one resource group while all the nodes, and therefore the PVC disks, are in another. Velero keeps searching for the disks in the AKS cluster's resource group instead of the node resource group, even after I change the configured resource group.
Any news about this? It still occurs with Velero Helm chart 2.29.4 / Velero image 1.8.1 and the Velero plugin for Azure v1.4.1, and it makes Velero unable to back up dynamically provisioned volumes.
Same here, with the latest versions. Any plans to fix this within a foreseeable timeframe?
Hi guys, we have configured storage classes in different resource groups and the volumes are provisioned correctly, but during backup Velero uses only the one resource group that is configured in the Velero credentials.
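For illustration, a setup like the one described above typically pins each application's disks to a resource group via the StorageClass. The sketch below assumes the Azure disk CSI driver's `resourceGroup` parameter and uses placeholder names:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-premium-rg2        # placeholder name
provisioner: disk.csi.azure.com
parameters:
  skuName: Premium_LRS
  resourceGroup: my-app2-rg        # disks are created here instead of the node resource group
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```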
@ashish-amarnath Reading this again, I believe it's not about multiple credentials. With Azure, the cluster is in resource group "dev" and AKS automatically creates a second resource group like "MC_dev_myk8scluster_europewest" where it puts all the Kubernetes resources it creates in Azure (such as the managed disks for PVCs). So maybe the Azure plugin for Velero is at fault for searching for the disks in the wrong resource group.
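As an aside, the auto-generated node resource group can be read directly from the cluster, for example with the Azure CLI (names taken from the example above):

```sh
az aks show --resource-group dev --name myk8scluster --query nodeResourceGroup -o tsv
# MC_dev_myk8scluster_europewest
```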
However, this worked with Helm chart 2.15.0 and Azure plugin 1.1.0.
Running Azure plugin 1.1.0 and up with the most current Helm chart version 2.29.6 yields no errors, but instead the PVCs are skipped.
Ah, sorry. I had overlooked the documentation stating that you should use the generated resource group instead of the resource group your cluster is in.
Hi guys, to address this issue we may need to introduce a new version of the plugin interface, which in turn requires support for plugin versioning. Another option is to use the CSI plugin instead, which is going GA in v1.9.
@ywk253100 I think in the v1.10 timeframe we may verify whether the CSI plugin can solve the problem. If it can, we may consider closing this issue.
I have verified that this case can be resolved by taking a snapshot with the CSI plugin. For more information about how to use CSI, please refer to the documentation. Closing this issue.
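For anyone landing here later, a rough sketch of the CSI-based setup referred to above; plugin versions, container, and class names are placeholders, so check the Velero CSI documentation for the exact steps for your version:

```sh
# Install Velero with the Azure object-store plugin plus the CSI plugin,
# and enable the CSI feature flag.
velero install \
  --provider azure \
  --plugins velero/velero-plugin-for-microsoft-azure:v1.5.0,velero/velero-plugin-for-csi:v0.3.0 \
  --bucket <blob-container> \
  --backup-location-config resourceGroup=<backup-rg>,storageAccount=<storage-account> \
  --secret-file ./credentials-velero \
  --features=EnableCSI

# Label the Azure disk CSI driver's VolumeSnapshotClass so Velero selects it;
# snapshots are then taken per disk, regardless of which resource group the disk lives in.
kubectl label volumesnapshotclass <your-volumesnapshotclass> velero.io/csi-volumesnapshot-class="true"
```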
We have an AKS cluster with Velero installed in an enterprise setup.
On the cluster we have:
app1 with managed disk in ResourceGroup 1
app2 with managed disk in ResourceGroup 2
app3 with managed disk in ResourceGroup 3
All resource groups have the AKS service principal (SPN) as Contributor.
We have been poking around a little, but it seems that the resource group must be specified at velero install time.
Have we missed something? Is it possible to do all the configuration at backup time instead?
Or better, could the plugin look up which resource group the disks are located in by inspecting the PV objects?