
azurerm_site_recovery_replicated_vm wants to be replaced after updating provider to version 3.114.0 #26923

Open
enorlando opened this issue Aug 4, 2024 · 5 comments

Comments


enorlando commented Aug 4, 2024

Is there an existing issue for this?

  • I have searched the existing issues

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave comments along the lines of "+1", "me too" or "any updates", they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment and review the contribution guide to help.

Terraform Version

1.9.3

AzureRM Provider Version

3.114.0

Affected Resource(s)/Data Source(s)

azurerm_site_recovery_replicated_vm

Terraform Configuration Files

resource "azurerm_site_recovery_replicated_vm" "xx" {
  name                                      = "xx"
  resource_group_name                       = azurerm_resource_group.xx.name
  recovery_vault_name                       = azurerm_recovery_services_vault.xx.name
  source_recovery_fabric_name               = azurerm_site_recovery_fabric.xx.name
  source_vm_id                              = module.xx.vm_id
  recovery_replication_policy_id            = azurerm_site_recovery_replication_policy.xx.id
  source_recovery_protection_container_name = azurerm_site_recovery_protection_container.xx.name
  target_resource_group_id                  = azurerm_resource_group.xx.id
  target_recovery_fabric_id                 = azurerm_site_recovery_fabric.xx.id
  target_recovery_protection_container_id   = azurerm_site_recovery_protection_container.xx.id

  managed_disk = [
    {
      disk_id                       = azurerm_managed_disk.xx.id
      staging_storage_account_id    = azurerm_storage_account.xx.id
      target_disk_encryption        = []
      target_disk_encryption_set_id = azurerm_disk_encryption_set.xxe.id
      target_disk_type              = var.target-disk-type
      target_replica_disk_type      = var.target-replica-disk-type
      target_resource_group_id      = azurerm_resource_group.xx.id
    },
  ]

  network_interface {
    source_network_interface_id = module.xx.nic_id
    target_subnet_name          = azurerm_subnet.xx.name
    failover_test_subnet_name   = azurerm_subnet.xx.name
  }
}

Debug Output/Panic Output

# azurerm_site_recovery_replicated_vm.xx must be replaced
-/+ resource "azurerm_site_recovery_replicated_vm" "xx" {
      ~ id                                        = "/subscriptions/xx/resourceGroups/xx/providers/Microsoft.RecoveryServices/vaults/xx/replicationFabrics/xx/replicationProtectionContainers/xx/replicationProtectedItems/xx" -> (known after apply)
      ~ managed_disk                              = [
          - {
              - disk_id                       = "/subscriptions/xx/resourceGroups/xx/providers/Microsoft.Compute/disks/xx"
              - staging_storage_account_id    = "/subscriptions/xx/resourceGroups/xx/providers/Microsoft.Storage/storageAccounts/xx"
              - target_disk_encryption        = []
              - target_disk_encryption_set_id = "/subscriptions/xx/resourceGroups/xx/providers/Microsoft.Compute/diskEncryptionSets/xx"
              - target_disk_type              = "Premium_LRS"
              - target_replica_disk_type      = "Premium_LRS"
              - target_resource_group_id      = "/subscriptions/xx/resourceGroups/xx"
            },
        ]
        name                                      = "xx"
      ~ network_interface                         = [
          - {
              - failover_test_subnet_name          = "xx"
              - is_primary                         = false
              - source_network_interface_id        = "/subscriptions/xx/resourceGroups/xx/providers/Microsoft.Network/networkInterfaces/xx"
              - target_subnet_name                 = "xx"
                # (4 unchanged attributes hidden)
            },
          + {
              + failover_test_public_ip_address_id = (known after apply)
              + failover_test_static_ip            = (known after apply)
              + failover_test_subnet_name          = "xx"
              + is_primary                         = false
              + source_network_interface_id        = "/subscriptions/xx/resourceGroups/xx/providers/Microsoft.Network/networkInterfaces/xx"
              + target_subnet_name                 = "xx"
                # (2 unchanged attributes hidden)
            },
        ]
      ~ target_network_id                         = "/subscriptions/xx/resourceGroups/xx/providers/Microsoft.Network/virtualNetworks/xx" -> (known after apply)
      ~ test_network_id                           = "/subscriptions/xx/resourceGroups/xx/providers/Microsoft.Network/virtualNetworks/xx" -> (known after apply)
      + unmanaged_disk                            = (known after apply) # forces replacement
        # (17 unchanged attributes hidden)
    }

Expected Behaviour

No changes. Infrastructure is up to date

Actual Behaviour

The plan wants to recreate the managed_disk entries and the network_interface block, and it adds an unmanaged_disk attribute that forces replacement of the whole resource, even though we are only using managed disks.
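
As a temporary guard while this is open (not a fix, and untested against this exact diff), a lifecycle block can at least stop Terraform from destroying the protected item; whether ignore_changes also hides the spurious managed_disk/network_interface/unmanaged_disk diff is an assumption:

resource "azurerm_site_recovery_replicated_vm" "xx" {
  # ... existing arguments as in the configuration above ...

  lifecycle {
    # Make terraform apply fail instead of recreating the replicated VM.
    prevent_destroy = true

    # Untested assumption: ignoring these attributes may suppress the
    # spurious diff; it does not address the root cause of the bug.
    ignore_changes = [managed_disk, network_interface, unmanaged_disk]
  }
}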

Steps to Reproduce

  1. Pin the provider to version 3.114.0
  2. terraform init
  3. terraform plan

Reverting to the previous provider version 3.113.0 avoids the drift/destruction (see the version pin sketch below).
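
For reference, this is roughly the provider pin used for the rollback described above (assuming the standard hashicorp/azurerm registry source):

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      # Last version that does not produce the replacement plan above.
      version = "3.113.0"
    }
  }
}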

Important Factoids

No response

References

#26822

@enorlando (Author)

@rcskosir, have you had a chance to review this issue, which we are currently experiencing with provider version 3.114.0?


@enorlando (Author)

Hi @rcskosir, we have updated to the latest major release of the azurerm provider, 4.0.1, and we still have this issue. Could you please provide any insight into how we can get this fixed? Thanks

rcskosir added the v/4.x label Aug 29, 2024
@rcskosir (Contributor)

👋 @enorlando Thanks for reaching out; unfortunately, I do not have an ETA on this bug. Any future work by the team or the community should end up linked here via a PR.


enorlando commented Sep 2, 2024

@jackofallops Do you see any reason why we are seeing this behaviour? It seems to have started after PR #26822 was merged as part of the v3.114.0 milestone. Reverting to the previous version removes the behaviour. We are currently using v4.0.1 of the provider and are still experiencing the same issue. Removing the resource from state and re-importing it also gives the same result (see the commands below).
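
For context, the state removal and re-import mentioned above would look roughly like this (addresses and IDs redacted as elsewhere in this issue; the import ID follows the replicationProtectedItems ID shown in the plan output):

# Remove the replicated VM from state, then import it again under the same address.
terraform state rm azurerm_site_recovery_replicated_vm.xx
terraform import azurerm_site_recovery_replicated_vm.xx "/subscriptions/xx/resourceGroups/xx/providers/Microsoft.RecoveryServices/vaults/xx/replicationFabrics/xx/replicationProtectionContainers/xx/replicationProtectedItems/xx"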
