
azurerm_kusto_cluster_managed_private_endpoint creates an error when the Kusto cluster is in the process of maintenance #22010

Open
christiansalathe-art opened this issue Jun 1, 2023 · 2 comments

Comments


Is there an existing issue for this?

  • I have searched the existing issues

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform Version

1.2.3

AzureRM Provider Version

3.58.0

Affected Resource(s)/Data Source(s)

azurerm_kusto_cluster_managed_private_endpoint

Terraform Configuration Files

# this configuration is for illustration only and is not runnable as-is
resource "azurerm_kusto_cluster" "dataexplorer" {
  for_each                      = var.dec_multi
  provider                      = azurerm.multi
  name                          = each.value.adx_name
  resource_group_name           = each.value.adx_rg
  location                      = var.location
  engine                        = "V3"
  zones                         = [1,2]
  public_network_access_enabled = each.value.adx_public
  streaming_ingestion_enabled   = each.value.streaming_ingestion_enabled
  purge_enabled                 = each.value.adx_purge_enabled
  auto_stop_enabled             = each.value.adx_auto_stop_enabled
#  language_extensions           = [each.value.language_extensions] # doesn't work with cmk encryption
  disk_encryption_enabled       = true
  tags                          = var.tags_dec
  allowed_ip_ranges             = [each.value.iprange]

  sku {
    name     = each.value.adx_sku
    capacity = each.value.adx_capacity
  }

  identity {
    type = "SystemAssigned"
  }

}

resource "azurerm_kusto_cluster_managed_private_endpoint" "dec-managed-pe-blob" {
  for_each                     = { for k in compact([for k, v in var.dec_multi: v.managed_private_endpoint_blob ? k : ""]): k => var.dec_multi[k] } # k=key v=value
  provider                     = azurerm.multi
  name                         = "managed-pe-${azurerm_kusto_cluster.dataexplorer[each.key].name}-blob"
  resource_group_name          = each.value.adx_rg
  cluster_name                 = azurerm_kusto_cluster.dataexplorer[each.key].name
  private_link_resource_id     = each.value.managed_private_endpoint_resid
  group_id                     = "blob"
  request_message              = "Please approve"

    depends_on = [
      azurerm_private_endpoint.dec-pe,
      null_resource.rotation_policy
    ]
}


# time_sleep as a workaround for the Kusto cluster maintenance issue
resource "time_sleep" "wait_seconds" {
  depends_on = [azurerm_kusto_cluster_managed_private_endpoint.dec-managed-pe-blob]
  create_duration = "600s"
}

resource "azurerm_kusto_cluster_managed_private_endpoint" "dec-managed-pe-dfs" {
  for_each                     = { for k in compact([for k, v in var.dec_multi: v.managed_private_endpoint_dfs ? k : ""]): k => var.dec_multi[k] } # k=key v=value
  provider                     = azurerm.multi
  name                         = "managed-pe-${azurerm_kusto_cluster.dataexplorer[each.key].name}-dfs"
  resource_group_name          = each.value.adx_rg
  cluster_name                 = azurerm_kusto_cluster.dataexplorer[each.key].name
  private_link_resource_id     = each.value.managed_private_endpoint_resid
  group_id                     = "dfs"
  request_message              = "Please approve"

    depends_on = [
      time_sleep.wait_seconds
    ]
}

Debug Output/Panic Output

module.modul-dataexplorer.azurerm_kusto_cluster_managed_private_endpoint.dec-managed-pe-dfs["adx2"]: Still creating... [2m0s elapsed]
module.modul-dataexplorer.azurerm_kusto_cluster_managed_private_endpoint.dec-managed-pe-dfs["adx2"]: Creation complete after 2m2s [id=/subscriptions/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/resourceGroups/rg-bat-1/providers/Microsoft.Kusto/clusters/iiiiiiiiii/managedPrivateEndpoints/managed-pe-iiiiiiiiii-dfs]
╷
│ Error: creating/updating Managed Private Endpoint (Subscription: "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
│ Resource Group Name: "rg-bat-1"
│ Cluster Name: "iiiiiiiiii"
│ Managed Private Endpoint Name: "managed-pe-adxbat9t9hzj3c-blob"): polling after CreateOrUpdate: Code="ServiceIsInMaintenance" Message="[Conflict] Cluster 'adxbat9t9hzj3c' is in process of maintenance for a short period. You may retry to invoke the operation in a few minutes."
│
│   with module.modul-dataexplorer.azurerm_kusto_cluster_managed_private_endpoint.dec-managed-pe-blob["adx2"],
│   on .terraform/modules/modul-dataexplorer/resources.tf line 88, in resource "azurerm_kusto_cluster_managed_private_endpoint" "dec-managed-pe-blob":
│   88: resource "azurerm_kusto_cluster_managed_private_endpoint" "dec-managed-pe-blob" {
╵

Expected Behaviour

The azurerm_kusto_cluster_managed_private_endpoint resource should not throw an error when the Kusto cluster is in the process of maintenance.

Actual Behaviour

The provider throws an error when creating an azurerm_kusto_cluster_managed_private_endpoint while the Kusto cluster is in the process of maintenance.

A working workaround is a time_sleep between creating the first and the second azurerm_kusto_cluster_managed_private_endpoint.
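Distilled from the full configuration above, the workaround boils down to the following sketch (resource bodies abbreviated; the 600s value was enough in my environment, but the length of the maintenance window is not documented, so the duration is an assumption):

# Workaround sketch: serialize the second endpoint behind a fixed delay so
# its CreateOrUpdate starts only after the cluster's maintenance has finished.
resource "time_sleep" "wait_seconds" {
  depends_on      = [azurerm_kusto_cluster_managed_private_endpoint.dec-managed-pe-blob]
  create_duration = "600s" # assumption: long enough to outlast the maintenance window
}

resource "azurerm_kusto_cluster_managed_private_endpoint" "dec-managed-pe-dfs" {
  # same arguments as dec-managed-pe-blob, with group_id = "dfs"
  depends_on = [time_sleep.wait_seconds]
}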

Steps to Reproduce

Create more than one azurerm_kusto_cluster_managed_private_endpoint for an Azure Data Explorer (Kusto) cluster; a minimal repro sketch follows.
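
A minimal configuration sketch that reproduces the conflict (the resource names and the azurerm_kusto_cluster.example / azurerm_storage_account.example references are placeholders assumed to exist; the storage account only serves as a private link target):

# Minimal repro sketch: two managed private endpoints on the same cluster
# with no ordering between them. Terraform creates them in parallel and one
# of the two CreateOrUpdate calls fails with "ServiceIsInMaintenance".
resource "azurerm_kusto_cluster_managed_private_endpoint" "blob" {
  name                     = "managed-pe-blob"
  resource_group_name      = azurerm_kusto_cluster.example.resource_group_name
  cluster_name             = azurerm_kusto_cluster.example.name
  private_link_resource_id = azurerm_storage_account.example.id
  group_id                 = "blob"
  request_message          = "Please approve"
}

resource "azurerm_kusto_cluster_managed_private_endpoint" "dfs" {
  name                     = "managed-pe-dfs"
  resource_group_name      = azurerm_kusto_cluster.example.resource_group_name
  cluster_name             = azurerm_kusto_cluster.example.name
  private_link_resource_id = azurerm_storage_account.example.id
  group_id                 = "dfs"
  request_message          = "Please approve"
}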

Important Factoids

no

References

no

liuwuliuyun (Contributor) commented Jun 5, 2023

Hi @rcskosir, thanks for raising this issue. I am able to reproduce your problem locally and will try to work on a solution. Any suggestion from anyone on this issue is welcome. Since this is related to the Kusto API, the best workaround I can currently think of is the same time_sleep proposed in the description.

Ben-Duf commented Aug 29, 2023

Hi, I also have two azurerm_kusto_cluster_managed_private_endpoint resources. I must wait for the first one to finish before creating the second:

resource "azurerm_kusto_cluster_managed_private_endpoint" "dec-managed-pe-blob" {
...
}
resource "azurerm_kusto_cluster_managed_private_endpoint" "dec-managed-pe-dfs" {
   depends_on = [ 
      azurerm_kusto_cluster_managed_private_endpoint.dec-managed-pe-blob
   ]
}
