Bug Report
Describe the bug
Whenever changes are made to the (default) node pool autoscaling parameters within the digitalocean_kubernetes_cluster resource, Terraform loses all information about resources that were created within the cluster. This breaks the Terraform state to the point where everything needs to be destroyed and re-applied to fix it.
Say I have created a digitalocean_kubernetes_cluster resource and, alongside it, added several kubernetes resources within the same cluster using Terraform. If I then change the autoscaling parameters of the default node pool within the digitalocean_kubernetes_cluster resource, Terraform loses all information about those kubernetes resources and tries to apply them again, resulting in numerous "already exists" errors: the resources are still present on the cluster, Terraform has simply lost track of them.
(Note: the autoscaling changes themselves are applied correctly.)
Affected Resource(s)
digitalocean_kubernetes_cluster
Expected Behavior
Node pool autoscaling changes should be applied without causing terraform to lose information about other resources created within the cluster.
Actual Behavior
All the kubernetes resources created within the cluster are lost from the Terraform state.
Steps to Reproduce
Create a digitalocean_kubernetes_cluster
Add resources within the cluster using the kubernetes provider
Edit the digitalocean_kubernetes_cluster node pool autoscaling parameters and apply the changes
Terraform Configuration Files
resource "digitalocean_kubernetes_cluster" "primary" {
name = var.cluster_name
region = var.cluster_region
version = data.digitalocean_kubernetes_versions.current.latest_version
vpc_uuid = digitalocean_vpc.cluster_vpc.id
node_pool {
name = "${var.cluster_name}-node-pool"
size = var.worker_size
auto_scale = true
min_nodes = 1
max_nodes = var.max_worker_count ## Issue occurs when this value is changed and re-applied
tags = [local.cluster_id_tag]
}
}
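The kubernetes provider wiring is not included in the report; a minimal sketch of what step 2 above might look like, assuming the provider is configured from the cluster's exported attributes (the provider block and the example namespace are illustrative, not taken from the original configuration):

provider "kubernetes" {
  host  = digitalocean_kubernetes_cluster.primary.endpoint
  token = digitalocean_kubernetes_cluster.primary.kube_config[0].token
  cluster_ca_certificate = base64decode(
    digitalocean_kubernetes_cluster.primary.kube_config[0].cluster_ca_certificate
  )
}

# Any resource managed through this provider is affected, e.g.:
resource "kubernetes_namespace" "example" {
  metadata {
    name = "example"
  }
}

Because the provider configuration references attributes of the cluster resource, any plan that replaces the cluster leaves these values unknown until after apply.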
Additional context
Using Terraform 1.1.5, digitalocean provider 2.17.1, and kubernetes provider 2.8.0.
This is related to #424: Terraform loses the provider connection data when the node pool size changes. The plan wants to destroy and recreate the cluster, so at that point the connection data the kubernetes provider reads from the digitalocean_kubernetes_cluster resource is unknown (it will only be known after apply), and the Terraform run fails with an error.
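One possible mitigation (an assumption on my part, not something confirmed in this thread): keep the default node_pool block fixed and manage scaling through a separate digitalocean_kubernetes_node_pool resource, so autoscaling changes never touch the cluster resource itself. A sketch:

resource "digitalocean_kubernetes_node_pool" "workers" {
  cluster_id = digitalocean_kubernetes_cluster.primary.id
  name       = "${var.cluster_name}-workers"
  size       = var.worker_size
  auto_scale = true
  min_nodes  = 1
  max_nodes  = var.max_worker_count # changing this updates only the pool, not the cluster
}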