Description
Terraform Core Version
1.1.6
AWS Provider Version
5.53.0
Affected Resource(s)
- aws_rds_cluster
Expected Behavior
I should be able to create the aws_rds_cluster resource with the provided configuration.
Actual Behavior
Instead, the apply fails with the "Provider produced inconsistent final plan" error shown below.
I have also tried explicitly adding
enable_local_write_forwarding = null
or
enable_local_write_forwarding = false
to the resource (a sketch follows), but I still get the same error.
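For clarity, this is roughly what the attempted change looked like; everything else in the resource stayed exactly as in the full configuration reproduced further down (sketch only):

resource "aws_rds_cluster" "cluster" {
  # ... all other arguments unchanged from the configuration below ...

  # Attempted workaround: set the new argument explicitly instead of leaving it unset
  enable_local_write_forwarding = false # also tried null
}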
Relevant Error/Panic Output Snippet
╷
│ Error: Provider produced inconsistent final plan
│
│ When expanding the plan for
│ module.esazione_db_cluster_dr.aws_rds_cluster.cluster to include new values
│ learned so far during apply, provider "registry.terraform.io/hashicorp/aws"
│ produced an invalid new value for .enable_local_write_forwarding: was
│ cty.False, but now null.
│
│ This is a bug in the provider, which should be reported in the provider's
│ own issue tracker.
╵
Terraform Configuration Files
resource "aws_rds_cluster" "cluster" {
cluster_identifier = var.name_cluster
engine = "aurora-postgresql"
engine_version = var.engine_version
database_name = var.cross_region_replication_cluster ? null : substr(var.name_cluster, 1, -1)
master_username = x
master_password = x
db_subnet_group_name = aws_db_subnet_group.subnet_group.name
db_cluster_parameter_group_name = aws_rds_cluster_parameter_group.cluster_parameter_group.name
backup_retention_period = var.variable_per_environment[var.environment].backup_retantion
storage_encrypted = true
skip_final_snapshot = var.skip_final_snapshot
preferred_maintenance_window = var.preferred_maintenance_window
vpc_security_group_ids = var.aditional_security_group_id != "" ? [aws_security_group.security_group.id, var.aditional_security_group_id] : [aws_security_group.security_group.id]
enabled_cloudwatch_logs_exports = ["postgresql"]
deletion_protection = var.deletion_protection
apply_immediately = var.apply_immediately
allow_major_version_upgrade = var.allow_major_version_upgrade
global_cluster_identifier = var.global_cluster_identifier
kms_key_id = var.cross_region_replication_cluster ? aws_kms_key.replica[0].arn : null
lifecycle {
ignore_changes = [global_cluster_identifier]
}
replication_source_identifier = var.replication_source_identifier
domain = var.enable_kerberos ? var.domain_ad : null
domain_iam_role_name = var.enable_kerberos ? aws_iam_role.cluster_iam_role.name : null
}
Steps to Reproduce
- Deploy the resource with the provided configuration and the Terraform/provider versions listed above (a pared-down sketch follows).
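If it helps triage, this is a pared-down sketch of the setup I believe triggers the error. It is an assumption that the failure reproduces without the rest of the module, and every name, version, and referenced resource below is a placeholder rather than one of my real values:

resource "aws_rds_cluster" "repro" {
  # Hypothetical secondary/DR cluster joining an existing global cluster (placeholders throughout)
  cluster_identifier        = "repro-cluster-dr"
  engine                    = "aurora-postgresql"
  engine_version            = "15.4"                 # placeholder
  global_cluster_identifier = "repro-global-cluster" # placeholder
  db_subnet_group_name      = aws_db_subnet_group.subnet_group.name
  kms_key_id                = aws_kms_key.replica.arn
  storage_encrypted         = true
  skip_final_snapshot       = true

  # enable_local_write_forwarding intentionally not set, as in the original configuration
}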
Debug Output
No response
Panic Output
No response
Important Factoids
No response
References
No response
Would you like to implement a fix?
None