Description
Terraform Core Version
1.5.4
AWS Provider Version
4.67.0
Affected Resource(s)
aws_emr_instance_group
Expected Behavior
An aws_emr_instance_group configured with instance_count = 0 should create the instance group without launching any instances.
Actual Behavior
aws_emr_instance_group creates the instance group but also spins up an instance. As soon as that instance finishes provisioning, an autoscaling (resize) event terminates it to satisfy the instance_count = 0 configuration. This also makes the apply stage run much longer than necessary, because it waits for the instance to be created and then destroyed before the resource reaches its desired state and the apply completes.
Relevant Error/Panic Output Snippet
No response
Terraform Configuration Files
resource "aws_emr_cluster" "cluster" {
name = "emr-test-cluster"
...
}
resource "aws_emr_instance_group" "task_1" {
cluster_id = aws_emr_cluster.cluster.id
instance_count = 0
instance_type = "m5.xlarge"
bid_price = "0.5"
name = "config_1"
}
resource "aws_emr_instance_group" "task_2" {
cluster_id = aws_emr_cluster.cluster.id
instance_count = 0
instance_type = "m5.x2large"
bid_price = 1.0
name = "config_2"
}
Steps to Reproduce
Configure an EMR cluster with a number of aws_emr_instance_group resources using various instance types and instance_count = 0
terraform init
terraform apply
Debug Output
No response
Panic Output
No response
Important Factoids
We run EMR for structured streaming and our clusters are long running. For a number of reasons we cannot use instance fleets in this situation, so instead we run multiple instance groups of different instance types using Spot Instances. For non-prod environments we deploy the clusters with just master and core nodes and configure these task instance groups so they can scale up when jobs are deployed (a rough sketch of that pattern is included at the end of this section). This works fine most of the time, but we do see these task groups spinning up task instances at apply time (which we have to pay for, minimally admittedly), and occasionally the resize operation takes so long that our deployment pipelines time out.
I have raised this with AWS Support and they can see that the API calls are being executed as requested.
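For reference, the scale-from-zero pattern described above looks roughly like the following. This is a minimal sketch rather than our actual configuration: the resource name task_scaled, the scaling limits, the ContainerPendingRatio trigger, and the assumption that the cluster has an autoscaling_role configured are all illustrative; the autoscaling_policy JSON just follows the structure documented for aws_emr_instance_group.
resource "aws_emr_instance_group" "task_scaled" {
  cluster_id     = aws_emr_cluster.cluster.id
  instance_count = 0
  instance_type  = "m5.xlarge"
  bid_price      = "0.5"
  name           = "config_scaled"

  # Illustrative scale-out rule so the group can grow from zero when work arrives.
  autoscaling_policy = jsonencode({
    Constraints = {
      MinCapacity = 0
      MaxCapacity = 10
    }
    Rules = [
      {
        Name        = "scale-out-on-pending-containers"
        Description = "Add task nodes while YARN reports pending containers"
        Action = {
          SimpleScalingPolicyConfiguration = {
            AdjustmentType    = "CHANGE_IN_CAPACITY"
            ScalingAdjustment = 2
            CoolDown          = 300
          }
        }
        Trigger = {
          CloudWatchAlarmDefinition = {
            ComparisonOperator = "GREATER_THAN"
            EvaluationPeriods  = 1
            MetricName         = "ContainerPendingRatio"
            Namespace          = "AWS/ElasticMapReduce"
            Period             = 300
            Statistic          = "AVERAGE"
            Threshold          = 0.75
            Unit               = "COUNT"
            Dimensions = [
              {
                Key   = "JobFlowId"
                Value = aws_emr_cluster.cluster.id
              }
            ]
          }
        }
      }
    ]
  })
}
Even with a policy like this attached, the behaviour described under Actual Behavior still occurs: the initial apply briefly launches an instance before the group settles back to zero.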
References
No response
Would you like to implement a fix?
No