
Unable to delete cluster after using import_on_create for default worker pool #5598

Open
Aashiq-J opened this issue Sep 2, 2024 · 1 comment
Labels
service/Kubernetes Service — Issues related to Kubernetes Service

Comments

Aashiq-J (Contributor) commented Sep 2, 2024

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform CLI and Terraform IBM Provider Version

Terraform : 1.6
Provider : 1.68.1

Affected Resource(s)

  • ibm_container_vpc_worker_pool

Terraform Configuration Files

https://github.com/terraform-ibm-modules/terraform-ibm-base-ocp-vpc/tree/new-default/examples/advanced

Run the above example and then run terraform destroy; it fails with the error below.
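A minimal sketch of the failing pattern, assuming the example uses the provider's import_on_create argument to adopt the default worker pool (resource names and values here are illustrative, not the exact contents of the linked example):

```hcl
# Hypothetical sketch: the default worker pool is imported into Terraform
# state via import_on_create, so a later `terraform destroy` attempts to
# delete it, which the API rejects with E1437 when it is the only pool.
resource "ibm_container_vpc_cluster" "cluster" {
  name              = "example-cluster" # illustrative
  vpc_id            = var.vpc_id
  flavor            = "bx2.4x16"
  worker_count      = 2
  resource_group_id = var.resource_group_id

  zones {
    subnet_id = var.subnet_id
    name      = "us-south-1"
  }
}

resource "ibm_container_vpc_worker_pool" "default" {
  cluster          = ibm_container_vpc_cluster.cluster.id
  import_on_create = true # adopt the cluster's default worker pool
}
```

With no other worker pool defined, destroying this worker pool resource would drop the cluster below its minimum of 2 workers, producing the 400 response shown in the debug output.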

Debug Output

╷
│ Error: Request failed with status code: 400, ServerErrorResponse: {"incidentID":"812a6a0d-1d82-40f6-b1d9-e70971f3d1ff,812a6a0d-1d82-40f6-b1d9-e70971f3d1ff","code":"E1437","description":"The action cannot be completed because it would reduce the worker pool's size below the cluster's minimum requirement of 2 workers. If there are any worker operations pending, you might need wait for them to complete.","type":"BadRequest"}
│ 
│ ---
│ id: terraform-74d077da
│ summary: 'Request failed with status code: 400, ServerErrorResponse:
│ {"incidentID":"812a6a0d-1d82-40f6-b1d9-e70971f3d1ff,812a6a0d-1d82-40f6-b1d9-e70971f3d1ff","code":"E1437","description":"The
│   action cannot be completed because it would reduce the worker pool''s size below
│   the cluster''s minimum requirement of 2 workers. If there are any worker operations
│   pending, you might need wait for them to complete.","type":"BadRequest"}'
│ severity: error
│ resource: ibm_container_vpc_worker_pool
│ operation: delete
│ component:
│   name: github.com/IBM-Cloud/terraform-provider-ibm
│   version: 1.68.1
│ ---
│ 
╵

Panic Output

Expected Behavior

terraform destroy should complete successfully.

Actual Behavior

terraform destroy fails with the E1437 error shown above.

Steps to Reproduce

This error occurs only when the cluster has just the default worker pool and no additional worker pools.

  1. terraform apply
  2. terraform destroy

Important Factoids

References

  • #0000
github-actions bot added the service/Kubernetes Service label Sep 2, 2024
TwoDCube (Contributor) commented Sep 2, 2024

This is expected; you can't have a cluster with 0 worker nodes today.
https://cloud.ibm.com/docs/containers?topic=containers-faqs#smallest_cluster

Note that you can't have a cluster with 0 worker nodes, and you can't power off or suspend billing for your worker nodes.

If you wish to import the default worker pool, you can manually add a worker pool outside of Terraform, or remove one of the worker pools from the Terraform state before running the destroy.

I acknowledge this is not ideal, but right now this is the best you can do.
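The state-manipulation workaround above can be sketched as follows (the resource address is an assumption; check the actual address with `terraform state list` first):

```shell
# Find the state address of the imported default worker pool
# (the module path/resource name below is hypothetical):
terraform state list | grep ibm_container_vpc_worker_pool

# Remove it from state so Terraform no longer tries to delete it:
terraform state rm 'module.ocp_base.ibm_container_vpc_worker_pool.default'

# Destroy the rest of the stack; the default worker pool is removed
# along with the cluster itself by the IBM Cloud API.
terraform destroy
```

Note that `terraform state rm` only forgets the resource; it does not touch the real worker pool, which is why the subsequent cluster deletion can still succeed.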
