Terraform destroy for cloudscale.ch fails at the first time #21

Open
ccremer opened this issue Jul 19, 2021 · 0 comments
Labels
bug Something isn't working

Comments


ccremer commented Jul 19, 2021

With appuio/openshift4-docs#96 implemented, terraform destroy has to be run twice to complete.
The first run fails with an error like

module.cluster.cloudscale_subnet.privnet_subnet: Destroying... [id=f0c8cbf0-5d5e-48b8-9379-140cf19f7330]
module.cluster.cloudscale_server.lb[0]: Destroying... [id=6e8f4c10-00f3-45c8-99f2-67ffbd772ec6]
module.cluster.cloudscale_server.lb[1]: Destroying... [id=c42a7f61-eef3-4932-bc0c-9bb69d5b3d62]
module.cluster.cloudscale_server.lb[1]: Destruction complete after 7s
module.cluster.cloudscale_server.lb[0]: Still destroying... [id=6e8f4c10-00f3-45c8-99f2-67ffbd772ec6, 10s elapsed]
module.cluster.cloudscale_server.lb[0]: Destruction complete after 10s
module.cluster.cloudscale_server_group.lb[0]: Destroying... [id=c39aa9ae-5aaa-43ca-b6a2-9b99f992eedf]
module.cluster.null_resource.register_lb[1]: Destroying... [id=3075016582769078992]
module.cluster.null_resource.register_lb[0]: Destroying... [id=357206802485481663]
module.cluster.local_file.lb_hieradata[0]: Destroying... [id=643ab3342838dac0aaca5a9d30a947f9d9ea52a1]
module.cluster.null_resource.register_lb[0]: Destruction complete after 0s
module.cluster.null_resource.register_lb[1]: Destruction complete after 0s
module.cluster.local_file.lb_hieradata[0]: Destruction complete after 0s
module.cluster.random_id.lb[0]: Destroying... [id=hg]
module.cluster.cloudscale_floating_ip.nat_vip[0]: Destroying... [id=5.102.151.35]
module.cluster.random_id.lb[1]: Destroying... [id=HA]
module.cluster.gitfile_checkout.appuio_hieradata[0]: Destroying... [id=./appuio_hieradata]
module.cluster.cloudscale_floating_ip.router_vip[0]: Destroying... [id=5.102.150.209]
module.cluster.random_id.lb[0]: Destruction complete after 0s
module.cluster.random_id.lb[1]: Destruction complete after 0s
module.cluster.cloudscale_floating_ip.api_vip[0]: Destroying... [id=5.102.151.109]
module.cluster.cloudscale_server_group.lb[0]: Destruction complete after 2s
module.cluster.cloudscale_floating_ip.router_vip[0]: Destruction complete after 3s
module.cluster.cloudscale_floating_ip.api_vip[0]: Destruction complete after 4s
module.cluster.cloudscale_floating_ip.nat_vip[0]: Destruction complete after 4s

Error: Error while running git pull --ff-only origin: exit status 128
Working dir: ./appuio_hieradata
Output: From https://git.vshn.net/appuio/appuio_hieradata
   6e37293..7f44eca  master     -> origin/master
fatal: Not possible to fast-forward, aborting.

Error: Error deleting subnet f0c8cbf0-5d5e-48b8-9379-140cf19f7330: detail: There are still one or more interfaces in this subnet.
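The `git pull --ff-only` error above is what git reports when the local checkout and its upstream have diverged (each side has a commit the other lacks), so no fast-forward is possible. A minimal, self-contained reproduction with throwaway repositories (all paths, identities, and commit messages below are made up for the demo):

```shell
#!/bin/sh
# Minimal reproduction of the fast-forward failure: once the local
# checkout and its upstream each have a commit the other does not,
# `git pull --ff-only` aborts instead of creating a merge commit.
# All repositories and identities below are throwaway demo values.
set -eu
work=$(mktemp -d)
cd "$work"

git init -q --bare upstream.git
git clone -q upstream.git checkout
git -C checkout -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m "base"
git -C checkout push -q origin HEAD

# Someone else pushes a new commit to upstream...
git clone -q upstream.git other
git -C other -c user.email=dev@example.com -c user.name=dev \
    commit -q --allow-empty -m "remote change"
git -C other push -q origin HEAD

# ...while the local checkout also gains its own commit: histories diverge.
git -C checkout -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m "local change"

# The pull can no longer fast-forward and aborts with a fatal error.
branch=$(git -C checkout symbolic-ref --short HEAD)
if git -C checkout pull --ff-only origin "$branch" 2>pull.err; then
    echo "unexpected: fast-forward succeeded"
else
    status=$?
    echo "pull failed (exit status $status)"
fi
```

This matches the `fatal: Not possible to fast-forward, aborting.` line in the output above: the hieradata checkout apparently ends up with local commits while the upstream branch moves on.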

After running terraform destroy a second time, it plans to delete the following remaining resources:

Terraform will perform the following actions:

  # module.cluster.cloudscale_network.privnet will be destroyed
  - resource "cloudscale_network" "privnet" {
      - auto_create_ipv4_subnet = false -> null
      - href                    = "https://api.cloudscale.ch/v1/networks/9bf5f692-3a2a-429a-9caa-c3a2a83939d8" -> null
      - id                      = "9bf5f692-3a2a-429a-9caa-c3a2a83939d8" -> null
      - mtu                     = 9000 -> null
      - name                    = "privnet-c-falling-shadow-3833" -> null
      - subnets                 = [] -> null
      - zone_slug               = "rma1" -> null
    }

  # module.cluster.cloudscale_subnet.privnet_subnet will be destroyed
  - resource "cloudscale_subnet" "privnet_subnet" {
      - cidr            = "172.18.200.0/24" -> null
      - dns_servers     = [
          - "5.102.144.101",
          - "5.102.144.102",
        ] -> null
      - gateway_address = "172.18.200.1" -> null
      - href            = "https://api.cloudscale.ch/v1/subnets/f0c8cbf0-5d5e-48b8-9379-140cf19f7330" -> null
      - id              = "f0c8cbf0-5d5e-48b8-9379-140cf19f7330" -> null
      - network_href    = "https://api.cloudscale.ch/v1/networks/9bf5f692-3a2a-429a-9caa-c3a2a83939d8" -> null
      - network_name    = "privnet-c-falling-shadow-3833" -> null
      - network_uuid    = "9bf5f692-3a2a-429a-9caa-c3a2a83939d8" -> null
    }

  # module.cluster.gitfile_checkout.appuio_hieradata[0] will be destroyed
  - resource "gitfile_checkout" "appuio_hieradata" {
      - branch = "tf/lbaas/c-falling-shadow-3833" -> null
      - head   = "6e372938eb594ccd56988ca14866db4f8f74b507" -> null
      - id     = "./appuio_hieradata" -> null
      - path   = "./appuio_hieradata" -> null
      - repo   = "https://project_368_bot@git.vshn.net/appuio/appuio_hieradata.git" -> null
    }

Plan: 0 to add, 0 to change, 3 to destroy.

I suspect that destroying the private subnet fails when a VM attached to it is being destroyed at the same time.

Workaround

Invoke terraform destroy twice; the second run successfully removes the remaining resources.
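The double invocation can be scripted. A hedged sketch: the `retry_once` helper and the `flaky` stub below are hypothetical names invented for illustration, and terraform itself is not invoked here; in practice you would pass the real command (e.g. `terraform destroy -auto-approve`) as the arguments.

```shell
#!/bin/sh
# Sketch of automating the workaround: run a command, and if the first
# attempt fails, run it once more. `retry_once` and `flaky` are
# hypothetical names; terraform itself is not invoked in this demo.
cd "$(mktemp -d)"

retry_once() {
    "$@" && return 0
    echo "first attempt failed; retrying once..." >&2
    "$@"
}

# Stub that fails on its first call and succeeds on the second,
# mimicking the destroy behaviour described above.
flaky() {
    if [ -e .flaky_done ]; then
        rm -f .flaky_done
        return 0
    fi
    touch .flaky_done
    return 1
}

retry_once flaky && echo "succeeded after retry"  # prints "succeeded after retry"
```

In CI this would be invoked as `retry_once terraform destroy -auto-approve`; it only papers over the symptom, though, since the underlying destroy ordering of subnet vs. server interfaces remains wrong.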

@ccremer ccremer transferred this issue from appuio/openshift4-docs Jul 20, 2021
@ccremer ccremer added the bug Something isn't working label Jul 20, 2021