can't delete deployments #763

Closed
rawdaGastan opened this issue Oct 8, 2023 · 4 comments
Labels
type_bug Something isn't working

Comments

@rawdaGastan
Collaborator

Some nodes go down, and then the plugin fails while deleting their deployments:

  • we need to report down nodes regularly (a rough status-check sketch follows this list)
  • there shouldn't be contract billing for a deployment on a node that is down
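For illustration only (not from this issue): regularly reporting down nodes could be done by polling each node's status before touching its deployments. Below is a minimal sketch in Go, assuming the public GridProxy HTTP API at https://gridproxy.grid.tf/nodes/<id> returns a JSON body with a status field; the endpoint, field name, and status values are assumptions, not taken from this issue.

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// nodeStatus mirrors only the field this sketch needs from the (assumed)
// GridProxy response; the real payload contains many more fields.
type nodeStatus struct {
	Status string `json:"status"` // e.g. "up" or "down" (assumed values)
}

// isNodeUp queries the assumed GridProxy endpoint for a single node.
func isNodeUp(client *http.Client, nodeID uint32) (bool, error) {
	url := fmt.Sprintf("https://gridproxy.grid.tf/nodes/%d", nodeID) // assumed endpoint
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	var ns nodeStatus
	if err := json.NewDecoder(resp.Body).Decode(&ns); err != nil {
		return false, err
	}
	return ns.Status == "up", nil
}

func main() {
	client := &http.Client{Timeout: 10 * time.Second}
	nodes := []uint32{5017, 6019, 5591} // node IDs from this issue, used here only as examples
	for _, id := range nodes {
		up, err := isNodeUp(client, id)
		if err != nil {
			fmt.Printf("node %d: could not check status: %v\n", id, err)
			continue
		}
		fmt.Printf("node %d up: %v\n", id, up)
	}
}

Deployments on nodes reported as down could then be flagged to the user rather than billed.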
@s-areal

s-areal commented Oct 9, 2023

If you have nodes running for a while, you know you have to keep an eye on whether they are still up.
I have been running nodes for months, and maybe once a week I have to delete some VMs because their nodes are no longer available or I cannot establish a connection via Ygg.
My normal procedure is to delete those nodes (which I assume were shut down by the farmer or are offline) from main.tf and run a terraform apply to remove them and avoid being billed.
The problem is that I am having trouble deleting these nodes with Terraform. I am using:

terraform {
  required_providers {
    grid = {
      source  = "threefoldtech/grid"
      version = "1.9.2"
    }
  }
}

I will try to summarize the problem and what is happening, since it's hard to report.
Here is an example:

  1. I have a total of 64 different nodes in main.tf.
  2. I want to delete node 5017. I remove this node from main.tf and apply.
  3. The VM on node 5017 is deleted. Correct.
  4. I delete nodes 6019 and 5591 from main.tf and apply.

    │ Error: Plugin did not respond

    │ The plugin encountered an error, and failed to respond to the plugin.(*GRPCProvider).ApplyResourceChange call. The plugin logs may contain more details.

Stack trace from the terraform-provider-grid_v1.9.2 plugin:

fatal error: concurrent map read and map write

goroutine 59 [running]:
runtime.throw({0x1a44f48?, 0x18bbf40?})
runtime/panic.go:992 +0x71 fp=0xc000a1a9c8 sp=0xc000a1a998 pc=0x1032f91
runtime.mapaccess2_faststr(0x104d6a8?, 0x302e34?, {0xc000630300, 0x12})
runtime/map_faststr.go:117 +0x3d4 fp=0xc000a1aa30 sp=0xc000a1a9c8 pc=0x1012954
github.com/threefoldtech/tfgrid-sdk-go/grid-client/state.NetworkState.GetNetwork(...)
github.com/threefoldtech/tfgrid-sdk-go/grid-client@v0.11.1/state/network_state.go:31
github.com/threefoldtech/tfgrid-sdk-go/grid-client/state.NetworkState.UpdateNetworkSubnets(0x195ea00?, {0xc000630300, 0x12}, 0x12?)
github.com/threefoldtech/tfgrid-sdk-go/grid-client@v0.11.1/state/network_state.go:40 +0x65 fp=0xc000a1ab68 sp=0xc000a1aa30 pc=0x17bf0e5
github.com/threefoldtech/terraform-provider-grid/internal/provider.updateNetworkLocalState(...)
github.com/threefoldtech/terraform-provider-grid/internal/provider/resource_network.go:276
github.com/threefoldtech/terraform-provider-grid/internal/provider.storeState(0xc0004d87d0?, 0xc0004d8700, 0xc000a920e0)
github.com/threefoldtech/terraform-provider-grid/internal/provider/resource_network.go:218 +0x605 fp=0xc000a1adf8 sp=0xc000a1ab68 pc=0x1822dc5
github.com/threefoldtech/terraform-provider-grid/internal/provider.resourceNetworkDelete({0x1c6b028, 0xc00066c900}, 0x0?, {0x1929940?, 0xc0004d8700?})
github.com/threefoldtech/terraform-provider-grid/internal/provider/resource_network.go:389 +0x1c8 fp=0xc000a1ae70 sp=0xc000a1adf8 pc=0x1824228
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).delete(0xc000239420, {0x1c6b060, 0xc00062bcb0}, 0xd?, {0x1929940, 0xc0004d8700})
github.com/hashicorp/terraform-plugin-sdk/v2@v2.29.0/helper/schema/resource.go:829 +0x12e fp=0xc000a1aee8 sp=0xc000a1ae70 pc=0x16a776e
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).Apply(0xc000239420, {0x1c6b060, 0xc00062bcb0}, 0xc000299ba0, 0xc0000f1c00, {0x1929940, 0xc0004d8700})
github.com/hashicorp/terraform-plugin-sdk/v2@v2.29.0/helper/schema/resource.go:878 +0x609 fp=0xc000a1b268 sp=0xc000a1aee8 pc=0x16a7ec9
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*GRPCProviderServer).ApplyResourceChange(0xc0004534d0, {0x1c6b060?, 0xc00062bb90?}, 0xc0004f4ff0)
github.com/hashicorp/terraform-plugin-sdk/v2@v2.29.0/helper/schema/grpc_provider.go:1060 +0xe3c fp=0xc000a1b508 sp=0xc000a1b268 pc=0x16a09fc
github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server.(*server).ApplyResourceChange(0xc00046c8c0, {0x1c6b060?, 0xc00062b380?}, 0xc0005b07e0)
github.com/hashicorp/terraform-plugin-go@v0.19.0/tfprotov5/tf5server/server.go:859 +0x574 fp=0xc000a1b9d8 sp=0xc000a1b508 pc=0x1556fb4
github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5._Provider_ApplyResourceChange_Handler({0x19f3fe0?, 0xc00046c8c0}, {0x1c6b060, 0xc00062b380}, 0xc0005b0690, 0x0)
github.com/hashicorp/terraform-plugin-go@v0.19.0/tfprotov5/internal/tfplugin5/tfplugin5_grpc.pb.go:467 +0x170 fp=0xc000a1ba30 sp=0xc000a1b9d8 pc=0x153b5f0
google.golang.org/grpc.(*Server).processUnaryRPC(0xc0001d81e0, {0x1c6e6a0, 0xc000082680}, 0xc00056d7a0, 0xc000464ed0, 0x21f52d8, 0x0)
google.golang.org/grpc@v1.57.0/server.go:1360 +0xe13 fp=0xc000a1be48 sp=0xc000a1ba30 pc=0x149db33
google.golang.org/grpc.(*Server).handleStream(0xc0001d81e0, {0x1c6e6a0, 0xc000082680}, 0xc00056d7a0, 0x0)
google.golang.org/grpc@v1.57.0/server.go:1737 +0xa1b fp=0xc000a1bf68 sp=0xc000a1be48 pc=0x14a2b5b
google.golang.org/grpc.(*Server).serveStreams.func1.1()
google.golang.org/grpc@v1.57.0/server.go:982 +0x98 fp=0xc000a1bfe0 sp=0xc000a1bf68 pc=0x149b498
runtime.goexit()
runtime/asm_amd64.s:1571 +0x1 fp=0xc000a1bfe8 sp=0xc000a1bfe0 pc=0x1063501
created by google.golang.org/grpc.(*Server).serveStreams.func1
google.golang.org/grpc@v1.57.0/server.go:980 +0x18c

goroutine 1 [select]:
github.com/hashicorp/go-plugin.Serve(0xc000102c60)
github.com/hashicorp/go-plugin@v1.5.1/server.go:474 +0x1477
github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server.Serve({0x1a2c7c3, 0x8}, 0xc0002b5660, {0x0, 0x0, 0x0})
github.com/hashicorp/terraform-plugin-go@v0.19.0/tfprotov5/tf5server/server.go:315 +0xb8a
github.com/hashicorp/terraform-plugin-sdk/v2/plugin.tf5serverServe(0xc000102c00)
github.com/hashicorp/terraform-plugin-sdk/v2@v2.29.0/plugin/serve.go:188 +0x518
github.com/hashicorp/terraform-plugin-sdk/v2/plugin.Serve(0xc000102c00)
github.com/hashicorp/terraform-plugin-sdk/v2@v2.29.0/plugin/serve.go:128 +0x18d
main.main()
github.com/threefoldtech/terraform-provider-grid/main.go:57 +0x238

goroutine 35 [select]:
github.com/hashicorp/go-plugin.(*gRPCBrokerServer).Recv(0x1a68700?)
github.com/hashicorp/go-plugin@v1.5.1/grpc_broker.go:125 +0x67
github.com/hashicorp/go-plugin.(*GRPCBroker).Run(0xc000161500)
github.com/hashicorp/go-plugin@v1.5.1/grpc_broker.go:437 +0x44
created by github.com/hashicorp/go-plugin.(*GRPCServer).Init
github.com/hashicorp/go-plugin@v1.5.1/grpc_server.go:88 +0x4f6

goroutine 36 [IO wait]:
internal/poll.runtime_pollWait(0x29a99850, 0x72)
runtime/netpoll.go:302 +0x89
internal/poll.(*pollDesc).wait(0xc0001030e0?, 0xc0000bc000?, 0x1)
internal/poll/fd_poll_runtime.go:83 +0x32
internal/poll.(*pollDesc).waitRead(...)
internal/poll/fd_poll_runtime.go:88
internal/poll.(*FD).Read(0xc0001030e0, {0xc0000bc000, 0x1000, 0x1000})
internal/poll/fd_unix.go:167 +0x25a
os.(*File).read(...)
os/file_posix.go:31
os.(*File).Read(0xc000100c68, {0xc0000bc000?, 0x400?, 0x18bf800?})
os/file.go:119 +0x5e
bufio.(*Reader).Read(0xc00010b740, {0xc0000be000, 0x400, 0x0?})
bufio/bufio.go:236 +0x1b4
github.com/hashicorp/go-plugin.copyChan({0x1c6fdd0, 0xc000267860}, 0x0?, {0x1c66320?, 0xc000100c68?})
github.com/hashicorp/go-plugin@v1.5.1/grpc_stdio.go:184 +0x1f6
created by github.com/hashicorp/go-plugin.newGRPCStdioServer
github.com/hashicorp/go-plugin@v1.5.1/grpc_stdio.go:40 +0xf5

goroutine 37 [IO wait]:
internal/poll.runtime_pollWait(0x29a99670, 0x72)
runtime/netpoll.go:302 +0x89
internal/poll.(*pollDesc).wait(0xc0001031a0?, 0xc0000bd000?, 0x1)
internal/poll/fd_poll_runtime.go:83 +0x32
internal/poll.(*pollDesc).waitRead(...)
internal/poll/fd_poll_runtime.go:88
internal/poll.(*FD).Read(0xc0001031a0, {0xc0000bd000, 0x1000, 0x1000})
internal/poll/fd_unix.go:167 +0x25a
os.(*File).read(...)
os/file_posix.go:31
os.(*File).Read(0xc000100c78, {0xc0000bd000?, 0x400?, 0x18bf800?})
os/file.go:119 +0x5e
bufio.(*Reader).Read(0xc00010bf40, {0xc0000be400, 0x400, 0x0?})
bufio/bufio.go:236 +0x1b4
github.com/hashicorp/go-plugin.copyChan({0x1c6fdd0, 0xc000267860}, 0x0?, {0x1c66320?, 0xc000100c78?})
github.com/hashicorp/go-plugin@v1.5.1/grpc_stdio.go:184 +0x1f6
created by github.com/hashicorp/go-plugin.newGRPCStdioServer
github.com/hashicorp/go-plugin@v1.5.1/grpc_stdio.go:41 +0x185

goroutine 19 [syscall]:
os/signal.signal_recv()
runtime/sigqueue.go:148 +0x28

Error: The terraform-provider-grid_v1.9.2 plugin crashed!

This is always indicative of a bug within the plugin. It would be immensely
helpful if you could report the crash with the plugin's maintainers so that it
can be fixed. The output above should help diagnose the issue.
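For context on the trace above: the fatal error is a concurrent map read and map write inside NetworkState (GetNetwork reading the map while UpdateNetworkSubnets writes to it), which can happen when Terraform deletes several resources in parallel. Below is a minimal sketch, not the provider's actual code, of how a shared map like this can be guarded with a sync.RWMutex.

package main

import (
	"fmt"
	"sync"
)

// networkState is a minimal stand-in for the provider's per-network state map.
// The real type lives in grid-client's state package; this sketch only
// illustrates the concurrency issue, not the actual implementation.
type networkState struct {
	mu       sync.RWMutex
	networks map[string][]string // network name -> subnets (illustrative)
}

// getNetwork reads from the map under a read lock, so concurrent readers
// are allowed but never overlap with a writer.
func (s *networkState) getNetwork(name string) []string {
	s.mu.RLock()
	defer s.mu.RUnlock()
	return s.networks[name]
}

// updateSubnets writes to the map under the write lock. Without the lock,
// two resource deletions running in parallel (as Terraform does by default)
// can trigger "fatal error: concurrent map read and map write".
func (s *networkState) updateSubnets(name string, subnets []string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.networks[name] = subnets
}

func main() {
	s := &networkState{networks: map[string][]string{}}
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func(i int) { // simulate parallel deletes touching shared state
			defer wg.Done()
			s.updateSubnets("net", []string{fmt.Sprintf("10.1.%d.0/24", i)})
			_ = s.getNetwork("net")
		}(i)
	}
	wg.Wait()
	fmt.Println("done:", s.getNetwork("net"))
}

Until something like this lands in the provider, running the delete with -parallelism=1 may reduce the chance of hitting the race, since resources are then destroyed one at a time.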

The plugin crashes.
But if I run terraform output, those nodes are no longer present in the output.

  5. Even so, if I run terraform apply again, I can see in the output:

grid_network.netcloudnewUK_6019 will be destroyed

(because grid_network.netcloudnewUK_6019 is not in configuration)

  • resource "grid_network" "netcloudnewUK_6019" {
    • access_wg_config = <<-EOT
      (...)

grid_network.netcloudnewUK_5591 will be destroyed

(because grid_network.netcloudnewUK_5591 is not in configuration)

  • resource "grid_network" "netcloudnewUK_5591" {
    • add_wg_access = true -> null
    • description = "SomeNetwork" -> null
    • external_ip = "10.32.2.0/24" -> null
      (....)

@s-areal

s-areal commented Oct 9, 2023

I must add that when you try to deploy to a large number of nodes, about 5-10% of them just fail, even after retrying terraform apply a couple of times. When this happens, I simply remove those nodes from main.tf and re-apply. These nodes clearly have problems or are offline even though they are tagged as online, so it's not possible to deploy to them.
I found out that some of these nodes, which I had tried to deploy to before, are still showing up in the Terraform output alongside the node I just tried to delete. Please see below.

As far as I remember, these nodes were added after the update to 1.9.2.

│ Warning: Error reading data from remote, terraform state might be out of sync with the remote state

│ failed to get deployment objects: failed to get deployment 48182 of node 6019: context deadline exceeded


│ Warning: Error reading data from remote, terraform state might be out of sync with the remote state

│ failed to get deployment objects: failed to get deployment 45913 of node 1264: context deadline exceeded


│ Warning: Error reading data from remote, terraform state might be out of sync with the remote state

│ failed to get deployment objects: failed to get deployment 45916 of node 6018: context deadline exceeded


│ Warning: Error reading data from remote, terraform state might be out of sync with the remote state

│ failed to get deployment objects: failed to get deployment 48223 of node 6017: context deadline exceeded
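
For reference, "context deadline exceeded" is the standard Go error returned when a request's context times out before the node answers, which is why it appears once per unreachable node above. Below is a minimal sketch of that mechanism; the address, timeout, and plain HTTP call are illustrative only (the real grid client talks to nodes over RMB, not HTTP).

package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// A short deadline for the whole request, similar in spirit to the
	// timeouts applied when fetching deployment objects from a node.
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	// 192.0.2.1 is a TEST-NET address that will never answer, standing in
	// for an offline node.
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, "http://192.0.2.1/deployment", nil)
	if err != nil {
		panic(err)
	}
	_, err = http.DefaultClient.Do(req)
	fmt.Println(err) // wraps "context deadline exceeded" once the 2s budget is spent
}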

@s-areal

s-areal commented Oct 9, 2023

I have another example with a basic command: terraform destroy of 3 VMs. I cannot access any of the VMs. Here is the full log of trying to destroy them:

terraform output
ygg_ip_0174 = "300:7f14:f56:3a66:4c51:3632:2002:bb6c"
ygg_ip_2519 = "300:a760:a994:f5c0:a801:a74c:1974:8d6e"
ygg_ip_2719 = "302:398a:cf61:e7d6:b8a0:e377:37fd:7080"
➜ au ls
create5.sh main.tf state.json terraform.tfstate terraform.tfstate.backup
➜ au terraform destroy -parallelism=20 -auto-approve -var-file=/Users/simaoareal/terra/env/env.tfvars
grid_network.netcloudAU_0174: Refreshing state... [id=acd401f6-4727-4e80-be79-0b05d106ea29]
grid_network.netcloudAU_2719: Refreshing state... [id=c0326f67-ec1f-4cf1-afc6-7ab9e94abec4]
grid_network.netcloudAU_2519: Refreshing state... [id=3adc5a62-296d-4b08-a80d-fe59a2673aec]
grid_deployment._2719: Refreshing state... [id=39678]
grid_deployment._0174: Refreshing state... [id=39684]
grid_deployment._2519: Refreshing state... [id=39689]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:

  • destroy

Terraform will perform the following actions:

grid_deployment._0174 will be destroyed

  • resource "grid_deployment" "_0174" {
    • id = "39684" -> null

    • ip_range = "10.32.2.0/24" -> null

    • name = "vm" -> null

    • network_name = "netcloudAU_0174" -> null

    • node = 174 -> null

    • solution_provider = 0 -> null

    • solution_type = "Virtual Machine" -> null

    • disks {

      • name = "data" -> null
      • size = 8 -> null
        }
    • vms {

      • corex = false -> null

      • cpu = 1 -> null

      • description = "VM0174" -> null

      • env_vars = {

        • "SSH_KEY" = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDPd+6GfEw3wCIb/QlIQ+HBL6IuJfMc9CS1qg8h+C4qlTyQQyTKl+hsGXYiDd6gP33nG7sTgI9Y/WrjfCzRI+REqvjp3o/+VUE6K2MjOb1z+Zsl0w2/iBSuCoe+jieIWPzQQUbyCHdBPo21feVm5vWuBzVMXbuuSeI2ZZqCmUK75D85pxg60pEdEOvEKdH96Tgnb5d3jCU0YvrUcQy7d4D299PGYgjh69i2xfHPPDGOEufXGiWKwKLUEYb2/pRcgPLxXyKs6YwVxqRvBrSWt1kPpKybl8iSed1QRBZdk6sTYe9pOk6CRluQgrnWkNR8CyM3/t3q0p+E9uetbnL1r4WEu6nRucXNeEEob5VFuof72ny5As02z7gUOCVKgVkRKu57ZhdcWizS2LmUAL1prJBl75LiQNMpXEX3RXtFOaHV+bvBkrjXqbKpJKYHkPSXj7LtNQm+7nHSZ4jEA8P0OJ1W03TOYIJsSsJpJ4+PTBa/jlKTMUMuX+5SO2Oh5rdD05k= simaoareal@Mini-de-Simao.lan"
          } -> null
      • flist = "https://hub.grid.tf/tf-official-vms/ubuntu-22.04-lts.flist" -> null

      • ip = "10.32.2.2" -> null

      • memory = 768 -> null

      • name = "cloudxAU0174" -> null

      • planetary = true -> null

      • publicip = false -> null

      • publicip6 = false -> null

      • rootfs_size = 0 -> null

      • ygg_ip = "300:7f14:f56:3a66:4c51:3632:2002:bb6c" -> null

      • zlogs = [] -> null

      • mounts {

        • disk_name = "data" -> null
        • mount_point = "/data" -> null
          }
          }
          }

grid_deployment._2519 will be destroyed

  • resource "grid_deployment" "_2519" {
    • id = "39689" -> null

    • ip_range = "10.32.2.0/24" -> null

    • name = "vm" -> null

    • network_name = "netcloudAU_2519" -> null

    • node = 2519 -> null

    • solution_provider = 0 -> null

    • solution_type = "Virtual Machine" -> null

    • disks {

      • name = "data" -> null
      • size = 8 -> null
        }
    • vms {

      • corex = false -> null

      • cpu = 1 -> null

      • description = "VM2519" -> null

      • env_vars = {

        • "SSH_KEY" = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDPd+6GfEw3wCIb/QlIQ+HBL6IuJfMc9CS1qg8h+C4qlTyQQyTKl+hsGXYiDd6gP33nG7sTgI9Y/WrjfCzRI+REqvjp3o/+VUE6K2MjOb1z+Zsl0w2/iBSuCoe+jieIWPzQQUbyCHdBPo21feVm5vWuBzVMXbuuSeI2ZZqCmUK75D85pxg60pEdEOvEKdH96Tgnb5d3jCU0YvrUcQy7d4D299PGYgjh69i2xfHPPDGOEufXGiWKwKLUEYb2/pRcgPLxXyKs6YwVxqRvBrSWt1kPpKybl8iSed1QRBZdk6sTYe9pOk6CRluQgrnWkNR8CyM3/t3q0p+E9uetbnL1r4WEu6nRucXNeEEob5VFuof72ny5As02z7gUOCVKgVkRKu57ZhdcWizS2LmUAL1prJBl75LiQNMpXEX3RXtFOaHV+bvBkrjXqbKpJKYHkPSXj7LtNQm+7nHSZ4jEA8P0OJ1W03TOYIJsSsJpJ4+PTBa/jlKTMUMuX+5SO2Oh5rdD05k= simaoareal@Mini-de-Simao.lan"
          } -> null
      • flist = "https://hub.grid.tf/tf-official-vms/ubuntu-22.04-lts.flist" -> null

      • ip = "10.32.2.2" -> null

      • memory = 768 -> null

      • name = "cloudxAU2519" -> null

      • planetary = true -> null

      • publicip = false -> null

      • publicip6 = false -> null

      • rootfs_size = 0 -> null

      • ygg_ip = "300:a760:a994:f5c0:a801:a74c:1974:8d6e" -> null

      • zlogs = [] -> null

      • mounts {

        • disk_name = "data" -> null
        • mount_point = "/data" -> null
          }
          }
          }

grid_deployment._2719 will be destroyed

  • resource "grid_deployment" "_2719" {
    • id = "39678" -> null

    • ip_range = "10.32.2.0/24" -> null

    • name = "vm" -> null

    • network_name = "netcloudAU_2719" -> null

    • node = 2719 -> null

    • solution_provider = 0 -> null

    • solution_type = "Virtual Machine" -> null

    • disks {

      • name = "data" -> null
      • size = 8 -> null
        }
    • vms {

      • corex = false -> null

      • cpu = 1 -> null

      • description = "VM2719" -> null

      • env_vars = {

        • "SSH_KEY" = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDPd+6GfEw3wCIb/QlIQ+HBL6IuJfMc9CS1qg8h+C4qlTyQQyTKl+hsGXYiDd6gP33nG7sTgI9Y/WrjfCzRI+REqvjp3o/+VUE6K2MjOb1z+Zsl0w2/iBSuCoe+jieIWPzQQUbyCHdBPo21feVm5vWuBzVMXbuuSeI2ZZqCmUK75D85pxg60pEdEOvEKdH96Tgnb5d3jCU0YvrUcQy7d4D299PGYgjh69i2xfHPPDGOEufXGiWKwKLUEYb2/pRcgPLxXyKs6YwVxqRvBrSWt1kPpKybl8iSed1QRBZdk6sTYe9pOk6CRluQgrnWkNR8CyM3/t3q0p+E9uetbnL1r4WEu6nRucXNeEEob5VFuof72ny5As02z7gUOCVKgVkRKu57ZhdcWizS2LmUAL1prJBl75LiQNMpXEX3RXtFOaHV+bvBkrjXqbKpJKYHkPSXj7LtNQm+7nHSZ4jEA8P0OJ1W03TOYIJsSsJpJ4+PTBa/jlKTMUMuX+5SO2Oh5rdD05k= simaoareal@Mini-de-Simao.lan"
          } -> null
      • flist = "https://hub.grid.tf/tf-official-vms/ubuntu-22.04-lts.flist" -> null

      • ip = "10.32.2.2" -> null

      • memory = 768 -> null

      • name = "cloudxAU2719" -> null

      • planetary = true -> null

      • publicip = false -> null

      • publicip6 = false -> null

      • rootfs_size = 0 -> null

      • ygg_ip = "302:398a:cf61:e7d6:b8a0:e377:37fd:7080" -> null

      • zlogs = [] -> null

      • mounts {

        • disk_name = "data" -> null
        • mount_point = "/data" -> null
          }
          }
          }

grid_network.netcloudAU_0174 will be destroyed

  • resource "grid_network" "netcloudAU_0174" {
    • add_wg_access = false -> null
    • description = "SomeNetwork" -> null
    • external_sk = "QNkfILaFhtfshRXgNVsPAc9IfII4ErLOTU2YpyKRKE0=" -> null
    • id = "acd401f6-4727-4e80-be79-0b05d106ea29" -> null
    • ip_range = "10.32.0.0/16" -> null
    • name = "netcloudAU_0174" -> null
    • node_deployment_id = {
      • "174" = 39656
        } -> null
    • nodes = [
      • 174,
        ] -> null
    • nodes_ip_range = {
      • "174" = "10.32.2.0/24"
        } -> null
    • public_node_id = 0 -> null
    • solution_type = "Network" -> null
      }

grid_network.netcloudAU_2519 will be destroyed

  • resource "grid_network" "netcloudAU_2519" {
    • add_wg_access = false -> null
    • description = "SomeNetwork" -> null
    • external_sk = "6KDcc/rt68G9LMZAW4akr3o+q/rHx3l/BeUpwx83UHs=" -> null
    • id = "3adc5a62-296d-4b08-a80d-fe59a2673aec" -> null
    • ip_range = "10.32.0.0/16" -> null
    • name = "netcloudAU_2519" -> null
    • node_deployment_id = {
      • "2519" = 39686
        } -> null
    • nodes = [
      • 2519,
        ] -> null
    • nodes_ip_range = {
      • "2519" = "10.32.2.0/24"
        } -> null
    • public_node_id = 0 -> null
    • solution_type = "Network" -> null
      }

grid_network.netcloudAU_2719 will be destroyed

  • resource "grid_network" "netcloudAU_2719" {
    • add_wg_access = false -> null
    • description = "SomeNetwork" -> null
    • external_sk = "uM/yLTRmIGFqOdC4cF3LUkJRJKv4W5obZZBItzfF0kY=" -> null
    • id = "c0326f67-ec1f-4cf1-afc6-7ab9e94abec4" -> null
    • ip_range = "10.32.0.0/16" -> null
    • name = "netcloudAU_2719" -> null
    • node_deployment_id = {
      • "2719" = 39655
        } -> null
    • nodes = [
      • 2719,
        ] -> null
    • nodes_ip_range = {
      • "2719" = "10.32.2.0/24"
        } -> null
    • public_node_id = 0 -> null
    • solution_type = "Network" -> null
      }

Plan: 0 to add, 0 to change, 6 to destroy.

Changes to Outputs:

  • ygg_ip_0174 = "300:7f14:f56:3a66:4c51:3632:2002:bb6c" -> null
  • ygg_ip_2519 = "300:a760:a994:f5c0:a801:a74c:1974:8d6e" -> null
  • ygg_ip_2719 = "302:398a:cf61:e7d6:b8a0:e377:37fd:7080" -> null
    grid_deployment._2519: Destroying... [id=39689]
    grid_deployment._0174: Destroying... [id=39684]
    grid_deployment._2719: Destroying... [id=39678]
    grid_deployment._2519: Still destroying... [id=39689, 10s elapsed]
    grid_deployment._0174: Still destroying... [id=39684, 10s elapsed]
    grid_deployment._2519: Still destroying... [id=39689, 20s elapsed]

    │ Error: failed to deploy deployments: failed to delete deployment: failed to cancel contract: error extracting events from block(0x275ba39ff1a72522ea2e4553cd6ac38aef3e440f891074605ba38ad6f555f0b1): failed to decode event: unable to find field Balances_Locked for event #22 with EventID [20 17]; failed to fetch deployment objects to revert deployments: 1 error occurred:
    │ * failed to get deployment 39689 of node 2519: context deadline exceeded

    │ ; try again




    │ Error: failed to delete deployment: failed to cancel contract: error extracting events from block(0x63f7205bd579e08a3a37f54db65abdb4c6d8297ee671efa5279d9cda9c6b54be): failed to decode event: unable to find field Balances_Locked for event #32 with EventID [20 17]




    │ Error: failed to deploy deployments: failed to delete deployment: failed to cancel contract: error extracting events from block(0xb36895264fd92ffa0111cf0418a70bafed9bda173228ea06f87eba9560b05f69): failed to decode event: unable to find field Balances_Locked for event #37 with EventID [20 17]; failed to fetch deployment objects to revert deployments: 1 error occurred:
    │ * failed to get deployment 39684 of node 174: context deadline exceeded

    │ ; try again



    ➜ au terraform output

    │ Warning: No outputs found

    │ The state file either has no outputs defined, or all the defined outputs are empty. Please define an output in your configuration with the output keyword
    │ and run terraform refresh for it to become available. If you are using interpolation, please verify the interpolated value is not empty. You can use the
    terraform console command to assist.

    ➜ au terraform refresh -var-file=/Users/ZZZZZZZZ/terra/env/env.tfvars
    grid_network.netcloudAU_2519: Refreshing state... [id=3adc5a62-296d-4b08-a80d-fe59a2673aec]
    grid_network.netcloudAU_0174: Refreshing state... [id=acd401f6-4727-4e80-be79-0b05d106ea29]
    grid_network.netcloudAU_2719: Refreshing state... [id=c0326f67-ec1f-4cf1-afc6-7ab9e94abec4]
    grid_deployment._2719: Refreshing state... [id=39678]
    grid_deployment._0174: Refreshing state... [id=39684]
    grid_deployment._2519: Refreshing state... [id=39689]

    │ Warning: Error reading data from remote, terraform state might be out of sync with the remote state

    │ with grid_network.netcloudAU_0174,
    │ on main.tf line 30, in resource "grid_network" "netcloudAU_0174":
    │ 30: resource "grid_network" "netcloudAU_0174" {

    │ failed to get deployment objects: 1 error occurred:
    │ * failed to get deployment 39656 of node 174: context deadline exceeded



    │ (and one more similar warning elsewhere)

    ➜ au terraform output

    │ Warning: No outputs found

    │ The state file either has no outputs defined, or all the defined outputs are empty. Please define an output in your configuration with the output keyword
    │ and run terraform refresh for it to become available. If you are using interpolation, please verify the interpolated value is not empty. You can use the
    terraform console command to assist.

@xmonader
Contributor

xmonader commented Oct 19, 2023

related

Our fixes around this will be to provide tools that notify the user, but nothing should change in Terraform from our side.
