
Terraformのファイルを分割する (Split the Terraform files) #13

Merged: 5 commits merged from fix/issue_#5 into main on Aug 4, 2023
Conversation

@mu-ruU1 (Member) commented Jul 25, 2023

Related

Summary

  • Separate the directories and files for the control plane and worker nodes

Checklist

Notes

  • Please confirm whether this resource split is acceptable.

@mu-ruU1 mu-ruU1 requested a review from logica0419 July 25, 2023 18:11
@mu-ruU1 mu-ruU1 linked an issue Jul 25, 2023 that may be closed by this pull request
@logica0419 (Member) left a comment

Thanks for the quick turnaround! It really helps.
The split itself looks fine, but the way it's divided feels a bit inelegant to me. This may be nitpicking.
Ultimately, the goal is for anyone reading this later to understand the layout at a glance the moment they open the folder, so I'd like to make it as clean as we can now.

If we can go as far as splitting into folders, the layout below is my ideal:
roles are separated by folder, and within each folder the resource types are separated by file.

  • control-plane
    • server.tf
    • disk.tf
  • lb
    • server.tf
    • disk.tf
  • router
    • server.tf
    • disk.tf
  • worker-node
    • server.tf
    • disk.tf
  • network
    • external.tf (external switch and subnet)
    • internal.tf (internal switch)
  • main.tf
  • output.tf (I felt that scattering the outputs across separate folders would actually make them harder to follow)
  • var.tf

This is just my opinion, so with future maintainers in mind, if you think there's a better way, please don't hesitate to say so!
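In Terraform, resources in subdirectories like these are only picked up if the root configuration wires them in as local modules. A minimal sketch of what that wiring could look like (the module names, the `internal_switch_id` output, and the `switch_id` input are illustrative assumptions, not code from this PR):

```hcl
# main.tf (root) — hypothetical wiring for the proposed folder layout.
# Each role folder becomes a local module; shared IDs flow in as inputs.

module "network" {
  source = "./network"
}

module "control_plane" {
  source    = "./control-plane"
  switch_id = module.network.internal_switch_id # assumed output name
}

module "worker_node" {
  source    = "./worker-node"
  switch_id = module.network.internal_switch_id
}
```

Alternatively, the files can stay in a single flat root module and be grouped by file name only, which avoids the module plumbing at the cost of the folder structure.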

terraform/worker-node/main.tf — 3 review threads (outdated, resolved)
@mu-ruU1 (Member, Author) commented Jul 26, 2023

Please take a look.

@logica0419 (Member) commented

The split is perfect! Thank you!!

Once the checks pass, this should be fine to merge.
I think the failing Ansible check is my fault, so I'll sort it out.
As for the Terraform one, I'm not sure why it's failing.

@github-actions commented

Terraform Format and Style success

Terraform Initialization success

Terraform Plan success

Pusher: @mu-ruU1, Action: pull_request

Show Plan
data.sakuracloud_archive.ubuntu-archive: Reading...
data.sakuracloud_archive.ubuntu-archive: Read complete after 1s [id=113402076881]

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # sakuracloud_disk.k8s-lb-disk[0] will be created
  + resource "sakuracloud_disk" "k8s-lb-disk" {
      + connector         = "virtio"
      + id                = (known after apply)
      + name              = "k8s-lb-1-dev"
      + plan              = "ssd"
      + server_id         = (known after apply)
      + size              = 20
      + source_archive_id = "113402076881"
      + tags              = [
          + "dev",
          + "k8s",
        ]
      + zone              = (known after apply)

      + timeouts {
          + create = "1h"
          + delete = "1h"
        }
    }

  # sakuracloud_disk.k8s-lb-disk[1] will be created
  + resource "sakuracloud_disk" "k8s-lb-disk" {
      + connector         = "virtio"
      + id                = (known after apply)
      + name              = "k8s-lb-2-dev"
      + plan              = "ssd"
      + server_id         = (known after apply)
      + size              = 20
      + source_archive_id = "113402076881"
      + tags              = [
          + "dev",
          + "k8s",
        ]
      + zone              = (known after apply)

      + timeouts {
          + create = "1h"
          + delete = "1h"
        }
    }

  # sakuracloud_disk.k8s-master-disk[0] will be created
  + resource "sakuracloud_disk" "k8s-master-disk" {
      + connector         = "virtio"
      + id                = (known after apply)
      + name              = "k8s-master-1-dev"
      + plan              = "ssd"
      + server_id         = (known after apply)
      + size              = 40
      + source_archive_id = "113402076881"
      + tags              = [
          + "dev",
          + "k8s",
        ]
      + zone              = (known after apply)

      + timeouts {
          + create = "1h"
          + delete = "1h"
        }
    }

  # sakuracloud_disk.k8s-master-disk[1] will be created
  + resource "sakuracloud_disk" "k8s-master-disk" {
      + connector         = "virtio"
      + id                = (known after apply)
      + name              = "k8s-master-2-dev"
      + plan              = "ssd"
      + server_id         = (known after apply)
      + size              = 40
      + source_archive_id = "113402076881"
      + tags              = [
          + "dev",
          + "k8s",
        ]
      + zone              = (known after apply)

      + timeouts {
          + create = "1h"
          + delete = "1h"
        }
    }

  # sakuracloud_disk.k8s-master-disk[2] will be created
  + resource "sakuracloud_disk" "k8s-master-disk" {
      + connector         = "virtio"
      + id                = (known after apply)
      + name              = "k8s-master-3-dev"
      + plan              = "ssd"
      + server_id         = (known after apply)
      + size              = 40
      + source_archive_id = "113402076881"
      + tags              = [
          + "dev",
          + "k8s",
        ]
      + zone              = (known after apply)

      + timeouts {
          + create = "1h"
          + delete = "1h"
        }
    }

  # sakuracloud_disk.k8s-node-disk[0] will be created
  + resource "sakuracloud_disk" "k8s-node-disk" {
      + connector         = "virtio"
      + id                = (known after apply)
      + name              = "k8s-node-1-dev"
      + plan              = "ssd"
      + server_id         = (known after apply)
      + size              = 40
      + source_archive_id = "113402076881"
      + tags              = [
          + "dev",
          + "k8s",
        ]
      + zone              = (known after apply)

      + timeouts {
          + create = "1h"
          + delete = "1h"
        }
    }

  # sakuracloud_disk.k8s-node-disk[1] will be created
  + resource "sakuracloud_disk" "k8s-node-disk" {
      + connector         = "virtio"
      + id                = (known after apply)
      + name              = "k8s-node-2-dev"
      + plan              = "ssd"
      + server_id         = (known after apply)
      + size              = 40
      + source_archive_id = "113402076881"
      + tags              = [
          + "dev",
          + "k8s",
        ]
      + zone              = (known after apply)

      + timeouts {
          + create = "1h"
          + delete = "1h"
        }
    }

  # sakuracloud_disk.k8s-node-disk[2] will be created
  + resource "sakuracloud_disk" "k8s-node-disk" {
      + connector         = "virtio"
      + id                = (known after apply)
      + name              = "k8s-node-3-dev"
      + plan              = "ssd"
      + server_id         = (known after apply)
      + size              = 40
      + source_archive_id = "113402076881"
      + tags              = [
          + "dev",
          + "k8s",
        ]
      + zone              = (known after apply)

      + timeouts {
          + create = "1h"
          + delete = "1h"
        }
    }

  # sakuracloud_disk.k8s-rook-disk[0] will be created
  + resource "sakuracloud_disk" "k8s-rook-disk" {
      + connector = "virtio"
      + id        = (known after apply)
      + name      = "k8s-rook-1-dev"
      + plan      = "ssd"
      + server_id = (known after apply)
      + size      = 40
      + tags      = [
          + "dev",
          + "k8s",
        ]
      + zone      = (known after apply)

      + timeouts {
          + create = "1h"
          + delete = "1h"
        }
    }

  # sakuracloud_disk.k8s-rook-disk[1] will be created
  + resource "sakuracloud_disk" "k8s-rook-disk" {
      + connector = "virtio"
      + id        = (known after apply)
      + name      = "k8s-rook-2-dev"
      + plan      = "ssd"
      + server_id = (known after apply)
      + size      = 40
      + tags      = [
          + "dev",
          + "k8s",
        ]
      + zone      = (known after apply)

      + timeouts {
          + create = "1h"
          + delete = "1h"
        }
    }

  # sakuracloud_disk.k8s-rook-disk[2] will be created
  + resource "sakuracloud_disk" "k8s-rook-disk" {
      + connector = "virtio"
      + id        = (known after apply)
      + name      = "k8s-rook-3-dev"
      + plan      = "ssd"
      + server_id = (known after apply)
      + size      = 40
      + tags      = [
          + "dev",
          + "k8s",
        ]
      + zone      = (known after apply)

      + timeouts {
          + create = "1h"
          + delete = "1h"
        }
    }

  # sakuracloud_disk.k8s-router-disk[0] will be created
  + resource "sakuracloud_disk" "k8s-router-disk" {
      + connector         = "virtio"
      + id                = (known after apply)
      + name              = "k8s-router-1-dev"
      + plan              = "ssd"
      + server_id         = (known after apply)
      + size              = 20
      + source_archive_id = "113500057368"
      + tags              = [
          + "dev",
          + "k8s",
        ]
      + zone              = (known after apply)
    }

  # sakuracloud_internet.k8s-external-switch will be created
  + resource "sakuracloud_internet" "k8s-external-switch" {
      + band_width           = 100
      + gateway              = (known after apply)
      + id                   = (known after apply)
      + ip_addresses         = (known after apply)
      + ipv6_network_address = (known after apply)
      + ipv6_prefix          = (known after apply)
      + ipv6_prefix_len      = (known after apply)
      + max_ip_address       = (known after apply)
      + min_ip_address       = (known after apply)
      + name                 = "k8s-external-switch"
      + netmask              = 28
      + network_address      = (known after apply)
      + server_ids           = (known after apply)
      + switch_id            = (known after apply)
      + zone                 = (known after apply)

      + timeouts {
          + create = "1h"
          + delete = "1h"
        }
    }

  # sakuracloud_server.k8s-lb-server[0] will be created
  + resource "sakuracloud_server" "k8s-lb-server" {
      + commitment        = "standard"
      + core              = 2
      + disks             = (known after apply)
      + dns_servers       = (known after apply)
      + gateway           = (known after apply)
      + hostname          = (known after apply)
      + id                = (known after apply)
      + interface_driver  = "virtio"
      + ip_address        = (known after apply)
      + memory            = 2
      + name              = "k8s-lb-1-server-dev"
      + netmask           = (known after apply)
      + network_address   = (known after apply)
      + private_host_name = (known after apply)
      + tags              = [
          + "@nic-double-queue",
          + "dev",
          + "k8s",
        ]
      + zone              = (known after apply)

      + disk_edit_parameter {
          + disable_pw_auth = true
          + gateway         = (known after apply)
          + hostname        = "k8s-lb-1-server-dev"
          + ip_address      = (known after apply)
          + netmask         = 28
          + password        = (sensitive value)
          + ssh_key_ids     = (known after apply)
        }

      + network_interface {
          + mac_address     = (known after apply)
          + upstream        = (known after apply)
          + user_ip_address = (known after apply)
        }
      + network_interface {
          + mac_address     = (known after apply)
          + upstream        = (known after apply)
          + user_ip_address = "192.168.100.30"
        }

      + timeouts {
          + create = "1h"
          + delete = "1h"
        }
    }

  # sakuracloud_server.k8s-lb-server[1] will be created
  + resource "sakuracloud_server" "k8s-lb-server" {
      + commitment        = "standard"
      + core              = 2
      + disks             = (known after apply)
      + dns_servers       = (known after apply)
      + gateway           = (known after apply)
      + hostname          = (known after apply)
      + id                = (known after apply)
      + interface_driver  = "virtio"
      + ip_address        = (known after apply)
      + memory            = 2
      + name              = "k8s-lb-2-server-dev"
      + netmask           = (known after apply)
      + network_address   = (known after apply)
      + private_host_name = (known after apply)
      + tags              = [
          + "@nic-double-queue",
          + "dev",
          + "k8s",
        ]
      + zone              = (known after apply)

      + disk_edit_parameter {
          + disable_pw_auth = true
          + gateway         = (known after apply)
          + hostname        = "k8s-lb-2-server-dev"
          + ip_address      = (known after apply)
          + netmask         = 28
          + password        = (sensitive value)
          + ssh_key_ids     = (known after apply)
        }

      + network_interface {
          + mac_address     = (known after apply)
          + upstream        = (known after apply)
          + user_ip_address = (known after apply)
        }
      + network_interface {
          + mac_address     = (known after apply)
          + upstream        = (known after apply)
          + user_ip_address = "192.168.100.31"
        }

      + timeouts {
          + create = "1h"
          + delete = "1h"
        }
    }

  # sakuracloud_server.k8s-master-server[0] will be created
  + resource "sakuracloud_server" "k8s-master-server" {
      + commitment        = "standard"
      + core              = 4
      + disks             = (known after apply)
      + dns_servers       = (known after apply)
      + gateway           = (known after apply)
      + hostname          = (known after apply)
      + id                = (known after apply)
      + interface_driver  = "virtio"
      + ip_address        = (known after apply)
      + memory            = 8
      + name              = "k8s-master-1-server-dev"
      + netmask           = (known after apply)
      + network_address   = (known after apply)
      + private_host_name = (known after apply)
      + tags              = [
          + "@nic-double-queue",
          + "dev",
          + "k8s",
        ]
      + zone              = (known after apply)

      + disk_edit_parameter {
          + disable_pw_auth = true
          + gateway         = (known after apply)
          + hostname        = "k8s-master-1-server-dev"
          + ip_address      = (known after apply)
          + netmask         = 28
          + password        = (sensitive value)
          + ssh_key_ids     = (known after apply)
        }

      + network_interface {
          + mac_address     = (known after apply)
          + upstream        = (known after apply)
          + user_ip_address = (known after apply)
        }
      + network_interface {
          + mac_address     = (known after apply)
          + upstream        = (known after apply)
          + user_ip_address = "192.168.100.10"
        }

      + timeouts {
          + create = "1h"
          + delete = "1h"
        }
    }

  # sakuracloud_server.k8s-master-server[1] will be created
  + resource "sakuracloud_server" "k8s-master-server" {
      + commitment        = "standard"
      + core              = 4
      + disks             = (known after apply)
      + dns_servers       = (known after apply)
      + gateway           = (known after apply)
      + hostname          = (known after apply)
      + id                = (known after apply)
      + interface_driver  = "virtio"
      + ip_address        = (known after apply)
      + memory            = 8
      + name              = "k8s-master-2-server-dev"
      + netmask           = (known after apply)
      + network_address   = (known after apply)
      + private_host_name = (known after apply)
      + tags              = [
          + "@nic-double-queue",
          + "dev",
          + "k8s",
        ]
      + zone              = (known after apply)

      + disk_edit_parameter {
          + disable_pw_auth = true
          + gateway         = (known after apply)
          + hostname        = "k8s-master-2-server-dev"
          + ip_address      = (known after apply)
          + netmask         = 28
          + password        = (sensitive value)
          + ssh_key_ids     = (known after apply)
        }

      + network_interface {
          + mac_address     = (known after apply)
          + upstream        = (known after apply)
          + user_ip_address = (known after apply)
        }
      + network_interface {
          + mac_address     = (known after apply)
          + upstream        = (known after apply)
          + user_ip_address = "192.168.100.11"
        }

      + timeouts {
          + create = "1h"
          + delete = "1h"
        }
    }

  # sakuracloud_server.k8s-master-server[2] will be created
  + resource "sakuracloud_server" "k8s-master-server" {
      + commitment        = "standard"
      + core              = 4
      + disks             = (known after apply)
      + dns_servers       = (known after apply)
      + gateway           = (known after apply)
      + hostname          = (known after apply)
      + id                = (known after apply)
      + interface_driver  = "virtio"
      + ip_address        = (known after apply)
      + memory            = 8
      + name              = "k8s-master-3-server-dev"
      + netmask           = (known after apply)
      + network_address   = (known after apply)
      + private_host_name = (known after apply)
      + tags              = [
          + "@nic-double-queue",
          + "dev",
          + "k8s",
        ]
      + zone              = (known after apply)

      + disk_edit_parameter {
          + disable_pw_auth = true
          + gateway         = (known after apply)
          + hostname        = "k8s-master-3-server-dev"
          + ip_address      = (known after apply)
          + netmask         = 28
          + password        = (sensitive value)
          + ssh_key_ids     = (known after apply)
        }

      + network_interface {
          + mac_address     = (known after apply)
          + upstream        = (known after apply)
          + user_ip_address = (known after apply)
        }
      + network_interface {
          + mac_address     = (known after apply)
          + upstream        = (known after apply)
          + user_ip_address = "192.168.100.12"
        }

      + timeouts {
          + create = "1h"
          + delete = "1h"
        }
    }

  # sakuracloud_server.k8s-node-server[0] will be created
  + resource "sakuracloud_server" "k8s-node-server" {
      + commitment        = "standard"
      + core              = 4
      + disks             = (known after apply)
      + dns_servers       = (known after apply)
      + gateway           = (known after apply)
      + hostname          = (known after apply)
      + id                = (known after apply)
      + interface_driver  = "virtio"
      + ip_address        = (known after apply)
      + memory            = 8
      + name              = "k8s-node-1-server-dev"
      + netmask           = (known after apply)
      + network_address   = (known after apply)
      + private_host_name = (known after apply)
      + tags              = [
          + "@nic-double-queue",
          + "dev",
          + "k8s",
        ]
      + zone              = (known after apply)

      + disk_edit_parameter {
          + disable_pw_auth = true
          + gateway         = (known after apply)
          + hostname        = "k8s-node-1-server-dev"
          + ip_address      = (known after apply)
          + netmask         = 28
          + password        = (sensitive value)
          + ssh_key_ids     = (known after apply)
        }

      + network_interface {
          + mac_address     = (known after apply)
          + upstream        = (known after apply)
          + user_ip_address = (known after apply)
        }
      + network_interface {
          + mac_address     = (known after apply)
          + upstream        = (known after apply)
          + user_ip_address = "192.168.100.20"
        }

      + timeouts {
          + create = "1h"
          + delete = "1h"
        }
    }

  # sakuracloud_server.k8s-node-server[1] will be created
  + resource "sakuracloud_server" "k8s-node-server" {
      + commitment        = "standard"
      + core              = 4
      + disks             = (known after apply)
      + dns_servers       = (known after apply)
      + gateway           = (known after apply)
      + hostname          = (known after apply)
      + id                = (known after apply)
      + interface_driver  = "virtio"
      + ip_address        = (known after apply)
      + memory            = 8
      + name              = "k8s-node-2-server-dev"
      + netmask           = (known after apply)
      + network_address   = (known after apply)
      + private_host_name = (known after apply)
      + tags              = [
          + "@nic-double-queue",
          + "dev",
          + "k8s",
        ]
      + zone              = (known after apply)

      + disk_edit_parameter {
          + disable_pw_auth = true
          + gateway         = (known after apply)
          + hostname        = "k8s-node-2-server-dev"
          + ip_address      = (known after apply)
          + netmask         = 28
          + password        = (sensitive value)
          + ssh_key_ids     = (known after apply)
        }

      + network_interface {
          + mac_address     = (known after apply)
          + upstream        = (known after apply)
          + user_ip_address = (known after apply)
        }
      + network_interface {
          + mac_address     = (known after apply)
          + upstream        = (known after apply)
          + user_ip_address = "192.168.100.21"
        }

      + timeouts {
          + create = "1h"
          + delete = "1h"
        }
    }

  # sakuracloud_server.k8s-node-server[2] will be created
  + resource "sakuracloud_server" "k8s-node-server" {
      + commitment        = "standard"
      + core              = 4
      + disks             = (known after apply)
      + dns_servers       = (known after apply)
      + gateway           = (known after apply)
      + hostname          = (known after apply)
      + id                = (known after apply)
      + interface_driver  = "virtio"
      + ip_address        = (known after apply)
      + memory            = 8
      + name              = "k8s-node-3-server-dev"
      + netmask           = (known after apply)
      + network_address   = (known after apply)
      + private_host_name = (known after apply)
      + tags              = [
          + "@nic-double-queue",
          + "dev",
          + "k8s",
        ]
      + zone              = (known after apply)

      + disk_edit_parameter {
          + disable_pw_auth = true
          + gateway         = (known after apply)
          + hostname        = "k8s-node-3-server-dev"
          + ip_address      = (known after apply)
          + netmask         = 28
          + password        = (sensitive value)
          + ssh_key_ids     = (known after apply)
        }

      + network_interface {
          + mac_address     = (known after apply)
          + upstream        = (known after apply)
          + user_ip_address = (known after apply)
        }
      + network_interface {
          + mac_address     = (known after apply)
          + upstream        = (known after apply)
          + user_ip_address = "192.168.100.22"
        }

      + timeouts {
          + create = "1h"
          + delete = "1h"
        }
    }

  # sakuracloud_server.k8s-router[0] will be created
  + resource "sakuracloud_server" "k8s-router" {
      + commitment        = "standard"
      + core              = 2
      + disks             = (known after apply)
      + dns_servers       = (known after apply)
      + gateway           = (known after apply)
      + hostname          = (known after apply)
      + id                = (known after apply)
      + interface_driver  = "virtio"
      + ip_address        = (known after apply)
      + memory            = 2
      + name              = "k8s-router-1-dev"
      + netmask           = (known after apply)
      + network_address   = (known after apply)
      + private_host_name = (known after apply)
      + tags              = [
          + "@nic-double-queue",
          + "dev",
          + "k8s",
        ]
      + zone              = (known after apply)

      + disk_edit_parameter {
          + disable_pw_auth = false
          + gateway         = (known after apply)
          + hostname        = "k8s-router-1-dev"
          + ip_address      = (known after apply)
          + netmask         = 28
          + password        = (sensitive value)
        }

      + network_interface {
          + mac_address     = (known after apply)
          + upstream        = (known after apply)
          + user_ip_address = (known after apply)
        }

      + timeouts {
          + create = "1h"
          + delete = "1h"
        }
    }

  # sakuracloud_ssh_key_gen.gen_key will be created
  + resource "sakuracloud_ssh_key_gen" "gen_key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "k8s_pub_key"
      + private_key = (known after apply)
      + public_key  = (known after apply)
    }

  # sakuracloud_subnet.bgp-subnet will be created
  + resource "sakuracloud_subnet" "bgp-subnet" {
      + id              = (known after apply)
      + internet_id     = (known after apply)
      + ip_addresses    = (known after apply)
      + max_ip_address  = (known after apply)
      + min_ip_address  = (known after apply)
      + netmask         = 28
      + network_address = (known after apply)
      + next_hop        = (known after apply)
      + switch_id       = (known after apply)
      + zone            = (known after apply)

      + timeouts {
          + create = "1h"
          + delete = "1h"
        }
    }

  # sakuracloud_switch.k8s-internal-switch will be created
  + resource "sakuracloud_switch" "k8s-internal-switch" {
      + id         = (known after apply)
      + name       = "k8s-internal-switch"
      + server_ids = (known after apply)
      + zone       = (known after apply)

      + timeouts {
          + create = "1h"
          + delete = "1h"
        }
    }

Plan: 25 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + external_address_range       = (known after apply)
  + k8s_lb_server_ip_address     = [
      + (known after apply),
      + (known after apply),
    ]
  + k8s_master_server_ip_address = [
      + (known after apply),
      + (known after apply),
      + (known after apply),
    ]
  + k8s_node_server_ip_address   = [
      + (known after apply),
      + (known after apply),
      + (known after apply),
    ]
  + k8s_router_ip_address        = [
      + (known after apply),
    ]
  + vip_address                  = (known after apply)

─────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't
guarantee to take exactly these actions if you run "terraform apply" now.
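For reference, the indexed entries such as `sakuracloud_disk.k8s-lb-disk[0]` and `[1]` above come from a counted resource. Reconstructed from the plan output, the disk definition would look roughly like this (a sketch, not the exact code in this PR):

```hcl
# Hypothetical shape of the LB disk resource behind the plan entries above.
# count = 2 yields the [0]/[1] instances; count.index builds the names.
resource "sakuracloud_disk" "k8s-lb-disk" {
  count             = 2
  name              = "k8s-lb-${count.index + 1}-dev"
  plan              = "ssd"
  size              = 20
  connector         = "virtio"
  source_archive_id = data.sakuracloud_archive.ubuntu-archive.id
  tags              = ["dev", "k8s"]

  timeouts {
    create = "1h"
    delete = "1h"
  }
}
```

The master, node, and rook disks in the plan follow the same pattern with `count = 3` and different sizes.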

@mu-ruU1 (Member, Author) commented Jul 27, 2023

I named the files in the form prefix-resource type.

@mu-ruU1 mu-ruU1 requested a review from logica0419 July 29, 2023 11:06
@logica0419 (Member) left a comment

Sorry for the slow review…
Perfect! Thank you!
The question is whether this or the resource renaming goes in first (one of the two will have to be almost entirely redone).
What do you think, Yuu?

@mu-ruU1 (Member, Author) commented Aug 1, 2023

Sorry to bother you when you're busy.

Regarding #17, is addressing the review feedback all that's left?
If so, I think it would be better to merge this pull request first and then have meronpannn rename the Terraform resources.

Addendum:
Would it be better to avoid renaming resources only on the Terraform side, since that would get confusing?

@logica0419 (Member) commented

Sorry for the late reply…

I'd definitely rather not change only the Terraform side.
I don't want the two to be inconsistent, even for a moment…

@logica0419 (Member) left a comment

Since Asahi hasn't responded in a while, please go ahead and merge yours, Yuu.
Thanks again! Looking forward to keeping this going.

@mu-ruU1 mu-ruU1 merged commit b08e97d into main Aug 4, 2023
5 checks passed
@mu-ruU1 mu-ruU1 deleted the fix/issue_#5 branch August 4, 2023 04:41
@mu-ruU1 mu-ruU1 restored the fix/issue_#5 branch August 4, 2023 04:44
@mu-ruU1 mu-ruU1 deleted the fix/issue_#5 branch August 4, 2023 04:44
Successfully merging this pull request may close these issues.

Terraformのファイルを分割する (Split the Terraform files)
2 participants