
Update dependency prometheus-operator/prometheus-operator to v0.77.2 #216

Merged
merged 1 commit into from
Oct 21, 2024

Conversation

renovate[bot] (Contributor) commented on Oct 21, 2024

This PR contains the following updates:

| Package | Update | Change |
|---|---|---|
| prometheus-operator/prometheus-operator | patch | `v0.77.1` -> `v0.77.2` |

Release Notes

prometheus-operator/prometheus-operator (prometheus-operator/prometheus-operator)

v0.77.2

Compare Source


Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Enabled.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR was generated by Mend Renovate. View the repository job log.

@renovate renovate bot enabled auto-merge (squash) October 21, 2024 19:02

Terraform Initialization success

Terraform Plan success

Pusher: renovate[bot], Action: pull_request

Show Plan
terraform
sakuracloud_ssh_key_gen.gen_key: Refreshing state... [id=113602070325]
data.sakuracloud_archive.ubuntu_archive: Reading...
sakuracloud_switch.k8s_internal_switch: Refreshing state... [id=113602070324]
sakuracloud_internet.k8s_external_switch: Refreshing state... [id=113602070326]
sakuracloud_disk.k8s_rook_disk[7]: Refreshing state... [id=113602070346]
sakuracloud_disk.k8s_rook_disk[3]: Refreshing state... [id=113602070330]
sakuracloud_disk.k8s_rook_disk[6]: Refreshing state... [id=113602070340]
sakuracloud_disk.k8s_rook_disk[0]: Refreshing state... [id=113602070327]
sakuracloud_disk.k8s_rook_disk[1]: Refreshing state... [id=113602070334]
sakuracloud_disk.k8s_rook_disk[4]: Refreshing state... [id=113602070329]
sakuracloud_disk.k8s_rook_disk[2]: Refreshing state... [id=113602070335]
data.sakuracloud_archive.ubuntu_archive: Read complete after 1s [id=113601947038]
sakuracloud_disk.k8s_rook_disk[5]: Refreshing state... [id=113602070343]
sakuracloud_disk.k8s_control_plane_disk[0]: Refreshing state... [id=113602070339]
sakuracloud_disk.k8s_control_plane_disk[2]: Refreshing state... [id=113602070331]
sakuracloud_disk.k8s_control_plane_disk[1]: Refreshing state... [id=113602070333]
sakuracloud_disk.k8s_worker_node_disk[5]: Refreshing state... [id=113602070332]
sakuracloud_disk.k8s_worker_node_disk[3]: Refreshing state... [id=113602070338]
sakuracloud_disk.k8s_worker_node_disk[1]: Refreshing state... [id=113602070342]
sakuracloud_disk.k8s_worker_node_disk[2]: Refreshing state... [id=113602070345]
sakuracloud_disk.k8s_worker_node_disk[6]: Refreshing state... [id=113602070347]
sakuracloud_disk.k8s_worker_node_disk[0]: Refreshing state... [id=113602070341]
sakuracloud_disk.k8s_worker_node_disk[7]: Refreshing state... [id=113602070349]
sakuracloud_disk.k8s_worker_node_disk[4]: Refreshing state... [id=113602070336]
sakuracloud_vpc_router.k8s_internal_router: Refreshing state... [id=113602070350]
sakuracloud_server.k8s_control_plane[1]: Refreshing state... [id=113602070362]
sakuracloud_server.k8s_control_plane[0]: Refreshing state... [id=113602070363]
sakuracloud_server.k8s_control_plane[2]: Refreshing state... [id=113602070364]
sakuracloud_server.k8s_worker_node[0]: Refreshing state... [id=113602070378]
sakuracloud_server.k8s_worker_node[3]: Refreshing state... [id=113602070398]
sakuracloud_server.k8s_worker_node[6]: Refreshing state... [id=113602070377]
sakuracloud_server.k8s_worker_node[1]: Refreshing state... [id=113602070375]
sakuracloud_server.k8s_worker_node[5]: Refreshing state... [id=113602070376]
sakuracloud_server.k8s_worker_node[7]: Refreshing state... [id=113602070379]
sakuracloud_server.k8s_worker_node[4]: Refreshing state... [id=113602070396]
sakuracloud_server.k8s_worker_node[2]: Refreshing state... [id=113602070374]

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create
  ~ update in-place
-/+ destroy and then create replacement

Terraform will perform the following actions:

  # sakuracloud_disk.k8s_control_plane_disk[0] must be replaced
-/+ resource "sakuracloud_disk" "k8s_control_plane_disk" {
      ~ id                   = "113602070339" -> (known after apply)
        name                 = "k8s-dev-control-plane-1"
      ~ server_id            = "113602070363" -> (known after apply)
      ~ source_archive_id    = "113601947141" -> "113601947038" # forces replacement
        tags                 = [
            "dev",
            "k8s",
        ]
      ~ zone                 = "tk1b" -> (known after apply)
        # (7 unchanged attributes hidden)

        # (1 unchanged block hidden)
    }

  # sakuracloud_disk.k8s_control_plane_disk[1] must be replaced
-/+ resource "sakuracloud_disk" "k8s_control_plane_disk" {
      ~ id                   = "113602070333" -> (known after apply)
        name                 = "k8s-dev-control-plane-2"
      ~ server_id            = "113602070362" -> (known after apply)
      ~ source_archive_id    = "113601947141" -> "113601947038" # forces replacement
        tags                 = [
            "dev",
            "k8s",
        ]
      ~ zone                 = "tk1b" -> (known after apply)
        # (7 unchanged attributes hidden)

        # (1 unchanged block hidden)
    }

  # sakuracloud_disk.k8s_control_plane_disk[2] must be replaced
-/+ resource "sakuracloud_disk" "k8s_control_plane_disk" {
      ~ id                   = "113602070331" -> (known after apply)
        name                 = "k8s-dev-control-plane-3"
      ~ server_id            = "113602070364" -> (known after apply)
      ~ source_archive_id    = "113601947141" -> "113601947038" # forces replacement
        tags                 = [
            "dev",
            "k8s",
        ]
      ~ zone                 = "tk1b" -> (known after apply)
        # (7 unchanged attributes hidden)

        # (1 unchanged block hidden)
    }

  # sakuracloud_disk.k8s_router_disk[0] will be created
  + resource "sakuracloud_disk" "k8s_router_disk" {
      + connector            = "virtio"
      + encryption_algorithm = "none"
      + id                   = (known after apply)
      + name                 = "k8s-dev-router-1"
      + plan                 = "ssd"
      + server_id            = (known after apply)
      + size                 = 20
      + source_archive_id    = "113601947038"
      + tags                 = [
          + "dev",
          + "k8s",
        ]
      + zone                 = (known after apply)

      + timeouts {
          + create = "1h"
          + delete = "1h"
        }
    }

  # sakuracloud_disk.k8s_worker_node_disk[0] must be replaced
-/+ resource "sakuracloud_disk" "k8s_worker_node_disk" {
      ~ id                   = "113602070341" -> (known after apply)
        name                 = "k8s-dev-worker-node-1"
      ~ server_id            = "113602070378" -> (known after apply)
      ~ source_archive_id    = "113601947141" -> "113601947038" # forces replacement
        tags                 = [
            "dev",
            "k8s",
        ]
      ~ zone                 = "tk1b" -> (known after apply)
        # (7 unchanged attributes hidden)

        # (1 unchanged block hidden)
    }

  # sakuracloud_disk.k8s_worker_node_disk[1] must be replaced
-/+ resource "sakuracloud_disk" "k8s_worker_node_disk" {
      ~ id                   = "113602070342" -> (known after apply)
        name                 = "k8s-dev-worker-node-2"
      ~ server_id            = "113602070375" -> (known after apply)
      ~ source_archive_id    = "113601947141" -> "113601947038" # forces replacement
        tags                 = [
            "dev",
            "k8s",
        ]
      ~ zone                 = "tk1b" -> (known after apply)
        # (7 unchanged attributes hidden)

        # (1 unchanged block hidden)
    }

  # sakuracloud_disk.k8s_worker_node_disk[2] must be replaced
-/+ resource "sakuracloud_disk" "k8s_worker_node_disk" {
      ~ id                   = "113602070345" -> (known after apply)
        name                 = "k8s-dev-worker-node-3"
      ~ server_id            = "113602070374" -> (known after apply)
      ~ source_archive_id    = "113601947141" -> "113601947038" # forces replacement
        tags                 = [
            "dev",
            "k8s",
        ]
      ~ zone                 = "tk1b" -> (known after apply)
        # (7 unchanged attributes hidden)

        # (1 unchanged block hidden)
    }

  # sakuracloud_disk.k8s_worker_node_disk[3] must be replaced
-/+ resource "sakuracloud_disk" "k8s_worker_node_disk" {
      ~ id                   = "113602070338" -> (known after apply)
        name                 = "k8s-dev-worker-node-4"
      ~ server_id            = "113602070398" -> (known after apply)
      ~ source_archive_id    = "113601947141" -> "113601947038" # forces replacement
        tags                 = [
            "dev",
            "k8s",
        ]
      ~ zone                 = "tk1b" -> (known after apply)
        # (7 unchanged attributes hidden)

        # (1 unchanged block hidden)
    }

  # sakuracloud_disk.k8s_worker_node_disk[4] must be replaced
-/+ resource "sakuracloud_disk" "k8s_worker_node_disk" {
      ~ id                   = "113602070336" -> (known after apply)
        name                 = "k8s-dev-worker-node-5"
      ~ server_id            = "113602070396" -> (known after apply)
      ~ source_archive_id    = "113601947141" -> "113601947038" # forces replacement
        tags                 = [
            "dev",
            "k8s",
        ]
      ~ zone                 = "tk1b" -> (known after apply)
        # (7 unchanged attributes hidden)

        # (1 unchanged block hidden)
    }

  # sakuracloud_disk.k8s_worker_node_disk[5] must be replaced
-/+ resource "sakuracloud_disk" "k8s_worker_node_disk" {
      ~ id                   = "113602070332" -> (known after apply)
        name                 = "k8s-dev-worker-node-6"
      ~ server_id            = "113602070376" -> (known after apply)
      ~ source_archive_id    = "113601947141" -> "113601947038" # forces replacement
        tags                 = [
            "dev",
            "k8s",
        ]
      ~ zone                 = "tk1b" -> (known after apply)
        # (7 unchanged attributes hidden)

        # (1 unchanged block hidden)
    }

  # sakuracloud_disk.k8s_worker_node_disk[6] must be replaced
-/+ resource "sakuracloud_disk" "k8s_worker_node_disk" {
      ~ id                   = "113602070347" -> (known after apply)
        name                 = "k8s-dev-worker-node-7"
      ~ server_id            = "113602070377" -> (known after apply)
      ~ source_archive_id    = "113601947141" -> "113601947038" # forces replacement
        tags                 = [
            "dev",
            "k8s",
        ]
      ~ zone                 = "tk1b" -> (known after apply)
        # (7 unchanged attributes hidden)

        # (1 unchanged block hidden)
    }

  # sakuracloud_disk.k8s_worker_node_disk[7] must be replaced
-/+ resource "sakuracloud_disk" "k8s_worker_node_disk" {
      ~ id                   = "113602070349" -> (known after apply)
        name                 = "k8s-dev-worker-node-8"
      ~ server_id            = "113602070379" -> (known after apply)
      ~ source_archive_id    = "113601947141" -> "113601947038" # forces replacement
        tags                 = [
            "dev",
            "k8s",
        ]
      ~ zone                 = "tk1b" -> (known after apply)
        # (7 unchanged attributes hidden)

        # (1 unchanged block hidden)
    }

  # sakuracloud_server.k8s_control_plane[0] will be updated in-place
  ~ resource "sakuracloud_server" "k8s_control_plane" {
      ~ disks             = [
          - "113602070339",
        ] -> (known after apply)
        id                = "113602070363"
        name              = "k8s-dev-control-plane-1"
        tags              = [
            "dev",
            "k8s",
        ]
        # (18 unchanged attributes hidden)

      ~ disk_edit_parameter {
          ~ ip_address            = "163.43.216.196" -> "163.43.216.197"
          ~ password              = (sensitive value)
            # (9 unchanged attributes hidden)
        }

      ~ network_interface {
          ~ user_ip_address  = "192.168.100.1" -> "192.168.100.20"
            # (3 unchanged attributes hidden)
        }

        # (2 unchanged blocks hidden)
    }

  # sakuracloud_server.k8s_control_plane[1] will be updated in-place
  ~ resource "sakuracloud_server" "k8s_control_plane" {
      ~ disks             = [
          - "113602070333",
        ] -> (known after apply)
        id                = "113602070362"
        name              = "k8s-dev-control-plane-2"
        tags              = [
            "dev",
            "k8s",
        ]
        # (18 unchanged attributes hidden)

      ~ disk_edit_parameter {
          ~ ip_address            = "163.43.216.197" -> "163.43.216.198"
          ~ password              = (sensitive value)
            # (9 unchanged attributes hidden)
        }

      ~ network_interface {
          ~ user_ip_address  = "192.168.100.2" -> "192.168.100.21"
            # (3 unchanged attributes hidden)
        }

        # (2 unchanged blocks hidden)
    }

  # sakuracloud_server.k8s_control_plane[2] will be updated in-place
  ~ resource "sakuracloud_server" "k8s_control_plane" {
      ~ disks             = [
          - "113602070331",
        ] -> (known after apply)
        id                = "113602070364"
        name              = "k8s-dev-control-plane-3"
        tags              = [
            "dev",
            "k8s",
        ]
        # (18 unchanged attributes hidden)

      ~ disk_edit_parameter {
          ~ ip_address            = "163.43.216.198" -> "163.43.216.199"
          ~ password              = (sensitive value)
            # (9 unchanged attributes hidden)
        }

      ~ network_interface {
          ~ user_ip_address  = "192.168.100.3" -> "192.168.100.22"
            # (3 unchanged attributes hidden)
        }

        # (2 unchanged blocks hidden)
    }

  # sakuracloud_server.k8s_router[0] will be created
  + resource "sakuracloud_server" "k8s_router" {
      + commitment        = "standard"
      + core              = 1
      + cpu_model         = (known after apply)
      + disks             = (known after apply)
      + dns_servers       = (known after apply)
      + gateway           = (known after apply)
      + hostname          = (known after apply)
      + id                = (known after apply)
      + interface_driver  = "virtio"
      + ip_address        = (known after apply)
      + memory            = 1
      + name              = "k8s-dev-router-1"
      + netmask           = (known after apply)
      + network_address   = (known after apply)
      + private_host_name = (known after apply)
      + tags              = [
          + "dev",
          + "k8s",
        ]
      + zone              = (known after apply)

      + disk_edit_parameter {
          + disable_pw_auth = true
          + gateway         = "163.43.216.193"
          + hostname        = "k8s-dev-router-1"
          + ip_address      = "163.43.216.196"
          + netmask         = 28
          + password        = (sensitive value)
          + ssh_key_ids     = [
              + "113602070325",
            ]
        }

      + network_interface {
          + mac_address     = (known after apply)
          + upstream        = "113602070328"
          + user_ip_address = (known after apply)
        }
      + network_interface {
          + mac_address     = (known after apply)
          + upstream        = "113602070324"
          + user_ip_address = "192.168.100.10"
        }

      + timeouts {
          + create = "1h"
          + delete = "1h"
        }
    }

  # sakuracloud_server.k8s_worker_node[0] will be updated in-place
  ~ resource "sakuracloud_server" "k8s_worker_node" {
      ~ disks             = [
          - "113602070341",
          - "113602070327",
        ] -> (known after apply)
        id                = "113602070378"
        name              = "k8s-dev-worker-node-1"
        tags              = [
            "dev",
            "k8s",
        ]
        # (18 unchanged attributes hidden)

      ~ disk_edit_parameter {
          ~ ip_address            = "192.168.100.101" -> "192.168.100.30"
          ~ password              = (sensitive value)
            # (9 unchanged attributes hidden)
        }

      ~ network_interface {
          ~ user_ip_address  = "192.168.100.101" -> "192.168.100.30"
            # (3 unchanged attributes hidden)
        }

        # (1 unchanged block hidden)
    }

  # sakuracloud_server.k8s_worker_node[1] will be updated in-place
  ~ resource "sakuracloud_server" "k8s_worker_node" {
      ~ disks             = [
          - "113602070342",
          - "113602070334",
        ] -> (known after apply)
        id                = "113602070375"
        name              = "k8s-dev-worker-node-2"
        tags              = [
            "dev",
            "k8s",
        ]
        # (18 unchanged attributes hidden)

      ~ disk_edit_parameter {
          ~ ip_address            = "192.168.100.102" -> "192.168.100.31"
          ~ password              = (sensitive value)
            # (9 unchanged attributes hidden)
        }

      ~ network_interface {
          ~ user_ip_address  = "192.168.100.102" -> "192.168.100.31"
            # (3 unchanged attributes hidden)
        }

        # (1 unchanged block hidden)
    }

  # sakuracloud_server.k8s_worker_node[2] will be updated in-place
  ~ resource "sakuracloud_server" "k8s_worker_node" {
      ~ disks             = [
          - "113602070345",
          - "113602070335",
        ] -> (known after apply)
        id                = "113602070374"
        name              = "k8s-dev-worker-node-3"
        tags              = [
            "dev",
            "k8s",
        ]
        # (18 unchanged attributes hidden)

      ~ disk_edit_parameter {
          ~ ip_address            = "192.168.100.103" -> "192.168.100.32"
          ~ password              = (sensitive value)
            # (9 unchanged attributes hidden)
        }

      ~ network_interface {
          ~ user_ip_address  = "192.168.100.103" -> "192.168.100.32"
            # (3 unchanged attributes hidden)
        }

        # (1 unchanged block hidden)
    }

  # sakuracloud_server.k8s_worker_node[3] will be updated in-place
  ~ resource "sakuracloud_server" "k8s_worker_node" {
      ~ disks             = [
          - "113602070338",
          - "113602070330",
        ] -> (known after apply)
        id                = "113602070398"
        name              = "k8s-dev-worker-node-4"
        tags              = [
            "dev",
            "k8s",
        ]
        # (18 unchanged attributes hidden)

      ~ disk_edit_parameter {
          ~ ip_address            = "192.168.100.104" -> "192.168.100.33"
          ~ password              = (sensitive value)
            # (9 unchanged attributes hidden)
        }

      ~ network_interface {
          ~ user_ip_address  = "192.168.100.104" -> "192.168.100.33"
            # (3 unchanged attributes hidden)
        }

        # (1 unchanged block hidden)
    }

  # sakuracloud_server.k8s_worker_node[4] will be updated in-place
  ~ resource "sakuracloud_server" "k8s_worker_node" {
      ~ disks             = [
          - "113602070336",
          - "113602070329",
        ] -> (known after apply)
        id                = "113602070396"
        name              = "k8s-dev-worker-node-5"
        tags              = [
            "dev",
            "k8s",
        ]
        # (18 unchanged attributes hidden)

      ~ disk_edit_parameter {
          ~ ip_address            = "192.168.100.105" -> "192.168.100.34"
          ~ password              = (sensitive value)
            # (9 unchanged attributes hidden)
        }

      ~ network_interface {
          ~ user_ip_address  = "192.168.100.105" -> "192.168.100.34"
            # (3 unchanged attributes hidden)
        }

        # (1 unchanged block hidden)
    }

  # sakuracloud_server.k8s_worker_node[5] will be updated in-place
  ~ resource "sakuracloud_server" "k8s_worker_node" {
      ~ disks             = [
          - "113602070332",
          - "113602070343",
        ] -> (known after apply)
        id                = "113602070376"
        name              = "k8s-dev-worker-node-6"
        tags              = [
            "dev",
            "k8s",
        ]
        # (18 unchanged attributes hidden)

      ~ disk_edit_parameter {
          ~ ip_address            = "192.168.100.106" -> "192.168.100.35"
          ~ password              = (sensitive value)
            # (9 unchanged attributes hidden)
        }

      ~ network_interface {
          ~ user_ip_address  = "192.168.100.106" -> "192.168.100.35"
            # (3 unchanged attributes hidden)
        }

        # (1 unchanged block hidden)
    }

  # sakuracloud_server.k8s_worker_node[6] will be updated in-place
  ~ resource "sakuracloud_server" "k8s_worker_node" {
      ~ disks             = [
          - "113602070347",
          - "113602070340",
        ] -> (known after apply)
        id                = "113602070377"
        name              = "k8s-dev-worker-node-7"
        tags              = [
            "dev",
            "k8s",
        ]
        # (18 unchanged attributes hidden)

      ~ disk_edit_parameter {
          ~ ip_address            = "192.168.100.107" -> "192.168.100.36"
          ~ password              = (sensitive value)
            # (9 unchanged attributes hidden)
        }

      ~ network_interface {
          ~ user_ip_address  = "192.168.100.107" -> "192.168.100.36"
            # (3 unchanged attributes hidden)
        }

        # (1 unchanged block hidden)
    }

  # sakuracloud_server.k8s_worker_node[7] will be updated in-place
  ~ resource "sakuracloud_server" "k8s_worker_node" {
      ~ disks             = [
          - "113602070349",
          - "113602070346",
        ] -> (known after apply)
        id                = "113602070379"
        name              = "k8s-dev-worker-node-8"
        tags              = [
            "dev",
            "k8s",
        ]
        # (18 unchanged attributes hidden)

      ~ disk_edit_parameter {
          ~ ip_address            = "192.168.100.108" -> "192.168.100.37"
          ~ password              = (sensitive value)
            # (9 unchanged attributes hidden)
        }

      ~ network_interface {
          ~ user_ip_address  = "192.168.100.108" -> "192.168.100.37"
            # (3 unchanged attributes hidden)
        }

        # (1 unchanged block hidden)
    }

  # sakuracloud_subnet.bgp_subnet will be created
  + resource "sakuracloud_subnet" "bgp_subnet" {
      + id              = (known after apply)
      + internet_id     = "113602070326"
      + ip_addresses    = (known after apply)
      + max_ip_address  = (known after apply)
      + min_ip_address  = (known after apply)
      + netmask         = 28
      + network_address = (known after apply)
      + next_hop        = (known after apply)
      + switch_id       = (known after apply)
      + zone            = (known after apply)

      + timeouts {
          + create = "1h"
          + delete = "1h"
        }
    }

Plan: 14 to add, 11 to change, 11 to destroy.

Changes to Outputs:
  + external_address_range       = (known after apply)
  + k8s_router_ip_address        = [
      + (known after apply),
    ]
  - max_ip_address               = "163.43.216.206" -> null
  - min_ip_address               = "163.43.216.200" -> null
  ~ vip_address                  = "163.43.216.199" -> "163.43.216.200"

─────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't
guarantee to take exactly these actions if you run "terraform apply" now.
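The disk replacements in the plan above are driven by `source_archive_id` changing (`113601947141` -> `113601947038`) after `data.sakuracloud_archive.ubuntu_archive` resolved to a newer image. A minimal sketch of that pattern, assuming the disks reference the data source directly (the `os_type` selector and the `lifecycle` pin are illustrative assumptions, not taken from this repository):

```terraform
# Hypothetical sketch: a data source that re-resolves to the latest
# matching Ubuntu archive each time it is read.
data "sakuracloud_archive" "ubuntu_archive" {
  os_type = "ubuntu" # assumed selector; a newer archive changes the resolved id
}

resource "sakuracloud_disk" "k8s_control_plane_disk" {
  count             = 3
  name              = "k8s-dev-control-plane-${count.index + 1}"
  # When the data source resolves to a new archive id, this attribute
  # changes, and the provider must destroy and recreate the disk.
  source_archive_id = data.sakuracloud_archive.ubuntu_archive.id

  # Optional: pin existing disks to the image they were built from so
  # routine plans do not schedule a destroy-and-recreate.
  lifecycle {
    ignore_changes = [source_archive_id]
  }
}
```

Separately, the note at the end of the plan can be addressed by saving the plan to a file (`terraform plan -out=tfplan`) and applying exactly that file (`terraform apply tfplan`), which guarantees the applied actions match the reviewed plan.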

@renovate renovate bot merged commit bdc01f5 into main Oct 21, 2024
12 checks passed
@renovate renovate bot deleted the renovate/prometheus-operator-prometheus-operator-0.x branch October 21, 2024 19:09