
[Bug]: volumes and tags - not as expected #34816

Open
marafa-sugarcrm opened this issue Dec 8, 2023 · 3 comments
Labels
  • bug: Addresses a defect in current functionality.
  • service/ebs: Issues and PRs that pertain to the ebs service.
  • service/ec2: Issues and PRs that pertain to the ec2 service.
  • tags: Pertains to resource tagging.

Comments


marafa-sugarcrm commented Dec 8, 2023

Terraform Core Version

v1.5.7

AWS Provider Version

v5.29.0

Affected Resource(s)

  • aws_ebs_volume
  • aws_instance

Expected Behavior

just works ™️

  • tagging volumes should be straightforward

Actual Behavior

A myriad of unexpected behaviours, including:

  • volumes not getting tagged
  • apply needing to be run twice
  • unexpected tags being applied
  • needing to use aws_instance.volume_tags OR aws_instance.root_block_device.tags to trigger tagging of aws_ebs_volume resources (see the sketch below)
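
For context, a minimal sketch (with placeholder AMI, AZ, and size values, not the reporter's actual configuration) of the pattern described in the last bullet: root-volume tags come from root_block_device.tags, tags on a separately managed volume come from the aws_ebs_volume resource itself, and volume_tags is left unset so the two mechanisms do not manage the same volumes.

```hcl
# Minimal sketch (placeholder values): tag the root volume via
# root_block_device.tags and each attached volume via its own aws_ebs_volume
# tags, leaving aws_instance.volume_tags unset so the two mechanisms do not
# manage the same volumes.
resource "aws_instance" "example" {
  ami           = "ami-0123456789abcdef0" # placeholder
  instance_type = "t3.micro"

  root_block_device {
    encrypted = true
    tags = {
      Name = "example-root"
      app  = "testing"
    }
  }
}

resource "aws_ebs_volume" "example_data" {
  availability_zone = "us-east-1a" # placeholder
  size              = 20
  encrypted         = true

  tags = {
    Name = "example-data"
    app  = "testing"
  }
}

resource "aws_volume_attachment" "example_data" {
  device_name = "/dev/sdh"
  volume_id   = aws_ebs_volume.example_data.id
  instance_id = aws_instance.example.id
}
```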

Relevant Error/Panic Output Snippet

No response

Terraform Configuration Files

resource "aws_iam_instance_profile" "cluster-cluster" {
  name = "cluster-${var.region_env}"
  role = aws_iam_role.cluster-cluster.name
}

resource "aws_iam_role" "cluster-cluster" {
  name = "cluster-${var.region_env}"

  assume_role_policy = <<EOF
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": "sts:AssumeRole",
            "Principal": {
               "Service": "ec2.amazonaws.com"
            },
            "Effect": "Allow",
            "Sid": ""
        }
    ]
}
EOF

  tags = {
    app = "testing"
  }
}

resource "aws_iam_role_policy" "cluster-cluster" {
  name = "cluster-${var.region_env}"
  role = aws_iam_role.cluster-cluster.id

  policy = <<POLICY
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Statement",
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetObject",
                "s3:DeleteObject",
                "s3:AbortMultipartUpload",
                "s3:PutObjectTagging",
                "s3:ListMultipartUploadParts",
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::project-cluster-backup-${var.k8s_cluster_name}/*",
                "arn:aws:s3:::project-cluster-backup-${var.k8s_cluster_name}",
                "arn:aws:s3:::project-worker-data-load-temp-${var.k8s_cluster_name}/*",
                "arn:aws:s3:::project-worker-data-load-temp-${var.k8s_cluster_name}"
            ]
        }
    ]
}
POLICY

}

resource "aws_instance" "cluster-mdw" {
  ami           = var.gp_master_ami_id
  instance_type = var.gp_mdw_instance_type
  subnet_id     = element(module.services_vpc.private_subnets, 0)
  root_block_device {
    encrypted = true
    tags = {
      Name = "cluster-mdw"
      app  = "testing"
    }
  }
  ebs_optimized          = true
  vpc_security_group_ids = [aws_security_group.cluster-master-sg.id]
  iam_instance_profile   = aws_iam_instance_profile.cluster-cluster.name

  tags = {
    Name = "cluster-mdw-${var.region_env}"
    app  = "testing"
  }
  # volume_tags = {
  #   Name = "cluster-mdw-${var.region_env}"
  #   app  = "testing"
  # }
}

resource "aws_instance" "cluster-smdw" {
  ami           = var.gp_master_ami_id
  instance_type = var.gp_smdw_instance_type
  subnet_id     = element(module.services_vpc.private_subnets, 0)
  root_block_device {
    encrypted = true
    tags = {
      Name = "cluster-smdw"
      app  = "testing"
    }
  }
  ebs_optimized          = true
  vpc_security_group_ids = [aws_security_group.cluster-master-sg.id]
  iam_instance_profile   = aws_iam_instance_profile.cluster-cluster.name

  tags = {
    Name = "cluster-smdw-${var.region_env}"
    app  = "testing"
  }

  # volume_tags = {
  #   Name = "cluster-smdw-${var.region_env}"
  #   app  = "testing"
  # }
}

resource "aws_instance" "cluster-sdw1" {
  ami           = var.gp_segment_ami_id
  instance_type = var.gp_segment_instance_type
  subnet_id     = element(module.services_vpc.private_subnets, 0)
  root_block_device {
    encrypted = true
    tags = {
      Name = "cluster-sdw1"
      app  = "testing"
    }
  }
  ebs_optimized          = true
  vpc_security_group_ids = [aws_security_group.cluster-worker-sg.id]
  iam_instance_profile   = aws_iam_instance_profile.cluster-cluster.name

  tags = {
    Name = "cluster-sdw1-${var.region_env}"
    app  = "testing"
  }
  # volume_tags = {
  #   Name = "cluster-sdw1-${var.region_env}"
  #   app  = "testing"
  # }
}

resource "aws_instance" "cluster-sdw2" {
  ami           = var.gp_segment_ami_id
  instance_type = var.gp_segment_instance_type
  subnet_id     = element(module.services_vpc.private_subnets, 0)
  root_block_device {
    encrypted = true
    tags = {
      Name = "cluster-sdw2"
      app  = "testing"
    }
  }
  ebs_optimized          = true
  vpc_security_group_ids = [aws_security_group.cluster-worker-sg.id]
  iam_instance_profile   = aws_iam_instance_profile.cluster-cluster.name

  tags = {
    Name = "cluster-sdw2-${var.region_env}"
    app  = "testing"
  }
  # volume_tags = {
  #   Name = "cluster-sdw2-${var.region_env}"
  #   app  = "testing"
  # }
}

resource "aws_ebs_volume" "cluster-mdw-h" {
  availability_zone = "${var.region}a"
  size              = var.gp_master_ebs_size
  encrypted         = true

  tags = {
    Name = "cluster-mdw-h"
    app  = "testing"
  }
}

resource "aws_volume_attachment" "ebs-mdw-h" {
  device_name = "/dev/sdh"
  volume_id   = aws_ebs_volume.cluster-mdw-h.id
  instance_id = aws_instance.cluster-mdw.id
}

resource "aws_ebs_volume" "cluster-smdw-h" {
  availability_zone = "${var.region}a"
  size              = var.gp_master_ebs_size
  encrypted         = true

  tags = {
    Name = "cluster-smdw-h"
    app  = "testing"
  }
}

resource "aws_volume_attachment" "ebs-smdw-h" {
  device_name = "/dev/sdh"
  volume_id   = aws_ebs_volume.cluster-smdw-h.id
  instance_id = aws_instance.cluster-smdw.id
}

resource "aws_ebs_volume" "cluster-sdw1-h" {
  availability_zone = "${var.region}a"
  size              = var.gp_segment_ebs_size
  encrypted         = true

  tags = {
    Name = "cluster-sdw1-h"
    app  = "testing"

  }
}

resource "aws_ebs_volume" "cluster-sdw1-i" {
  availability_zone = "${var.region}a"
  size              = var.gp_segment_ebs_size
  encrypted         = true

  tags = {
    Name = "cluster-sdw1-i"
    app  = "testing"
  }
}

resource "aws_ebs_volume" "cluster-sdw1-j" {
  availability_zone = "${var.region}a"
  size              = var.gp_segment_ebs_size
  encrypted         = true

  tags = {
    Name = "cluster-sdw1-j"
    app  = "testing"
  }
}

resource "aws_ebs_volume" "cluster-sdw1-k" {
  availability_zone = "${var.region}a"
  size              = var.gp_segment_ebs_size
  encrypted         = true

  tags = {
    Name = "cluster-sdw1-k"
    app  = "testing"
  }
}

resource "aws_volume_attachment" "ebs-sdw1-h" {
  device_name = "/dev/sdh"
  volume_id   = aws_ebs_volume.cluster-sdw1-h.id
  instance_id = aws_instance.cluster-sdw1.id
}

resource "aws_volume_attachment" "ebs-sdw1-i" {
  device_name = "/dev/sdi"
  volume_id   = aws_ebs_volume.cluster-sdw1-i.id
  instance_id = aws_instance.cluster-sdw1.id
}

resource "aws_volume_attachment" "ebs-sdw1-j" {
  device_name = "/dev/sdj"
  volume_id   = aws_ebs_volume.cluster-sdw1-j.id
  instance_id = aws_instance.cluster-sdw1.id
}

resource "aws_volume_attachment" "ebs-sdw1-k" {
  device_name = "/dev/sdk"
  volume_id   = aws_ebs_volume.cluster-sdw1-k.id
  instance_id = aws_instance.cluster-sdw1.id
}

resource "aws_ebs_volume" "cluster-sdw2-h" {
  availability_zone = "${var.region}a"
  size              = var.gp_segment_ebs_size
  encrypted         = true

  tags = {
    Name = "cluster-sdw2-h"
    app  = "testing"
  }
}

resource "aws_ebs_volume" "cluster-sdw2-i" {
  availability_zone = "${var.region}a"
  size              = var.gp_segment_ebs_size
  encrypted         = true

  tags = {
    Name = "cluster-sdw2-i"
    app  = "testing"
  }
}

resource "aws_ebs_volume" "cluster-sdw2-j" {
  availability_zone = "${var.region}a"
  size              = var.gp_segment_ebs_size
  encrypted         = true

  tags = {
    Name = "cluster-sdw2-j"
    app  = "testing"
  }
}

resource "aws_ebs_volume" "cluster-sdw2-k" {
  availability_zone = "${var.region}a"
  size              = var.gp_segment_ebs_size
  encrypted         = true

  tags = {
    Name = "cluster-sdw2-k"
    app  = "testing"
  }
}

resource "aws_volume_attachment" "ebs-sdw2-h" {
  device_name = "/dev/sdh"
  volume_id   = aws_ebs_volume.cluster-sdw2-h.id
  instance_id = aws_instance.cluster-sdw2.id
}

resource "aws_volume_attachment" "ebs-sdw2-i" {
  device_name = "/dev/sdi"
  volume_id   = aws_ebs_volume.cluster-sdw2-i.id
  instance_id = aws_instance.cluster-sdw2.id
}

resource "aws_volume_attachment" "ebs-sdw2-j" {
  device_name = "/dev/sdj"
  volume_id   = aws_ebs_volume.cluster-sdw2-j.id
  instance_id = aws_instance.cluster-sdw2.id
}

resource "aws_volume_attachment" "ebs-sdw2-k" {
  device_name = "/dev/sdk"
  volume_id   = aws_ebs_volume.cluster-sdw2-k.id
  instance_id = aws_instance.cluster-sdw2.id
}

resource "aws_security_group" "cluster-master-sg" {
  name        = "cluster-master-sg-${var.region_env}"
  description = "cluster masters"
  vpc_id      = module.services_vpc.vpc_id

  tags = {
    Name = "cluster-master-sg"
    app  = "testing"
  }
}

resource "aws_security_group" "cluster-worker-sg" {
  name        = "cluster-workers-sg-${var.region_env}"
  description = "cluster workers"
  vpc_id      = module.services_vpc.vpc_id

  tags = {
    Name = "cluster-worker-sg"
    app  = "testing"
  }
}

resource "aws_security_group_rule" "master-egress" {
  type              = "egress"
  to_port           = 0
  from_port         = 0
  protocol          = "-1"
  cidr_blocks       = ["0.0.0.0/0"]
  security_group_id = aws_security_group.cluster-master-sg.id
}

resource "aws_security_group_rule" "worker-egress" {
  type              = "egress"
  to_port           = 0
  from_port         = 0
  protocol          = "-1"
  cidr_blocks       = ["0.0.0.0/0"]
  security_group_id = aws_security_group.cluster-worker-sg.id
}

resource "aws_security_group_rule" "k8s-to-worker" {
  type              = "ingress"
  from_port         = 0
  to_port           = 65535
  protocol          = "tcp"
  cidr_blocks       = [var.k8s_cluster_cidr]
  security_group_id = aws_security_group.cluster-worker-sg.id
}

resource "aws_security_group_rule" "k8s-to-master" {
  type              = "ingress"
  from_port         = 0
  to_port           = 65535
  protocol          = "tcp"
  cidr_blocks       = [var.k8s_cluster_cidr]
  security_group_id = aws_security_group.cluster-master-sg.id
}

resource "aws_security_group_rule" "master-to-worker" {
  type                     = "ingress"
  from_port                = 0
  to_port                  = 65535
  protocol                 = "-1"
  source_security_group_id = aws_security_group.cluster-master-sg.id
  security_group_id        = aws_security_group.cluster-worker-sg.id
}

resource "aws_security_group_rule" "worker-to-master" {
  type                     = "ingress"
  from_port                = 0
  to_port                  = 65535
  protocol                 = "-1"
  source_security_group_id = aws_security_group.cluster-worker-sg.id
  security_group_id        = aws_security_group.cluster-master-sg.id
}

resource "aws_security_group_rule" "master-to-master" {
  type              = "ingress"
  from_port         = 0
  to_port           = 65535
  protocol          = "-1"
  security_group_id = aws_security_group.cluster-master-sg.id
  self              = true
}

resource "aws_security_group_rule" "worker-to-worker" {
  type              = "ingress"
  from_port         = 0
  to_port           = 65535
  protocol          = "-1"
  security_group_id = aws_security_group.cluster-worker-sg.id
  self              = true
}

output "cluster-mdw" {
  value = aws_instance.cluster-mdw.private_ip
}

output "cluster-smdw" {
  value = aws_instance.cluster-smdw.private_ip
}

output "cluster-sdw1" {
  value = aws_instance.cluster-sdw1.private_ip
}

output "cluster-sdw2" {
  value = aws_instance.cluster-sdw2.private_ip
}


### Steps to Reproduce

I have a 4-node cluster:
- primary (mdw)
- secondary (smdw)
- 2 worker nodes (sdw*)

All nodes have a root volume:
- the primary and secondary each have one additional volume
- the worker nodes have 5 additional volumes

There were initially no tags on the volumes, so I started by adding tags to the worker nodes' volumes; the plan said there was nothing to change. (See this [note](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/instance#volume_tags) on why I didn't start with volume_tags.)
I then added volume_tags to the aws_instance resources. That took effect, but the tags applied to all volumes were the ones meant for the root volumes, which are slightly different.
I kept modifying the configuration until I arrived at the final version above. I had to run apply twice, because on the second-to-last run I managed to clear out all the tags and the root volumes were left untagged; that was fixed by the final apply. A sketch of the conflicting combination the documentation note warns about follows.
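
For reference, a minimal sketch (placeholder values, not the actual configuration) of the combination that the linked documentation note warns against: volume_tags on the instance manages tags on every attached volume, so mixing it with tags managed on a separately defined aws_ebs_volume attached to the same instance leads the two to overwrite each other across applies.

```hcl
# Minimal sketch (placeholder values) of the combination the documentation
# note warns about: volume_tags manages tags on every volume attached to the
# instance, so it conflicts with tags managed on the aws_ebs_volume resource.
resource "aws_instance" "conflicting" {
  ami           = "ami-0123456789abcdef0" # placeholder
  instance_type = "t3.micro"

  volume_tags = {
    Name = "conflicting-root"
    app  = "testing"
  }
}

resource "aws_ebs_volume" "conflicting_data" {
  availability_zone = "us-east-1a" # placeholder
  size              = 20

  tags = {
    Name = "conflicting-data" # fights with volume_tags above once attached
    app  = "testing"
  }
}

resource "aws_volume_attachment" "conflicting_data" {
  device_name = "/dev/sdh"
  volume_id   = aws_ebs_volume.conflicting_data.id
  instance_id = aws_instance.conflicting.id
}
```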

### Debug Output

_No response_

### Panic Output

_No response_

### Important Factoids

_No response_

### References

_No response_

### Would you like to implement a fix?

None
@marafa-sugarcrm marafa-sugarcrm added the bug Addresses a defect in current functionality. label Dec 8, 2023
@github-actions github-actions bot added the service/ebs Issues and PRs that pertain to the ebs service. label Dec 8, 2023

github-actions bot commented Dec 8, 2023

Community Note

Voting for Prioritization

  • Please vote on this issue by adding a 👍 reaction to the original post to help the community and maintainers prioritize this request.
  • Please see our prioritization guide for information on how we prioritize.
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request.

Volunteering to Work on This Issue

  • If you are interested in working on this issue, please leave a comment.
  • If this would be your first contribution, please review the contribution guide.

@github-actions github-actions bot added service/ec2 Issues and PRs that pertain to the ec2 service. service/ec2ebs Issues and PRs that pertain to the ec2ebs service. service/iam Issues and PRs that pertain to the iam service. service/vpc Issues and PRs that pertain to the vpc service. labels Dec 8, 2023
@terraform-aws-provider terraform-aws-provider bot added the needs-triage Waiting for first response or review from a maintainer. label Dec 8, 2023
@marafa-sugarcrm marafa-sugarcrm changed the title [Bug]: volumes and tags - not as expecte [Bug]: volumes and tags - not as expected Dec 8, 2023
@justinretzolk justinretzolk added tags Pertains to resource tagging. bug Addresses a defect in current functionality. and removed bug Addresses a defect in current functionality. service/iam Issues and PRs that pertain to the iam service. service/vpc Issues and PRs that pertain to the vpc service. service/ec2ebs Issues and PRs that pertain to the ec2ebs service. needs-triage Waiting for first response or review from a maintainer. labels Dec 12, 2023

ckinasch commented Jun 12, 2024

Fetching data from aws_ebs_volume returns an empty map of tags in ~> 5.49.
Reverting the provider to 5.48 fixed the problem. (A sketch of that kind of lookup is below.)
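
For illustration, a minimal sketch (the filter value is a placeholder) of an aws_ebs_volume data source lookup whose tags attribute would, per the comment above, come back empty on ~> 5.49:

```hcl
# Minimal sketch of an aws_ebs_volume data lookup whose tags attribute was
# reportedly empty on provider ~> 5.49 (filter value is a placeholder).
data "aws_ebs_volume" "example" {
  most_recent = true

  filter {
    name   = "tag:app"
    values = ["testing"]
  }
}

output "example_volume_tags" {
  value = data.aws_ebs_volume.example.tags
}
```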

@moritz-makandra

I was able to fix this issue for myself by not using a capital letter as the first character of the tag name (sketch below).
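
A minimal sketch of that workaround (size and availability zone are placeholders): tag keys that start with a lowercase letter.

```hcl
# Minimal sketch of the lowercase-first-character workaround described above
# (size and availability zone are placeholders).
resource "aws_ebs_volume" "lowercase_tags" {
  availability_zone = "us-east-1a"
  size              = 20

  tags = {
    name = "cluster-example" # lowercase "name" instead of "Name"
    app  = "testing"
  }
}
```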
