
CloudFront distribution always shows to be updated when origin_shield is added #24323

Open
sivanovhm opened this issue Apr 20, 2022 · 10 comments
Labels
bug Addresses a defect in current functionality. service/cloudfront Issues and PRs that pertain to the cloudfront service.

Comments

@sivanovhm

sivanovhm commented Apr 20, 2022

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform CLI and Terraform AWS Provider Version

Terraform v1.1.7
on linux_amd64
+ provider registry.terraform.io/hashicorp/archive v2.2.0
+ provider registry.terraform.io/hashicorp/aws v4.10.0
+ provider registry.terraform.io/hashicorp/external v2.2.2
+ provider registry.terraform.io/hashicorp/null v3.1.1
+ provider registry.terraform.io/hashicorp/random v3.1.2
+ provider registry.terraform.io/hashicorp/template v2.2.0
+ provider registry.terraform.io/integrations/github v4.11.0
+ provider registry.terraform.io/pagerduty/pagerduty v2.4.0

Affected Resource(s)

  • aws_cloudfront_distribution

Terraform Configuration Files

Full aws_cloudfront_distribution configuration

Expected Behavior

Performing a terraform plan after origin_shield has already been added via Terraform should not mark aws_cloudfront_distribution for an in-place update. Likewise when it is set to false.

Actual Behavior

Running terraform plan after origin_shield has already been added via Terraform still shows aws_cloudfront_distribution to be updated in place. Likewise, if it is set to false, the plan still shows it being set to false again:

      + origin {
          + connection_attempts = 1
          + connection_timeout  = 10
          + domain_name         = "some-alb-domain.com"
          + origin_id           = "alb-origin"

          + custom_origin_config {
              + http_port                = 80
              + https_port               = 443
              + origin_keepalive_timeout = 5
              + origin_protocol_policy   = "https-only"
              + origin_read_timeout      = 60
              + origin_ssl_protocols     = [
                  + "TLSv1.2",
                ]
            }

          + origin_shield {
              + enabled              = false
              + origin_shield_region = "eu-west-1"
            }
        }
      - origin {
          - connection_attempts = 1 -> null
          - connection_timeout  = 10 -> null
          - domain_name         = "some-alb-domain.com" -> null
          - origin_id           = "alb-origin" -> null

          - custom_origin_config {
              - http_port                = 80 -> null
              - https_port               = 443 -> null
              - origin_keepalive_timeout = 5 -> null
              - origin_protocol_policy   = "https-only" -> null
              - origin_read_timeout      = 60 -> null
              - origin_ssl_protocols     = [
                  - "TLSv1.2",
                ] -> null
            }
        }

Moreover, when we tried to work around this issue with a dynamic block:

    dynamic "origin_shield" {
      for_each = var.enable_cloudfront_origin_shield ? [1] : []
      content {
        enabled              = true
        origin_shield_region = data.aws_region.current.name
      }
    }

We observed the following:

  • Repeated in-place updates stop
  • However, once origin_shield is set to true and var.enable_cloudfront_origin_shield is then changed to false, Terraform plans the removal of origin_shield, but in fact nothing happens and origin_shield stays enabled (when checked in the AWS Console).
      - origin {
          - connection_attempts = 1 -> null
          - connection_timeout  = 10 -> null
          - domain_name         = "some-alb-domain.com" -> null
          - origin_id           = "alb-origin" -> null

          - custom_origin_config {
              - http_port                = 80 -> null
              - https_port               = 443 -> null
              - origin_keepalive_timeout = 5 -> null
              - origin_protocol_policy   = "https-only" -> null
              - origin_read_timeout      = 60 -> null
              - origin_ssl_protocols     = [
                  - "TLSv1.2",
                ] -> null
            }

          - origin_shield {
              - enabled              = true -> null
              - origin_shield_region = "eu-central-1" -> null
            }
        }
      + origin {
          + connection_attempts = 1
          + connection_timeout  = 10
          + domain_name         = "some-alb-domain.com"
          + origin_id           = "alb-origin"

          + custom_origin_config {
              + http_port                = 80
              + https_port               = 443
              + origin_keepalive_timeout = 5
              + origin_protocol_policy   = "https-only"
              + origin_read_timeout      = 60
              + origin_ssl_protocols     = [
                  + "TLSv1.2",
                ]
            }
        }

Steps to Reproduce

  1. Add
    origin_shield {
      enabled              = var.enable_cloudfront_origin_shield
      origin_shield_region = data.aws_region.current.name
    }

to your aws_cloudfront_distribution resource for your ALB origin where enable_cloudfront_origin_shield is a boolean variable.
2. terraform plan
3. terraform apply
4. terraform plan
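
For context, a minimal standalone configuration along these lines can reproduce the plan diff. This is a sketch, not the reporter's actual config: the resource name, ALB domain, and cache settings are illustrative placeholders.

```hcl
# Hypothetical minimal reproduction; domain, IDs, and cache settings are placeholders.
variable "enable_cloudfront_origin_shield" {
  type    = bool
  default = true
}

data "aws_region" "current" {}

resource "aws_cloudfront_distribution" "example" {
  enabled = true

  origin {
    domain_name = "some-alb-domain.com"
    origin_id   = "alb-origin"

    custom_origin_config {
      http_port              = 80
      https_port             = 443
      origin_protocol_policy = "https-only"
      origin_ssl_protocols   = ["TLSv1.2"]
    }

    origin_shield {
      enabled              = var.enable_cloudfront_origin_shield
      origin_shield_region = data.aws_region.current.name
    }
  }

  default_cache_behavior {
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]
    target_origin_id       = "alb-origin"
    viewer_protocol_policy = "redirect-to-https"

    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}
```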

References

@github-actions github-actions bot added needs-triage Waiting for first response or review from a maintainer. service/cloudfront Issues and PRs that pertain to the cloudfront service. labels Apr 20, 2022
@justinretzolk
Member

Hey @sivanovhm 👋 Thank you for taking the time to raise this. I'm mostly acting in triaging this issue, but I noticed one callout in the aws_cloudfront_distribution documentation:

CloudFront distributions take about 15 minutes to reach a deployed state after creation or modification.

That note was more about deletion after creation/modification, but I'm wondering whether you may be hitting some eventual-consistency issues here. If you wait 15 minutes or so after running an apply, does the same issue persist?

@justinretzolk justinretzolk added bug Addresses a defect in current functionality. waiting-response Maintainers are waiting on response from community or contributor. and removed needs-triage Waiting for first response or review from a maintainer. labels Apr 20, 2022
@sivanovhm
Author

sivanovhm commented Apr 21, 2022

Hey @justinretzolk, I confirm that even after 15 minutes (waited 60 minutes, just in case), origin shield still shows as enabled on the distribution.

Please note that this is only when a dynamic block is used.

If used "normally" with enabled = false, there is no problem disabling Origin Shield for the distribution.
The main goal of this issue is to stop the repeated in-place updates that Terraform shows on plan.

I believe these are most likely two separate problems caused by the same thing.

@github-actions github-actions bot removed the waiting-response Maintainers are waiting on response from community or contributor. label Apr 21, 2022
@tom10271

tom10271 commented Feb 8, 2023

In our case, Terraform always decides that default_ttl and max_ttl need to be updated:

# aws_cloudfront_distribution.image-handler-cdn will be updated in-place
  ~ resource "aws_cloudfront_distribution" "image-handler-cdn" {
        id                             = "E3TE5L31UZX1IO"
        tags                           = {}
        # (18 unchanged attributes hidden)

      ~ ordered_cache_behavior {
          ~ default_ttl            = 0 -> 86400
          ~ max_ttl                = 0 -> 31536000
            # (11 unchanged attributes hidden)

            # (1 unchanged block hidden)
        }

        # (4 unchanged blocks hidden)
    }

@Rorkal

Rorkal commented Aug 3, 2023

I had the same issue as @tom10271.

In my case my ordered_cache_behavior was misconfigured:

  • I had cache_policy_id pointing to Managed-CachingDisabled
  • And default_ttl and max_ttl with values different from 0

If you are using Managed-CachingDisabled, just set default_ttl and max_ttl to 0.
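
A sketch of what that correction looks like; the path pattern, methods, and origin ID here are placeholders, and the managed policy ID is looked up via the aws_cloudfront_cache_policy data source rather than hard-coded:

```hcl
# Look up the AWS-managed policy instead of hard-coding its ID.
data "aws_cloudfront_cache_policy" "caching_disabled" {
  name = "Managed-CachingDisabled"
}

# Inside the aws_cloudfront_distribution resource; all values below are examples.
ordered_cache_behavior {
  path_pattern           = "/api/*"
  allowed_methods        = ["GET", "HEAD"]
  cached_methods         = ["GET", "HEAD"]
  target_origin_id       = "alb-origin"
  viewer_protocol_policy = "redirect-to-https"
  cache_policy_id        = data.aws_cloudfront_cache_policy.caching_disabled.id

  # Zeroed to match Managed-CachingDisabled, so plans stop showing TTL drift.
  min_ttl     = 0
  default_ttl = 0
  max_ttl     = 0
}
```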

@tom10271

tom10271 commented Aug 3, 2023

My finding is that if you are using a cache policy, you don't need to specify the TTLs at all; just delete them.

@jakubjakubeuvic

Any update on that?

@kwn

kwn commented Jul 22, 2024

My finding is that if you are using a cache policy, you don't need to specify the TTLs at all; just delete them.

Unless you prefer to control it... The default TTL for Managed-CachingOptimized is 1 day, which might be too short in some cases.

@tom10271

My finding is that if you are using a cache policy, you don't need to specify the TTLs at all; just delete them.

Unless you prefer to control it... The default TTL for Managed-CachingOptimized is 1 day, which might be too short in some cases.

No genius, the point is that if you want to set the TTL, you should set it in the cache policy, not on the CloudFront distribution.

@kwn

kwn commented Jul 22, 2024

No genius, the point is that if you want to set the TTL, you should set it in the cache policy, not on the CloudFront distribution.

Why would I create and maintain my own policy if I can just override the default values of the AWS-managed one? Fewer resources to maintain, fewer references to pass between modules, and less complexity are definitely worth it.

@tom10271

tom10271 commented Jul 22, 2024

No genius, the point is that if you want to set the TTL, you should set it in the cache policy, not on the CloudFront distribution.

Why would I create and maintain my own policy if I can just override default values of the AWS managed one? Less resources to maintain, less references to pass between modules, less complexity is definitely worth it.

The reason is extremely simple: there is no input field to set the TTL at all when editing a CloudFront distribution behaviour that uses a cache policy. This is how AWS works. You might say Terraform allows it, but that is misleading: AWS simply does not let you set the TTL at the distribution level; the TTL is declared in the cache policy only. And yes, if you are not happy with the default TTL of 86400 seconds, you have to create your own cache policy.
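
A sketch of such a custom cache policy; the name and TTL values are illustrative, and the cache-key settings would need to match your actual caching requirements:

```hcl
# Hypothetical custom cache policy carrying the TTLs; all values are examples.
resource "aws_cloudfront_cache_policy" "long_lived" {
  name        = "long-lived-caching"
  min_ttl     = 0
  default_ttl = 604800   # 7 days instead of the managed policy's 1 day
  max_ttl     = 31536000 # 1 year

  parameters_in_cache_key_and_forwarded_to_origin {
    cookies_config {
      cookie_behavior = "none"
    }
    headers_config {
      header_behavior = "none"
    }
    query_strings_config {
      query_string_behavior = "none"
    }
  }
}

# Then reference it from the behavior instead of setting TTLs there:
#   cache_policy_id = aws_cloudfront_cache_policy.long_lived.id
```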


6 participants