Set sensitive reveals sensitive values #1464

Open
paymog opened this issue Aug 21, 2024 · 0 comments

Terraform, Provider, Kubernetes and Helm Versions

Terraform version: 1.9.3
Provider version: 2.14
Kubernetes version: 1.28

Affected Resource(s)

  • helm_release

Terraform Configuration Files

I'm using Terraform CDK (CDKTF). I have the following code:

    new Release(this, "datadog-agent", {
      chart: "datadog",
      name: "datadog",
      repository: "https://helm.datadoghq.com",
      version: "3.60.0",
      timeout: 600,
      setSensitive: [
        {
          name: "datadog.apiKey",
          value: Token.asString(process.env.DATADOG_API_KEY),
        },
        {
          name: "clusterAgent.confd.postgres\\.yaml",
          value: postgresYaml,
        },
        {
          name: "clusterAgent.confd.openmetrics\\.yaml",
          value: openmetricsYaml,
        },
      ],
      values: [renderedValues],
    });
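As an aside on the key syntax above: the double backslash in names like `clusterAgent.confd.postgres\\.yaml` is a TypeScript string escape producing `\.`, which is Helm's `--set` syntax for a literal dot inside a key (as opposed to a path separator). The helpers below are hypothetical (not part of CDKTF or the provider) and only illustrate how such a path addresses the nested values map:

```typescript
// Hypothetical sketch of Helm-style --set path semantics:
// "a.b\.c" addresses key "b.c" under "a" (backslash escapes a literal dot).
function splitSetPath(path: string): string[] {
  const parts: string[] = [];
  let current = "";
  for (let i = 0; i < path.length; i++) {
    if (path[i] === "\\" && path[i + 1] === ".") {
      current += "."; // escaped dot: literal character, not a separator
      i++;
    } else if (path[i] === ".") {
      parts.push(current); // unescaped dot: descend one level
      current = "";
    } else {
      current += path[i];
    }
  }
  parts.push(current);
  return parts;
}

// Assign a value at the nested location named by a --set style path.
function setPath(obj: Record<string, any>, path: string, value: unknown): void {
  const keys = splitSetPath(path);
  let node = obj;
  for (const key of keys.slice(0, -1)) {
    node = node[key] ??= {};
  }
  node[keys[keys.length - 1]] = value;
}

// The TS literal "clusterAgent.confd.postgres\\.yaml" (i.e. the path
// clusterAgent.confd.postgres\.yaml) lands at confd["postgres.yaml"]:
const valuesMap: Record<string, any> = {};
setPath(valuesMap, "clusterAgent.confd.postgres\\.yaml", "<postgres yaml>");
// valuesMap.clusterAgent.confd["postgres.yaml"] === "<postgres yaml>"
```

So the three `setSensitive` entries each target one leaf of the merged values structure, with the escaped names landing under `clusterAgent.confd` as the literal keys `postgres.yaml` and `openmetrics.yaml`.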

Debug Output

Hesitant to attach debug output because I'm seeing sensitive values leak into it.

Panic Output

Steps to Reproduce

  1. create a helm release for the datadog agent
  2. set some sensitive values
  3. modify something that gets merged with the sensitive values
  4. see the sensitive values leak

Expected Behavior

The sensitive values should never leak

Actual Behavior

The sensitive values do leak

Important Factoids

I have the following CDKTF TypeScript code, where postgresYaml is a JS object that I render to a YAML string and set as a sensitive value:

    new Release(this, "datadog-agent", {
      chart: "datadog",
      name: "datadog",
      repository: "https://helm.datadoghq.com",
      version: "3.60.0",
      timeout: 600,
      setSensitive: [
        {
          name: "datadog.apiKey",
          value: Token.asString(process.env.DATADOG_API_KEY),
        },
        {
          name: "clusterAgent.confd.postgres\\.yaml",
          value: postgresYaml,
        },
        {
          name: "clusterAgent.confd.openmetrics\\.yaml",
          value: openmetricsYaml,
        },
      ],
      values: [renderedValues],
    });
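To make the rendering step above concrete: postgresYaml and openmetricsYaml start life as plain JS objects and are serialized to YAML strings before being handed to setSensitive. The real code presumably uses a YAML library; the hand-rolled serializer below is purely an illustrative sketch of that step:

```typescript
// Illustrative only: a minimal JS-object -> YAML-string serializer for
// plain nested objects, arrays, and scalars (no quoting/anchors/etc.).
function toYaml(value: unknown, indent = 0): string {
  const pad = "  ".repeat(indent);
  if (Array.isArray(value)) {
    return value
      .map((item) =>
        typeof item === "object" && item !== null
          ? `${pad}- ` + toYaml(item, indent + 1).trimStart() // first key on dash line
          : `${pad}- ${item}`
      )
      .join("\n");
  }
  if (typeof value === "object" && value !== null) {
    return Object.entries(value)
      .map(([k, v]) =>
        typeof v === "object" && v !== null
          ? `${pad}${k}:\n${toYaml(v, indent + 1)}` // nested block
          : `${pad}${k}: ${v}` // scalar leaf
      )
      .join("\n");
  }
  return `${pad}${value}`;
}

// A shape like the issue's postgres check config:
const postgresYaml = toYaml({
  cluster_check: true,
  init_config: { propagate_agent_tags: true },
});
// "cluster_check: true\ninit_config:\n  propagate_agent_tags: true"
```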

The renderedValues input is a YAML file that I read in at runtime. Inside that file I have the following block:

clusterChecksRunner:
  enabled: true
  image:
    tag: 7.55.0

I find that if I change clusterChecksRunner.image.tag, all of my sensitive values get leaked when doing a plan or apply, like so:

      - clusterAgent = {
          - confd = {
              - "openmetrics.yaml" = <<-EOT
                    cluster_check: true
                    instances:
                      - prometheus_url: >-
                          <stuff>
                        min_collection_interval: 30
                        timeout: 30
                        namespace: redpanda
                        tags:
                          - env:dev
                          - service:redpanda
                        send_distribution_buckets: true
                        collect_counters_with_distributions: true
                        max_returned_metrics: 6000
                        metrics:
                          - redpanda_cluster_brokers: cluster.brokers
                          - redpanda_cluster_partitions: cluster.partitions
                          - redpanda_cluster_topics: cluster.topics
                          - redpanda_rpc_active_connections: cluster.rpc.active_connections
                          - redpanda_rpc_request_errors_total: cluster.rpc.request.errors
                          - redpanda_rpc_request_latency_seconds: cluster.rpc.request.latency
                          - redpanda_cluster_unavailable_partitions: cluster.unavailable_partitions
                          - redpanda_storage_disk_free_bytes: broker.disk.free
                          - redpanda_storage_disk_total_bytes: broker.disk.total
                          - redpanda_kafka_request_latency_seconds: broker.request.latency
                          - redpanda_cpu_busy_seconds_total: broker.cpu_seconds.total
                          - redpanda_memory_allocated_memory: broker.memory.allocated_memory
                          - redpanda_memory_available_memory: broker.memory.available_memory
                          - redpanda_memory_free_memory: broker.memory.free_memory
                          - redpanda_kafka_request_bytes_total: topic.request.bytes
                          - redpanda_schema_registry_request_errors_total: schema_registry.request.errors
                          - redpanda_schema_registry_request_latency_seconds: schema_registry.request.latency
                EOT
              - "postgres.yaml"    = <<-EOT
                    cluster_check: true
                    init_config:
                      propagate_agent_tags: true
                    instances:
                      - dbm: true
                        host: <stuff>
                        port: 5432
                        username: datadog
                        password: <stuff>
                        database_autodiscovery:
                          enabled: true
                    .............

Note that both openmetrics.yaml and postgres.yaml are meant to be sensitive, yet they're being leaked.

I suspect changing any other value in the file referenced by renderedValues will also trigger this leak. It seems that during the plan phase Terraform fetches the full set of values currently in use in the Kubernetes cluster and shows the diff against them, which defeats the purpose of set_sensitive.
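A rough model of why the whole map shows up in the diff: the values files and the set/set_sensitive overrides get merged into a single computed map, so the plan diff is taken over that merged structure rather than over only the keys that changed. The sketch below is illustrative (simplified merge, hypothetical names), not the provider's actual implementation:

```typescript
type Values = { [key: string]: any };

// Simplified deep merge: overlay entries win; nested objects merge recursively.
function deepMerge(base: Values, overlay: Values): Values {
  const out: Values = { ...base };
  for (const [k, v] of Object.entries(overlay)) {
    out[k] =
      typeof v === "object" && v !== null && !Array.isArray(v) &&
      typeof out[k] === "object" && out[k] !== null
        ? deepMerge(out[k], v)
        : v;
  }
  return out;
}

// The non-sensitive part the user edits (from the values file):
const renderedValues: Values = {
  clusterChecksRunner: { enabled: true, image: { tag: "7.55.0" } },
};

// set_sensitive entries, merged on top with higher precedence:
const sensitiveOverlay: Values = {
  clusterAgent: { confd: { "postgres.yaml": "<sensitive yaml>" } },
};

const merged = deepMerge(renderedValues, sensitiveOverlay);
// Changing only clusterChecksRunner.image.tag still produces a diff against
// `merged`, which contains the sensitive entries — matching the leak above.
```

Under this model, nothing ties sensitivity to the individual leaves once they are merged, which is consistent with the entire map being rendered in the plan output.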

This also happens when running terraform plan directly, without the cdktf CLI.

References

#1376

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment