
spanmetrics grow indefinitely #5271

Closed

Description

What's wrong?

The number of spanmetrics series appears to grow indefinitely.

I think this might be related to #4614: maybe flushing is only set up when running in flow mode?
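
To quantify the growth, here is a minimal recording-rule sketch that counts the exported spanmetrics series over time. It assumes the Agent's default traces_spanmetrics metric namespace and that a Prometheus/Mimir ruler is available; adjust the selector if you override the namespace.

groups:
  - name: spanmetrics-cardinality
    rules:
      # Total number of spanmetrics series currently present; a steadily
      # rising value reproduces the behaviour described above.
      - record: spanmetrics:series:count
        expr: count({__name__=~"traces_spanmetrics_.+"})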

Steps to reproduce

Leave the Grafana Agent running with spanmetrics enabled for a few days.

System information

No response

Software version

0.36.1

Configuration

traces:
  configs:
  - name: default
    attributes:
      actions:
        - key: traces
          action: upsert
          value: root
    remote_write:
      - endpoint: tempo-prod-09-us-central2.grafana.net:443
        basic_auth:
          username: 123
          password_file: /etc/tempo/tempo-api-token
        sending_queue:
          queue_size: 50000
    receivers:
      otlp:
        protocols:
          grpc:
            keepalive:
              server_parameters:
                max_connection_idle: 2m
                max_connection_age: 10m
          http: null
    service_graphs:
      enabled: true
    spanmetrics:
      handler_endpoint: "0.0.0.0:8889"
      # https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/2fc0da01638047b471765ba7b13910e32d7abdf0/processor/servicegraphprocessor/processor.go#L47
      # default 2, 4, 6, 8, 10, 50, 100, 200, 400, 800, 1000, 1400, 2000, 5000, 10_000, 15_000
      latency_histogram_buckets: [5ms, 15ms, 35ms, 150ms, 250ms, 500ms, 1s, 5s, 30s]
      dimensions_cache_size: 1000
      dimensions:
      - name: http.status_code
      - name: net.peer.name

    scrape_configs:
      - bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        job_name: kubernetes-pods
        kubernetes_sd_configs:
          - role: pod
        relabel_configs:
          - action: replace
            source_labels:
              - __meta_kubernetes_namespace
            target_label: namespace
          - action: replace
            source_labels:
              - __meta_kubernetes_pod_name
            target_label: pod
          - action: replace
            source_labels:
              - __meta_kubernetes_pod_container_name
            target_label: container
        tls_config:
            ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
            insecure_skip_verify: false
    load_balancing:
      receiver_port: 8080
      exporter:
        insecure: true
      resolver:
        dns:
          hostname: grafana-agent-traces-headless
          port: 8080
    # see https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/processor/tailsamplingprocessor/README.md
    tail_sampling:
      policies:
        - type: probabilistic
          probabilistic:
            sampling_percentage: 10

logs:
  configs:
  - name: default
    positions:
      filename: /tmp/positions.yaml
    clients:
      - url: https://logs-prod-us-central2.grafana.net/loki/api/v1/push
        basic_auth:
          username: 123
          password_file: /etc/loki/loki-api-token
        external_labels:
          cluster: dev
server:
  log_level: info
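
As a stopgap while the flushing issue is open, a hedged sketch of one way to bound what reaches remote storage: scrape the spanmetrics handler_endpoint with a metrics instance and drop the highest-cardinality dimension before it is written out. The instance name, job name, and target address below are assumptions, not part of the setup above.

metrics:
  configs:
    - name: spanmetrics           # hypothetical instance name
      scrape_configs:
        - job_name: spanmetrics
          static_configs:
            - targets: ['localhost:8889']   # the handler_endpoint above
          metric_relabel_configs:
            # net.peer.name becomes net_peer_name in Prometheus; dropping it
            # removes the dimension most likely to be unbounded.
            - action: labeldrop
              regex: net_peer_name

Note that this only limits what is written to remote storage; the spanmetrics processor itself still accumulates series in memory, which is the growth described in this issue.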

Logs

No response


Metadata

Labels

bug: Something isn't working
frozen-due-to-age: Locked due to a period of inactivity. Please open new issues or PRs if more discussion is needed.