
Killed pod metrics still present on the Prometheus exporter #34105

Open
necipakca opened this issue Jul 16, 2024 · 4 comments
Assignees: dashpole
Labels: bug, exporter/prometheus, Stale

Comments


necipakca commented Jul 16, 2024

Component(s)

exporter/prometheus

What happened?

Description

Even though I have set metric_expiration to 1m, the Prometheus exporter still presents the old metrics, even for pods that were killed a couple of hours ago.
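
For reference, a minimal sketch of the exporter setting being described (values here are illustrative; the full configuration is further below):

exporters:
  prometheus:
    endpoint: "0.0.0.0:8889"
    # series that receive no updates within this window should no longer be exposed
    metric_expiration: 1m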

Collector version

otel/opentelemetry-collector-contrib:0.102.0

Environment information

Environment

K8s

OpenTelemetry Collector configuration

apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: collector-deployment
  namespace: my-ns
spec:
  mode: deployment
  podAnnotations:
    sidecar.istio.io/inject: "false"
    prometheus.io/port: "8889"
  replicas: 2
  resources:
    requests:
      memory: "128Mi"
      cpu: "250m"
    limits:
      memory: "1Gi"
      cpu: "1"
  config:
    receivers:
      otlp:
        protocols:
          grpc: {}
          http: {}
    processors:
      transform/drop:
        trace_statements:
          - context: span
            statements:
              - delete_key(resource.attributes, "process.command_args")
      memory_limiter:
        check_interval: 1s
        limit_percentage: 80
        spike_limit_percentage: 20
      batch: {}
      filter/drop_actuator:
        error_mode: ignore
        traces:
          span:
          - attributes["net.host.port"] == 9001
    connectors:
      spanmetrics:
        events:
          enabled: true
          dimensions:
            - name: exception.type
            - name: exception.message
    exporters:
      debug:
        verbosity: detailed
      otlp/jaeger:
        endpoint: "jaeger-collector.jaeger.svc.cluster.local:4317"
        tls:
          insecure: true
      prometheus:
        endpoint: "0.0.0.0:8889"
        metric_expiration: 80s
        enable_open_metrics: true
        add_metric_suffixes: true
        send_timestamps: true
        resource_to_telemetry_conversion:
          enabled: true
    extensions:
      health_check: {}
    service:
      telemetry:
        logs:
          level: "info"
      extensions: [health_check]
      pipelines:
        traces:
          receivers: [otlp]
          processors: [memory_limiter, transform/drop, filter/drop_actuator, batch]
          exporters: [spanmetrics, otlp/jaeger]
        metrics:
          receivers: [spanmetrics]
          processors: [memory_limiter, batch]
          exporters: [prometheus]

Log output

No response

Additional context

No response

necipakca added the bug and needs triage labels Jul 16, 2024
Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

@dashpole

Can you use the debug exporter to confirm that you aren't still receiving the metrics in question?
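
A sketch of how the metrics pipeline from the configuration above could additionally route to the already-declared debug exporter for this check (exporter and pipeline names are taken from the issue's configuration):

service:
  pipelines:
    metrics:
      receivers: [spanmetrics]
      processors: [memory_limiter, batch]
      # debug added alongside prometheus so incoming data points are logged by the collector
      exporters: [prometheus, debug]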

dashpole removed the needs triage label Aug 28, 2024
dashpole self-assigned this Aug 28, 2024
@dashpole

@jmichalek132


This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

github-actions bot added the Stale label Oct 28, 2024