
spanmetricsconnector metric has been increasing continuously #29604

Closed
shicli opened this issue Dec 1, 2023 · 4 comments
Labels
bug Something isn't working connector/spanmetrics needs triage New item requiring triage

Comments

@shicli

shicli commented Dec 1, 2023

Component(s)

connector/spanmetrics

What happened?

Description

I am using the spanmetrics connector to generate metrics from spans, and I have some doubts about the duration metric. Once generated, a duration series exists forever: even when its value stops changing, it is still collected into Prometheus, so the amount of data sent to Prometheus keeps growing with each flush. Even after a long time, old duration series still exist. Is this normal behavior?
If it is, then over a long period there will be a very large number of RED metric series, which puts significant pressure on both the Collector and Prometheus. What should be done about this? Are the RED metrics meant to be kept in memory permanently?

Steps to Reproduce

Use the spanmetrics connector config below, reference the connector as an exporter in the traces pipeline, and receive from it in the metrics pipeline.

connectors:
  spanmetrics:
    histogram:
      explicit:
        buckets: [100us, 1ms, 2ms, 6ms, 10ms, 100ms, 250ms]
    dimensions:
      - name: http.method
        default: GET
      - name: http.status_code
    exemplars:
      enabled: true
    exclude_dimensions: ['status.code']
    dimensions_cache_size: 1000
    aggregation_temporality: "AGGREGATION_TEMPORALITY_CUMULATIVE"
    metrics_flush_interval: 15s
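For context on the behavior described above: with cumulative temporality, the connector keeps every series it has ever generated in its store and re-exports all of them on each flush, so the export volume grows with series cardinality. A sketch of two possible mitigations follows; this is an editorial illustration, not part of the original report, and the exact option names and their availability depend on the connector version, so verify them against the spanmetrics connector README before use:

```yaml
connectors:
  spanmetrics:
    histogram:
      explicit:
        buckets: [100us, 1ms, 2ms, 6ms, 10ms, 100ms, 250ms]
    # Option 1 (assumed available in newer releases): drop series that have
    # received no new spans for this long, instead of keeping them forever.
    metrics_expiration: 5m
    # Option 2: switch to delta temporality so each flush only carries
    # activity since the previous flush. Note that some backends (including
    # Prometheus remote write) expect cumulative data, so delta may require
    # conversion downstream.
    # aggregation_temporality: "AGGREGATION_TEMPORALITY_DELTA"
    metrics_flush_interval: 15s
```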

Expected Result

The calls and duration_count values keep increasing indefinitely.

Collector version

0.81

Environment information

Environment

OS: (e.g., "Ubuntu 20.04")
Compiler(if manually compiled): (e.g., "go 14.2")

OpenTelemetry Collector configuration

receivers:
  nop:

exporters:
  prometheusremotewrite:
    endpoint: http://localhost:9090/api/v1/write
    target_info:
      enabled: true

connectors:
  spanmetrics:
    histogram:
      explicit:
        buckets: [100us, 1ms, 2ms, 6ms, 10ms, 100ms, 250ms]
    dimensions:
      - name: http.method
        default: GET
      - name: http.status_code
    exemplars:
      enabled: true
    exclude_dimensions: ['status.code']
    dimensions_cache_size: 1000
    aggregation_temporality: "AGGREGATION_TEMPORALITY_CUMULATIVE"    
    metrics_flush_interval: 60s 

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [spanmetrics]
    metrics:
      receivers: [spanmetrics]
      exporters: [prometheusremotewrite]

Log output

No response

Additional context

No response

@shicli shicli added bug Something isn't working needs triage New item requiring triage labels Dec 1, 2023
Contributor

github-actions bot commented Dec 1, 2023

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

@sakulali
Contributor

sakulali commented Dec 1, 2023

Hello @shicli, it seems similar to connector/spanmetrics?

@crobert-1
Member

Thanks for the reference @sakulali, it looks like a duplicate of #27654.

Please let us know if you believe this is a different issue, @shicli!

@shicli
Author

shicli commented Jun 24, 2024

@crobert-1 @matej-g This is indeed the problem I encountered. Thx

Projects
None yet
Development

No branches or pull requests

3 participants