
[connector/spanmetrics] SpanMetrics & ServiceGraph should consider the startTimeStamp from the span data itself and not the timeStamp when the collector receives the span #36613

Open
meSATYA opened this issue Dec 2, 2024 · 4 comments


meSATYA commented Dec 2, 2024

Component(s)

connector/spanmetrics

What happened?

Description

Because the spanmetrics & servicegraph connectors use the timestamp at which the collector receives a span rather than the span's own start timestamp, the calls_total metric shows incorrect values when the collector pods are restarted. After a restart, the collector receives the backlog of spans that accumulated while it was down and counts them all at once, so the metric rises sharply even though the actual number of calls to the service did not increase.

Steps to Reproduce

Restart the collector pods and observe a spike in the calls_total metric even though the number of calls to the backend service did not actually increase. The collector is deployed as a StatefulSet and receives spans from another collector via the loadbalancing exporter, with the service name as the routing_key.

Expected Result

There should not be a spike in the rate(calls_total) metric when the collector is restarted.

Actual Result

A spike appears in the rate(calls_total) metric when the collector pods are restarted.

Collector version

0.114.0 or earlier

Environment information

Environment

OS: (e.g., "Ubuntu 20.04")
Compiler(if manually compiled): (e.g., "go 14.2")

OpenTelemetry Collector configuration

'''
exporters:
  debug:
    verbosity: basic
  loadbalancing/processor-traces-spnsgrh:
    protocol:
      otlp:
        timeout: 30s
        tls:
          insecure: true
    resolver:
      k8s:
        ports:
        - 4317
        service: spnsgrh-traces-otel-collector.processor-traces
    routing_key: service
extensions:
  health_check:
    endpoint: ${env:MY_POD_IP}:13133
processors:
  batch: {}
  memory_limiter:
    check_interval: 5s
    limit_percentage: 80
    spike_limit_percentage: 25
receivers:
  otlp/loadbalancer-traces-spnsgrh:
    protocols:
      http:
        cors:
          allowed_origins:
          - http://*
          - https://*
        endpoint: ${env:MY_POD_IP}:4318
        include_metadata: true
        max_request_body_size: 10485760
service:
  extensions:
  - health_check
  pipelines:
    traces/spnsgrh:
      exporters:
      - loadbalancing/processor-traces-spnsgrh
      processors:
      - batch
      receivers:
      - otlp/loadbalancer-traces-spnsgrh
    
  telemetry:
    metrics:
      address: ${env:MY_POD_IP}:8888
'''


'''
connectors:
  servicegraph:
    latency_histogram_buckets:
    - 100ms
    - 250ms
    - 500ms
    - 1s
    - 5s
    - 10s
    metrics_flush_interval: 30s
    store:
      max_items: 10
      ttl: 2s
  spanmetrics:
    aggregation_temporality: AGGREGATION_TEMPORALITY_CUMULATIVE
    dimensions:
    - name: http.method
    - name: http.status_code
    dimensions_cache_size: 1000
    events:
      dimensions:
      - name: exception.type
      enabled: true
    exclude_dimensions:
    - k8s.pod.uid
    - k8s.pod.name
    - k8s.container.name
    - k8s.deployment.name
    - k8s.deployment.uid
    - k8s.job.name
    - k8s.job.uid
    - k8s.namespace.name
    - k8s.node.name
    - k8s.pod.ip
    - k8s.pod.start_time
    - k8s.replicaset.name
    - k8s.replicaset.uid
    - azure.vm.scaleset.name
    - cloud.resource_id
    - host.id
    - host.type
    - instance
    - service.instance.id
    - host.name
    - job
    - dt.entity.host
    - dt.entity.process_group
    - dt.entity.process_group_instance
    - container.id
    exemplars:
      enabled: true
      max_per_data_point: 5
    histogram:
      explicit:
        buckets:
        - 1ms
        - 10ms
        - 20ms
        - 50ms
        - 100ms
        - 250ms
        - 500ms
        - 800ms
        - 1s
        - 2s
        - 5s
        - 10s
        - 15s
    metrics_expiration: 5m
    metrics_flush_interval: 1m
    namespace: span.metrics
    resource_metrics_key_attributes:
    - service.name
    - telemetry.sdk.language
    - telemetry.sdk.name
exporters:
  debug/servicegraph:
    verbosity: basic
  debug/spanmetrics:
    verbosity: basic
  otlphttp/vm-default-processor-servicegraph:
    compression: gzip
    encoding: proto
    endpoint: http://spnsgrh-victoria-metrics-cluster-vminsert.metrics.svc.cluster.local:8480/insert/20/opentelemetry
    timeout: 30s
    tls:
      insecure: true
  prometheusremotewrite/vm-default-processor-spanmetrics:
    compression: gzip
    endpoint: http://spnsgrh-victoria-metrics-cluster-vminsert.metrics.svc.cluster.local:8480/insert/10/prometheus
    resource_to_telemetry_conversion:
      enabled: true
    timeout: 60s
    tls:
      insecure_skip_verify: true
extensions:
  health_check:
    endpoint: ${env:MY_POD_IP}:13133
processors:
  batch: {}
  batch/servicegraph:
    send_batch_max_size: 5000
    send_batch_size: 4500
    timeout: 10s
  batch/spanmetrics:
    send_batch_max_size: 5000
    send_batch_size: 4500
    timeout: 10s
  memory_limiter:
    check_interval: 5s
    limit_percentage: 80
    spike_limit_percentage: 25
receivers:
  otlp/processor-traces-spansgrph:
    protocols:
      grpc:
        endpoint: ${env:MY_POD_IP}:4317
        max_recv_msg_size_mib: 12
      http:
        endpoint: ${env:MY_POD_IP}:4318
service:
  extensions:
  - health_check
  pipelines:
    metrics/servicegraph:
      exporters:
      - otlphttp/vm-default-processor-servicegraph
      processors:
      - batch/servicegraph
      receivers:
      - servicegraph
    metrics/spanmetrics:
      exporters:
      - prometheusremotewrite/vm-default-processor-spanmetrics
      processors:
      - batch/spanmetrics
      receivers:
      - spanmetrics
    traces/connector-pipeline:
      exporters:
      - spanmetrics
      - servicegraph
      processors:
      - batch
      receivers:
      - otlp/processor-traces-spansgrph
  telemetry:
    metrics:
      address: ${env:MY_POD_IP}:8888
'''

Log output

No response

Additional context

(two screenshots attached)
meSATYA added the bug (Something isn't working) and needs triage (New item requiring triage) labels on Dec 2, 2024.

github-actions bot commented Dec 2, 2024

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.


bacherfl commented Dec 6, 2024

(Triage): Removing the needs-triage label and adding waiting-for-codeowners, as the issue description seems clear and includes the necessary instructions and config to reproduce the problem.


bacherfl commented Dec 9, 2024

I just had a closer look at this, and I am not sure the spike can be avoided by considering the start timestamp of the received spans, because the calls_total metric is a simple counter of the spans the connector receives. Since this counter is maintained in memory, it starts at 0 after a restart and is then fed the spans that accumulated in the upstream collector while this one was unavailable, which would explain the spike after the restart.
QQ to the code owners: could this be mitigated by persisting the connector's metric values using a storage extension, similar to how the filelog receiver persists its read offset across collector restarts?
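
For comparison, here is a minimal sketch of the filelog mechanism referenced above, assuming the file_storage extension from opentelemetry-collector-contrib (the directory and log paths are illustrative). This only shows how the filelog receiver persists state today; it is not an existing option on the spanmetrics or servicegraph connectors.

'''
extensions:
  file_storage:
    directory: /var/lib/otelcol/file_storage  # illustrative path; must exist and be writable

receivers:
  filelog:
    include:
    - /var/log/app/*.log                      # illustrative path
    storage: file_storage                     # checkpoints (read offsets) survive collector restarts

exporters:
  debug:
    verbosity: basic

service:
  extensions:
  - file_storage
  pipelines:
    logs:
      receivers:
      - filelog
      exporters:
      - debug
'''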


meSATYA commented Jan 2, 2025

Hi, are there any plans to implement this kind of persistent counter for the spanmetrics & servicegraph connectors?
