[connector/spanmetrics] SpanMetrics & ServiceGraph should consider the startTimeStamp from the span data itself and not the timeStamp when the collector receives the span
#36613
Open
meSATYA opened this issue Dec 2, 2024 · 4 comments
As the spanmetrics & servicegraph conenctors consider the timestamp of the spans when the collector receives it, the calls_total metric shows incorrect values when the collector pods are restarted. This is because the collector finds that the number of spans it receives it increased because of the more number of spans being sent to the collector when it is restarted. But, this is not the right indication of the increase in number of calls to a particular service.
Steps to Reproduce
Restart the collector pods and observe a spike in the calls_total metric even though the calls to the backend service did not actually increase. The collector is deployed as a StatefulSet and receives spans from another collector via the loadbalancing exporter with "service.name" as the routing_key (an illustrative configuration sketch follows after the results below).
Expected Result
There shouldn't be any spike in the rate(calls_total) metric when the collector is restarted.
Actual Result
A spike appears in the rate(calls_total) metric when the collector pods are restarted.
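For context, here is a minimal sketch of the kind of two-tier setup described above. It is not the reporter's actual configuration (which was not included); the headless service hostname and the Prometheus endpoint are assumed for illustration.

```yaml
# Tier 1: receives traces and fans them out to the second tier by service name.
receivers:
  otlp:
    protocols:
      grpc:

exporters:
  loadbalancing:
    routing_key: service          # route spans by service name
    protocol:
      otlp:
        tls:
          insecure: true
    resolver:
      dns:
        hostname: otel-collector-headless.example.svc.cluster.local  # assumed headless service

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [loadbalancing]
---
# Tier 2 (StatefulSet): generates calls_total from the spans it receives.
receivers:
  otlp:
    protocols:
      grpc:

connectors:
  spanmetrics:

exporters:
  prometheus:
    endpoint: "0.0.0.0:8889"

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [spanmetrics]
    metrics:
      receivers: [spanmetrics]
      exporters: [prometheus]
```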
(Triage): Removing the needs-triage label and adding waiting-for-codeowners, as the issue description seems clear and includes the necessary instructions and configuration to reproduce the problem.
I just had a closer look at this, and I am not sure the spike can be avoided by considering the start timestamp of the received traces, because the calls_total metric is a simple counter of the spans the connector has received. Since this counter is maintained in memory, it starts at 0 after a restart and then receives the traces that the other collector accumulated while this one was unavailable, which would explain the spike after the restart.
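To make that concrete with assumed numbers: if the upstream collector buffers a backlog of $B$ spans while this collector is down and then flushes them within a single rate window $\Delta t$, the computed rate is roughly

$$
\mathrm{rate}(\texttt{calls\_total}) \approx \frac{r\,\Delta t + B}{\Delta t} = r + \frac{B}{\Delta t},
$$

where $r$ is the steady call rate, so the spike scales with the size of the backlog rather than with the timestamps attached to the data points.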
QQ to the code owners: could this be mitigated by persisting the connector's metric values using a storage extension, similar to how the filelogreceiver can persist its read offset across restarts of the collector?
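For reference, a minimal sketch of the existing pattern this refers to, with assumed paths: the filelog receiver can point at a file_storage extension to persist its offsets across restarts. As far as this thread suggests, the spanmetrics connector has no equivalent option today; whether it could is the question above.

```yaml
extensions:
  file_storage:
    directory: /var/lib/otelcol/storage     # assumed path, must be writable

receivers:
  filelog:
    include: [/var/log/app/*.log]           # assumed path
    storage: file_storage                   # read offsets survive collector restarts

exporters:
  debug:

service:
  extensions: [file_storage]
  pipelines:
    logs:
      receivers: [filelog]
      exporters: [debug]
```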
Component(s)
connector/spanmetrics
Collector version
0.114.0 or earlier
Environment information
Environment
OS: (e.g., "Ubuntu 20.04")
Compiler(if manually compiled): (e.g., "go 14.2")
OpenTelemetry Collector configuration
Log output
No response
Additional context