Span export fails with spanmetrics connector in a pipeline #23151
Component(s)
connector/forward
What happened?
Description
Span export frequently fails if traces are forwarded by a connector to multiple (2 or more) pipelines containing span transformations.
Steps to Reproduce

See the collector pipeline configuration (a sketch is included under "OpenTelemetry Collector configuration" below). The main highlights are:

- an `otlp` receiver for incoming external data;
- a `traces` pipeline with the `otlp` receiver and 2 exporters: `otlp/spanlogs` (forwarding traces to an external service) and `forward/sanitize-metrics`;
- a `traces/sanitize` pipeline with the `forward/sanitize-metrics` receiver, a transform processor and the `spanmetrics` exporter;
- a metrics pipeline with the `spanmetrics` receiver, a routing processor and 2 `prometheusremotewrite` exporters.
Expected Result

The `forward/sanitize-metrics` pipeline is meant to sanitize the span resource record before it reaches the `spanmetrics` connector. Otherwise the connector internally creates a label for every span resource entry, disregarding all label config options, and those labels are then dropped along the way, resulting in metric collisions, counter resets and so on. Meanwhile, `otlp/spanlogs` should forward spans as is, without any additional processing.
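For illustration only, here is a minimal sketch (not the configuration attached to this report) of what such a sanitizing transform step could look like; the attribute keys are hypothetical:

```yaml
processors:
  transform/sanitize:
    trace_statements:
      - context: resource
        statements:
          # keep only an allow-list of resource attributes so the spanmetrics
          # connector does not turn every resource entry into a metric label
          - keep_keys(attributes, ["service.name", "service.namespace"])
```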
Actual Result

This configuration results in `otlp/spanlogs` exporter failures, probably because the transform processor in the `forward/sanitize-metrics` pipeline is executed during the export process, so it looks a lot like a race condition:

```
2023-05-04T11:04:44.014Z	error	exporterhelper/queued_retry.go:401	Exporting failed. The error is not retryable. Dropping data.	{"kind": "exporter", "data_type": "traces", "name": "otlp/spanlogs", "error": "Permanent error: rpc error: code = Internal desc = grpc: error unmarshalling request: proto: ExportTraceServiceRequest: illegal tag 0 (wire type 0)", "dropped_items": 564}
go.opentelemetry.io/collector/exporter/exporterhelper.(*retrySender).send
	go.opentelemetry.io/collector/exporter@v0.76.1/exporterhelper/queued_retry.go:401
go.opentelemetry.io/collector/exporter/exporterhelper.(*tracesExporterWithObservability).send
	go.opentelemetry.io/collector/exporter@v0.76.1/exporterhelper/traces.go:137
go.opentelemetry.io/collector/exporter/exporterhelper.(*queuedRetrySender).start.func1
	go.opentelemetry.io/collector/exporter@v0.76.1/exporterhelper/queued_retry.go:205
go.opentelemetry.io/collector/exporter/exporterhelper/internal.(*boundedMemoryQueue).StartConsumers.func1
	go.opentelemetry.io/collector/exporter@v0.76.1/exporterhelper/internal/bounded_memory_queue.go:58
```
Removing the `batch` processor from the `traces` pipeline reduces the amount of dropped spans by orders of magnitude, but there is still constant data loss; removing the transform processor from the `traces/sanitize` pipeline resolves the issue.
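As a sketch of that workaround (using the pipeline names from this report; component configurations omitted), the `traces/sanitize` pipeline with the transform processor removed, so spans are no longer mutated while `otlp/spanlogs` is exporting them:

```yaml
service:
  pipelines:
    traces/sanitize:
      receivers: [forward/sanitize-metrics]
      # transform processor removed as a workaround
      processors: []
      exporters: [spanmetrics]
```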
Collector version

0.77.0
Environment information
Environment
OpenTelemetry Collector configuration
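The configuration attached to the original report is not reproduced here. The following is a sketch of the topology described under Steps to Reproduce; the endpoints, the routing rule, and the transform statement are placeholders rather than values from the report:

```yaml
receivers:
  otlp:
    protocols:
      grpc:

connectors:
  forward/sanitize-metrics:
  spanmetrics:

processors:
  batch:
  transform/sanitize:
    trace_statements:
      - context: resource
        statements:
          - keep_keys(attributes, ["service.name"])   # placeholder sanitization
  routing:
    from_attribute: tenant                             # placeholder routing rule
    default_exporters: [prometheusremotewrite/a]
    table:
      - value: other
        exporters: [prometheusremotewrite/b]

exporters:
  otlp/spanlogs:
    endpoint: spanlogs.example.com:4317                # placeholder endpoint
  prometheusremotewrite/a:
    endpoint: https://prom-a.example.com/api/v1/write  # placeholder endpoint
  prometheusremotewrite/b:
    endpoint: https://prom-b.example.com/api/v1/write  # placeholder endpoint

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/spanlogs, forward/sanitize-metrics]
    traces/sanitize:
      receivers: [forward/sanitize-metrics]
      processors: [transform/sanitize]
      exporters: [spanmetrics]
    metrics:
      receivers: [spanmetrics]
      processors: [routing]
      exporters: [prometheusremotewrite/a, prometheusremotewrite/b]
```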
Log output
Additional context
No response