metric otelcol_processor_tail_sampling_count_traces_sampled showing wrong values based on policy, sampled labels #27567
Comments
Pinging code owners for processor/tailsampling: @jpkrohling. See Adding Labels via Comments if you do not have permissions to add labels yourself.

/label processor/tailsampling
Oh, I guess this is a duplicate of #25882 🤦

@0x006EA1E5 Should we close this issue as a duplicate, or do you want to re-test on a newer release?

Looks like this is already fixed in 0.87.0, but I will check on Monday if that's okay?

closing as fixed
Component(s)
processor/tailsampling
What happened?

Description

For the tail sampling processor, the metric `otelcol_processor_tail_sampling_count_traces_sampled` reports the same value for every policy. I believe it should count `sampled=true` or `false` separately for each policy, indicating whether that policy matched a given trace.

Steps to Reproduce
1. Create a config file for the collector, with the `tail_sampling` processor configured with two distinct policies. For example, one for `error` traces and one for high latency traces. Also enable the Prometheus scrape endpoint; something like the config shown in the "OpenTelemetry Collector configuration" section below.
2. Send a low latency, `error` trace.
3. Check the metrics endpoint, e.g. `curl localhost:8888/metrics | grep otelcol_processor_tail_sampling_count_traces_sampled`. You might have to wait a while to see something (the `decision_wait` time?). Eventually you should see two lines, similar to the example after these steps. Note that for both policies `sampled="true"` and the count is `1`, even though only one policy should have matched.
4. Now send a "high latency" trace that should match the other policy. When the metrics update, both policies (with label `sampled="true"`) will read `2`.
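For reference, the two lines mentioned in step 3 would look roughly like this in the Prometheus scrape output (the additional resource labels the collector attaches are omitted here):

```
otelcol_processor_tail_sampling_count_traces_sampled{policy="sample-all-error-traces",sampled="true"} 1
otelcol_processor_tail_sampling_count_traces_sampled{policy="sample-all-high-latency",sampled="true"} 1
```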
Expected Result

In the above scenario, when we first send only a single error trace, we should get the metric for `policy="sample-all-error-traces",sampled="true"` to be `1`, and `policy="sample-all-high-latency",sampled="false"` (note `sampled` is `false`) to be `1`.

Then, when we send a high latency trace, we should see 4 lines, two for each policy, with `sampled="true"` and `sampled="false"`, as sketched below.
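A sketch of the expected scrape output after both traces have been sent (values follow the description above; additional labels omitted):

```
otelcol_processor_tail_sampling_count_traces_sampled{policy="sample-all-error-traces",sampled="true"} 1
otelcol_processor_tail_sampling_count_traces_sampled{policy="sample-all-error-traces",sampled="false"} 1
otelcol_processor_tail_sampling_count_traces_sampled{policy="sample-all-high-latency",sampled="true"} 1
otelcol_processor_tail_sampling_count_traces_sampled{policy="sample-all-high-latency",sampled="false"} 1
```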
Actual Result

For every trace that matches any policy (i.e., the trace is sampled), the counter with `sampled="true"` increments for all policies.

Likewise, for every trace that doesn't match any policy (i.e., the trace is not sampled), the counter with `sampled="false"` increments for all policies.
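Under that behavior, the output actually observed after both traces in the reproduction steps would instead be roughly:

```
otelcol_processor_tail_sampling_count_traces_sampled{policy="sample-all-error-traces",sampled="true"} 2
otelcol_processor_tail_sampling_count_traces_sampled{policy="sample-all-high-latency",sampled="true"} 2
```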
Collector version
0.86.0
Environment information
Environment
Docker image `otel/opentelemetry-collector:0.86.0@sha256:b8733b43d9061e3f332817f9f373ba5bf59803e67edfc4e70f280cb0afb49dd5`
OpenTelemetry Collector configuration
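The configuration attached to the original report is not reproduced here. A minimal sketch of a config matching the described setup might look like the following; the policy names follow the metric labels above, while the policy types, thresholds, `decision_wait`, and the receiver/exporter choices are assumptions:

```yaml
receivers:
  otlp:
    protocols:
      grpc:

processors:
  tail_sampling:
    decision_wait: 10s
    policies:
      # Sample traces that contain an error
      - name: sample-all-error-traces
        type: status_code
        status_code:
          status_codes: [ERROR]
      # Sample traces slower than 5s
      - name: sample-all-high-latency
        type: latency
        latency:
          threshold_ms: 5000

exporters:
  logging:

service:
  telemetry:
    metrics:
      # Prometheus scrape endpoint used in the reproduction steps
      address: 0.0.0.0:8888
  pipelines:
    traces:
      receivers: [otlp]
      processors: [tail_sampling]
      exporters: [logging]
```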
Log output
No response
Additional context
No response