Grouped metric type "counter" silently dropped #34263
ole-kaas added the bug (Something isn't working) and needs triage (New item requiring triage) labels on Jul 26, 2024
Pinging code owners:
Can you paste the full metric output from the endpoint? We can use that to run it as a unit test. Can you enable debug logging as well to see if you get any additional details? Should be:

```yaml
service:
  telemetry:
    logs:
      level: DEBUG
```
Find below the full dump from the metrics endpoint. The debug logging didn't reveal anything.
Component(s)
receiver/prometheus
What happened?
Description
We have deployed the Target Allocator to discover PodMonitors/ServiceMonitors instead of the Prometheus Operator. The metrics endpoints are discovered, but some of the metrics are missing. It seems that all grouped metrics of type "counter" are silently dropped, while grouped metrics of type "gauge" work as expected, along with all the other metrics.
Example of dropped metric:
Example from same endpoint that works as expected:
Steps to Reproduce
We are using the cloudnative-pg operator to deploy PostgreSQL. The example metrics are from database clusters deployed by the operator:
https://artifacthub.io/packages/helm/cloudnative-pg/cloudnative-pg/0.21.4
Expected Result
That the metrics would be available, OR some mention in the log that the metrics could not be scraped/processed for some reason.
Actual Result
The metric is missing, with no mention in the log.
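As a rough way to narrow down which counter families might be affected, here is a hedged diagnostic sketch (a hypothetical helper, not part of the collector, and the metric names in the usage example are made up): it scans a Prometheus text exposition for counter families whose samples lack a `_total`-suffixed name. Under OpenMetrics-style normalization, counters are conventionally expected to carry the `_total` suffix, which is one common reason counter families get treated differently from gauges.

```python
# Hypothetical diagnostic, not collector code: flag counter families in a
# Prometheus text exposition whose samples have no "_total"-suffixed name.
def suspect_counters(exposition: str) -> list[str]:
    """Return counter family names with no '_total'-suffixed samples."""
    counter_families = set()
    sample_names = set()
    for raw in exposition.splitlines():
        line = raw.strip()
        if line.startswith("# TYPE "):
            # Metadata line: "# TYPE <name> <type>"
            parts = line.split()
            if len(parts) == 4 and parts[3] == "counter":
                counter_families.add(parts[2])
        elif line and not line.startswith("#"):
            # Sample line: name is everything before '{' or the first space.
            sample_names.add(line.split("{", 1)[0].split()[0])
    return sorted(
        name
        for name in counter_families
        if name + "_total" not in sample_names
        and not name.endswith("_total")
    )


if __name__ == "__main__":
    # Made-up metric names, loosely styled after the cnpg exporter.
    dump = """\
# TYPE cnpg_pg_xact_commit counter
cnpg_pg_xact_commit{datname="app"} 42
# TYPE cnpg_backends gauge
cnpg_backends{datname="app"} 3
# TYPE scrapes_total counter
scrapes_total 7
"""
    print(suspect_counters(dump))  # flags only cnpg_pg_xact_commit
```

Running this over the full dump from the metrics endpoint would show at a glance whether the dropped counters are exactly the ones without the `_total` suffix, which would point at name normalization rather than scraping.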
Collector version
0.103.0
Environment information
Environment
Azure Kubernetes 1.29.5
OpenTelemetry Collector configuration
Log output
No response
Additional context
No response