
OpenTelemetry Collector Prometheus exporter fails with "was collected before with the same name and label values" #33310

Closed
mircea-lemnaru-aera opened this issue May 30, 2024 · 4 comments

Comments


mircea-lemnaru-aera commented May 30, 2024

Component(s)

OpenTelemetry Collector Prometheus exporter (exporter/prometheus)

Describe the issue you're reporting

Hi All,

I have a Java service deployed in Kubernetes with an OpenTelemetry Collector attached as a sidecar to export the application's metrics.
Prometheus is configured to scrape the pod via a ServiceMonitor in the same namespace.
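
For reference, the sidecar collector pipeline is set up roughly along the lines of the sketch below. This is a simplified illustration rather than my exact configuration: the endpoints, ports and processor list are placeholders, and I am assuming resource-to-telemetry conversion is what turns the k8s_*/host_* resource attributes into the labels visible in the error further down.

# Simplified, illustrative sidecar collector config for this kind of setup.
# Endpoints, ports and processors are placeholders, not my exact values.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

processors:
  batch: {}

exporters:
  prometheus:
    endpoint: 0.0.0.0:8889
    # Assumption: with this enabled, resource attributes (k8s_*, host_*,
    # telemetry_*, ...) are converted to Prometheus labels on every series.
    resource_to_telemetry_conversion:
      enabled: true

service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [prometheus]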

Everything seems to work, but when I look at the logs I see the following error:

"metric XXX was collected before with the same name and label values"

2024-05-30T14:42:12.390Z	error	prometheusexporter@v0.96.0/log.go:23	error gathering metrics: collected metric "http_client_duration" { label:{name:"container_id"  value:"9fc9d223e2e87e1ac60e3c02fe8430f865fbb479adb0f8540cbd1aafaf5fd721"}  label:{name:"host_arch"  value:"amd64"}  label:{name:"host_name"  value:"events-collector-57b85cc576-vh7br"}  label:{name:"http_method"  value:"GET"}  label:{name:"http_status_code"  value:"200"}  label:{name:"job"  value:"events-collector"}  label:{name:"k8s_container_name"  value:"events-collector"}  label:{name:"k8s_deployment_name"  value:"events-collector"}  label:{name:"k8s_namespace_name"  value:"platform"}  label:{name:"k8s_node_name"  value:"aks-aerasvc3-35271952-vmss000004"}  label:{name:"k8s_pod_name"  value:"events-collector-57b85cc576-vh7br"}  label:{name:"k8s_replicaset_name"  value:"events-collector-57b85cc576"}  label:{name:"net_peer_name"  value:"config-server-svc.central-services"}  label:{name:"net_protocol_name"  value:"http"}  label:{name:"net_protocol_version"  value:"1.1"}  label:{name:"os_description"  value:"Linux 5.15.0-1051-azure"}  label:{name:"os_type"  value:"linux"}  label:{name:"service_name"  value:"events-collector"}  label:{name:"service_version"  value:"2.6.0-main-b141"}  label:{name:"telemetry_auto_version"  value:"1.32.1"}  label:{name:"telemetry_sdk_language"  value:"java"}  label:{name:"telemetry_sdk_name"  value:"opentelemetry"}  label:{name:"telemetry_sdk_version"  value:"1.34.1"}  histogram:{sample_count:1  sample_sum:117.409334  bucket:{cumulative_count:0  upper_bound:0}  bucket:{cumulative_count:0  upper_bound:5}  bucket:{cumulative_count:0  upper_bound:10}  bucket:{cumulative_count:0  upper_bound:25}  bucket:{cumulative_count:0  upper_bound:50}  bucket:{cumulative_count:0  upper_bound:75}  bucket:{cumulative_count:0  upper_bound:100}  bucket:{cumulative_count:1  upper_bound:250}  bucket:{cumulative_count:1  upper_bound:500}  bucket:{cumulative_count:1  upper_bound:750}  bucket:{cumulative_count:1  upper_bound:1000}  bucket:{cumulative_count:1  upper_bound:2500}  bucket:{cumulative_count:1  upper_bound:5000}  bucket:{cumulative_count:1  upper_bound:7500}  bucket:{cumulative_count:1  upper_bound:10000}}} was collected before with the same name and label values
	{"kind": "exporter", "data_type": "metrics", "name": "prometheus"}
github.com/open-telemetry/opentelemetry-collector-contrib/exporter/prometheusexporter.(*promLogger).Println
	github.com/open-telemetry/opentelemetry-collector-contrib/exporter/prometheusexporter@v0.96.0/log.go:23
github.com/prometheus/client_golang/prometheus/promhttp.HandlerForTransactional.func1
	github.com/prometheus/client_golang@v1.19.0/prometheus/promhttp/http.go:144
net/http.HandlerFunc.ServeHTTP
	net/http/server.go:2136
net/http.(*ServeMux).ServeHTTP
	net/http/server.go:2514
go.opentelemetry.io/collector/config/confighttp.(*decompressor).ServeHTTP
	go.opentelemetry.io/collector/config/confighttp@v0.96.0/compression.go:160
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*middleware).serveHTTP
	go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp@v0.49.0/handler.go:225
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.NewMiddleware.func1.1
	go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp@v0.49.0/handler.go:83
net/http.HandlerFunc.ServeHTTP
	net/http/server.go:2136
go.opentelemetry.io/collector/config/confighttp.(*clientInfoHandler).ServeHTTP
	go.opentelemetry.io/collector/config/confighttp@v0.96.0/clientinfohandler.go:26
net/http.serverHandler.ServeHTTP
	net/http/server.go:2938
net/http.(*conn).serve
	net/http/server.go:2009

I have tried almost everything but nothing seems to fix it. From the error it seems that an identical metric (same name and label values) was already collected, and because of that it fails to collect it again.

Any idea where I should look? Maybe somewhere on the Prometheus side? Or is there a way to instruct Prometheus to ignore this duplicate and just go forward with the scrape?
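
One collector-side workaround would be to simply drop the offending metric with the filter processor before it reaches the Prometheus exporter, but that throws the data away, which is why I am hoping there is a better option. The sketch below is illustrative and untested; the filter syntax may need adjusting for the collector version in use.

# Illustrative and untested: drop the colliding metric before it reaches the
# Prometheus exporter. This discards the data rather than fixing the duplication.
processors:
  filter/drop-colliding:
    error_mode: ignore
    metrics:
      metric:
        - 'name == "http_client_duration"'

service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [filter/drop-colliding, batch]
      exporters: [prometheus]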

Thanks for the help
Mircea

@mircea-lemnaru-aera added the "needs triage" (new item requiring triage) label on May 30, 2024

Pinging code owners for exporter/prometheus: @Aneurysm9. See Adding Labels via Comments if you do not have permissions to add labels yourself.

@crobert-1 (Member)

Related: #24054


This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.


This issue has been closed as inactive because it has been stale for 120 days with no activity.

@github-actions bot closed this as not planned (won't fix, can't repro, duplicate, stale) on Sep 28, 2024