
[Prometheus Remote Write Exporter] Forward metrics labels #10115

Closed
clouedoc opened this issue May 17, 2022 · 10 comments · Fixed by #11860
Labels
comp:prometheus (Prometheus related issues), priority:p3 (Lowest)

Comments

@clouedoc
Contributor

Is your feature request related to a problem? Please describe.
I want to be able to cluster my metrics by host in Prometheus.
On Prometheus' side, I do not get a host label to select:

[screenshot: Prometheus label selector showing the available labels]

I only get a job label.

Previously, by using Datadog and the otlp exporter, I could aggregate metrics by host name, deployment version, etc.
I believe the OTLP resource attributes are not being forwarded by the Prometheus Remote Write exporter.

Describe the solution you'd like
I want the following labels to be forwarded to Prometheus:

  • service.name
  • deployment.environment
  • service.version

These will allow me to see if a specific version of my program is using more memory, and when.
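
For context, my services already send these as OTLP resource attributes. If they ever needed to be set or overridden collector-side, something like the resource processor could do it; the sketch below uses placeholder values and is not part of my current setup:

processors:
  resource:
    attributes:
      # placeholder values; my applications normally set these via the SDK
      - key: deployment.environment
        value: production
        action: upsert
      - key: service.version
        value: 1.2.3
        action: upsert

The attributes are already present either way; the ask is that they end up as Prometheus labels instead of being dropped.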

Describe alternatives you've considered

  • using an official Prometheus Go client

Additional context

Here is my configuration:

receivers:
  otlp:
    protocols:
      http:
        endpoint: 0.0.0.0:8080
      grpc:
        endpoint: 0.0.0.0:4040

exporters:
  logging:

  # Data sources: traces, metrics
  # On-premise endpoint
  otlphttp:
    endpoint: XXX

  otlp/grafana-cloud-tempo:
    endpoint: tempo-us-central1.grafana.net:443
    headers:
      authorization: XXX

  prometheusremotewrite/grafana-cloud:
    endpoint: https://prometheus-prod-10-prod-us-central-0.grafana.net/api/prom/push
    headers:
      authorization: XXX

processors:
  batch:
    timeout: 10s

extensions:
  health_check:
  pprof:
    endpoint: :1888
  zpages:
    endpoint: :55679

service:
  extensions: [pprof, zpages, health_check]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [logging, otlphttp, otlp/grafana-cloud-tempo]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [logging, otlphttp, prometheusremotewrite/grafana-cloud]
@dmitryax
Member

cc @Aneurysm9 as code owner

@dmitryax dmitryax added the priority:p3 Lowest label May 20, 2022
@clouedoc
Contributor Author

I managed to solve my specific issue by using a prometheus exporter and starting a sibling Prometheus instance that scrapes the opentelemetry-collector and remote-writes to Grafana Cloud.
Note: I had to enable an option in the prometheus exporter to convert resource attributes into Prometheus labels.
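
For reference, a rough sketch of that setup (the option I mean is resource_to_telemetry_conversion on the prometheus exporter; ports and hostnames are placeholders):

exporters:
  prometheus:
    endpoint: 0.0.0.0:8889
    resource_to_telemetry_conversion:
      enabled: true
# the metrics pipeline then uses this exporter instead of the remote write one

and on the sibling Prometheus side, roughly:

scrape_configs:
  - job_name: otel-collector
    static_configs:
      - targets: ["otel-collector:8889"]

remote_write:
  - url: https://prometheus-prod-10-prod-us-central-0.grafana.net/api/prom/push
    # plus whatever authentication Grafana Cloud requires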

@dmitryax
Member

@clouedoc is this still an issue with the Prometheus Remote Write exporter, or can we close it?

@clouedoc
Contributor Author

@dmitryax This is still an issue with the Prometheus Remote Write exporter that should be addressed, at least in the docs.
It would be interesting to have @Aneurysm9's take on it.
You can close it if it clutters the tracker; I consider my job done as long as future explorers can find this thread 😁

@dashpole dashpole added the comp:prometheus Prometheus related issues label May 25, 2022
@gouthamve
Member

Hi, this is now done using the target_info metric: https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/metrics/data-model.md#resource-attributes-1

Please let us know if target_info is not available or doesn't work for your use case.
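
For example, the non-identifying resource attributes become labels on target_info (with dots turned into underscores), and you can join them onto a metric in PromQL. A sketch written as a recording rule; the metric and label names here are only illustrative and depend on what your resources actually carry:

groups:
  - name: otel_resource_joins
    rules:
      # target_info has a constant value of 1, so multiplying by it keeps the
      # metric's value and copies the requested labels onto each matching series
      - record: system_cpu_utilization:by_version
        expr: |
          system_cpu_utilization
            * on (job, instance) group_left (service_version, deployment_environment)
          target_info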

@jack78901

Could this be a case of an undocumented feature, where the Prometheus Remote Write exporter (through the exporter helper) supports the same thing as the Prometheus exporter?

I recently ran into the same issue: I needed certain labels that were present as resource attributes but were not being added to the actual metrics as data point attributes (read: Prometheus labels).

I was able to apply the same option @clouedoc used for the Prometheus exporter directly on the Prometheus Remote Write exporter. Namely:

    exporters:
      prometheusremotewrite:
        endpoint: "https://example.com"
        resource_to_telemetry_conversion:
          enabled: true

The problem with the target_info metric is that it does not actually associate with metrics (such as system_cpu_utilization from the hostmetrics receiver) in any way, which makes it impossible to see how the CPU is doing for a particular host.

@clouedoc
Contributor Author

@jack78901 really interesting finding; this removes the need for an intermediate Prometheus server altogether. Thank you for reporting it.

@gouthamve I have to admit that I have a hard time understanding what target_info is on a first read. I built my alerting system on the configuration I mentioned in my earlier comments, so I do not want to break it unnecessarily, but I'm interested in how you would approach this problem with target_info. My use case also involves collecting CPU metrics, so I'm not sure whether it would solve that. Thank you for bringing this property to my attention.

@dmitryax
Member

dmitryax commented Jul 1, 2022

@clouedoc @gouthamve do we still need any doc updates to highlight this config option, or can the ticket be closed?

clouedoc added a commit to clouedoc/opentelemetry-collector-contrib that referenced this issue Jul 1, 2022
@clouedoc
Contributor Author

clouedoc commented Jul 1, 2022

Hello @dmitryax, I proposed a doc update via #11860.

@dmitryax
Member

dmitryax commented Jul 1, 2022

Thanks @clouedoc
