
Unable to set resource service.name while scraping kubernetes pods #7831

Closed
rtorres33 opened this issue Feb 11, 2022 · 5 comments
Labels
comp:prometheus (Prometheus related issues), enhancement (New feature or request)

Comments

@rtorres33

The Prometheus receiver defaults to using the scrape_configs job_name as the service.name resource attribute on scraped metrics.

Steps to reproduce

  1. Create an OpenTelemetryCollector with the following configuration:
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: otel
spec:
  mode: daemonset
  config: |
    receivers:
      prometheus:
        config:
          scrape_configs:
            - job_name: kubernetes-pods
              scrape_interval: 10s
              kubernetes_sd_configs:
                - role: pod
              relabel_configs:
                - action: keep
                  regex: true
                  source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
                - action: replace
                  regex: (.+)
                  source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scheme]
                  target_label: __scheme__
                - action: replace
                  regex: (.+)
                  source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
                  target_label: __metrics_path__
                - action: replace
                  regex: (.+)
                  source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_port]
                  target_label: __metrics_port__
    exporters:
      logging:
        logLevel: debug
    service:
      pipelines:
        metrics:
          receivers: [prometheus]
          processors: []
          exporters: [logging]
  2. Check the collector's console output and you will see the following metrics:
Resource SchemaURL: 
Resource labels:
     -> service.name: STRING(kubernetes-pods)
     -> host.name: STRING(1.2.3.4)
     -> scheme: STRING(http)
     -> job: STRING(kubernetes-pods)
     -> instance: STRING(1.2.3.4:8080)
     -> port: STRING(8080)
InstrumentationLibraryMetrics #0
InstrumentationLibraryMetrics SchemaURL: 
InstrumentationLibrary  
Metric #0
Descriptor:
     -> Name: jvm_gc_pause_seconds_max
     -> Description: Time spent in GC pause
     -> Unit: 
     -> DataType: Gauge
NumberDataPoints #0
Data point attributes:
     -> action: STRING(end of major GC)
     -> application: STRING(app-1)
     -> cause: STRING(Allocation Failure)
     -> version: STRING(1.2.3)

What did you expect to see?

Resource labels:
     -> service.name: STRING(<DEFINING SERVICE NAME FROM POD LABEL>)
     -> host.name: STRING(1.2.3.4)
     -> scheme: STRING(http)
     -> job: STRING(kubernetes-pods)
     -> instance: STRING(1.2.3.4:8080)
     -> port: STRING(8080)

What version did you use?
Version: 0.41.0

What config did you use?
Config: see above

Environment
OS: Linux (EKS)

rtorres33 added the bug label on Feb 11, 2022
@jpkrohling
Member

cc @Aneurysm9 @dashpole

@dashpole
Contributor

You should be able to accomplish this using relabeling if you want. For example, to use the app.kubernetes.io/name label, you could add:

                - action: replace
                  source_labels: [__meta_kubernetes_pod_label_app_kubernetes_io_name]
                  target_label: job

I believe this is working as intended in this case. Feel free to offer suggestions if you think there is a better way to do this.
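For illustration only (a sketch, not output from an actual run: it assumes a pod carrying the label app.kubernetes.io/name=app-1 and relies on the receiver mapping the job label to service.name, as seen in the original report), the resource emitted by the logging exporter would then look roughly like:

Resource labels:
     -> service.name: STRING(app-1)
     -> host.name: STRING(1.2.3.4)
     -> scheme: STRING(http)
     -> job: STRING(app-1)
     -> instance: STRING(1.2.3.4:8080)
     -> port: STRING(8080)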

dashpole added the comp:prometheus label and removed the bug label on Feb 15, 2022
@rtorres33
Author

@dashpole When I add the lines you recommended, the collector does not output any metrics.

The new configuration looks like:

apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: otel
spec:
  mode: daemonset
  config: |
    receivers:
      prometheus:
        config:
          scrape_configs:
            - job_name: kubernetes-pods
              scrape_interval: 10s
              kubernetes_sd_configs:
                - role: pod
              relabel_configs:
                - action: keep
                  regex: true
                  source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
                - action: replace
                  regex: (.+)
                  source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scheme]
                  target_label: __scheme__
                - action: replace
                  regex: (.+)
                  source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
                  target_label: __metrics_path__
                - action: replace
                  regex: (.+)
                  source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_port]
                  target_label: __metrics_port__
                - action: replace
                  source_labels: [__meta_kubernetes_pod_label_app_kubernetes_io_name]
                  target_label: job
    exporters:
      logging:
        logLevel: debug
    service:
      pipelines:
        metrics:
          receivers: [prometheus]
          processors: []
          exporters: [logging]

However, when I replace job with any other word, such as test, the collector outputs metrics and includes the new label in the output:

                - action: replace
                  source_labels: [__meta_kubernetes_pod_label_app_kubernetes_io_name]
                  target_label: test

It seems like job cannot be replaced.

dashpole added the enhancement label on Feb 15, 2022
@dashpole
Contributor

Ah, right. We need to fix that (#5663). We should definitely ensure it is possible to customize the service name.

@gouthamve
Member

The issue has now been fixed. You can relabel the job label now :)

@dashpole I believe this can be closed as well!
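For reference, the relabel rule suggested earlier should now take effect (a sketch assuming the app.kubernetes.io/name pod label holds the desired service name):

                - action: replace
                  source_labels: [__meta_kubernetes_pod_label_app_kubernetes_io_name]
                  target_label: job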
