[exporter/prometheus] does not show metrics from otlp receiver #32552

Open
sterziev88 opened this issue Apr 19, 2024 · 6 comments
Assignees
dashpole
Labels
exporter/prometheus, question (Further information is requested)

Comments

@sterziev88

Component(s)

No response

What happened?

Description

I use the OTLP receiver to collect metrics from my applications, and I want to use the Prometheus exporter so that I can see them in Prometheus, but the metrics never show up.

Steps to Reproduce

Deploy the collector with Helm chart 0.85.0 and the configuration below.

Expected Result

To be able to see metrics in Prometheus

Actual Result

I am not able to see metrics such as system_cpu_usage, system_load_average, etc.

Collector version

0.85.0

Environment information

Environment

OS: (e.g., "Ubuntu 20.04")
Compiler(if manually compiled): (e.g., "go 14.2")

OpenTelemetry Collector configuration

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: x.x.x.x:4317
      http:
        endpoint: x.x.x.x:4318
exporters:
  logging:
    verbosity: detailed
  prometheus:
    endpoint: "0.0.0.0:8889"
    send_timestamps: true
    resource_to_telemetry_conversion:
      enabled: true

service:
  metrics:
    exporters:
      - prometheus
      - logging
Log output

2024-04-19T06:36:44.203Z info MetricsExporter {"kind": "exporter", "data_type": "metrics", "name": "logging", "resource metrics": 1, "metrics": 7, "data points": 11}
2024-04-19T06:36:44.203Z info ResourceMetrics #0
Resource SchemaURL: https://opentelemetry.io/schemas/1.23.1
Resource attributes:
-> container.id: Str(e9673229043eab10ccc347a7bc6de2741b78162235dd864efc8f20e1934283cd)
-> host.arch: Str(amd64)
-> host.name: Str(scheduled-services-deployment-5675c8ccc4-qcmql)
-> os.description: Str(Linux 5.10.210-201.852.amzn2.x86_64)
-> os.type: Str(linux)
-> process.command_args: Slice(["/layers/paketo-buildpacks_bellsoft-liberica/jre/bin/java","org.springframework.boot.loader.launch.JarLauncher"])
-> process.executable.path: Str(/layers/paketo-buildpacks_bellsoft-liberica/jre/bin/java)
-> process.pid: Int(1)
-> process.runtime.description: Str(BellSoft OpenJDK 64-Bit Server VM 17.0.7+7-LTS)
-> process.runtime.name: Str(OpenJDK Runtime Environment)
-> process.runtime.version: Str(17.0.7+7-LTS)
-> service.name: Str(billing-scheduler)
-> telemetry.distro.name: Str(opentelemetry-spring-boot-starter)
-> telemetry.distro.version: Str(2.2.0-alpha)
-> telemetry.sdk.language: Str(java)
-> telemetry.sdk.name: Str(opentelemetry)
-> telemetry.sdk.version: Str(1.36.0)
ScopeMetrics #0
ScopeMetrics SchemaURL:
InstrumentationScope io.opentelemetry.sdk.logs
Metric #0
Descriptor:
-> Name: processedLogs
-> Description: The number of logs processed by the BatchLogRecordProcessor. [dropped=true if they were dropped due to high throughput]
-> Unit: 1
-> DataType: Sum
-> IsMonotonic: true
-> AggregationTemporality: Cumulative
NumberDataPoints #0
Data point attributes:
-> dropped: Bool(false)
-> processorType: Str(BatchLogRecordProcessor)
StartTimestamp: 2024-04-16 13:37:44.07905719 +0000 UTC
Timestamp: 2024-04-19 06:36:44.083167789 +0000 UTC
Value: 44
Metric #1
Descriptor:
-> Name: queueSize
-> Description: The number of items queued
-> Unit: 1
-> DataType: Gauge
NumberDataPoints #0
Data point attributes:
-> processorType: Str(BatchLogRecordProcessor)
StartTimestamp: 2024-04-16 13:37:44.07905719 +0000 UTC
Timestamp: 2024-04-19 06:36:44.083167789 +0000 UTC
Value: 0
ScopeMetrics #1
ScopeMetrics SchemaURL:
InstrumentationScope io.opentelemetry.exporters.otlp-grpc
Metric #0
Descriptor:
-> Name: otlp.exporter.exported
-> Description:
-> Unit:
-> DataType: Sum
-> IsMonotonic: true
-> AggregationTemporality: Cumulative
NumberDataPoints #0
Data point attributes:
-> success: Bool(false)
-> type: Str(log)
StartTimestamp: 2024-04-16 13:37:44.07905719 +0000 UTC
Timestamp: 2024-04-19 06:36:44.083167789 +0000 UTC
Value: 11
NumberDataPoints #1
Data point attributes:
-> success: Bool(false)
-> type: Str(span)
StartTimestamp: 2024-04-16 13:37:44.07905719 +0000 UTC
Timestamp: 2024-04-19 06:36:44.083167789 +0000 UTC
Value: 40
NumberDataPoints #2
Data point attributes:
-> success: Bool(true)
-> type: Str(log)
StartTimestamp: 2024-04-16 13:37:44.07905719 +0000 UTC
Timestamp: 2024-04-19 06:36:44.083167789 +0000 UTC
Value: 44
NumberDataPoints #3
Data point attributes:
-> success: Bool(true)
-> type: Str(span)
StartTimestamp: 2024-04-16 13:37:44.07905719 +0000 UTC
Timestamp: 2024-04-19 06:36:44.083167789 +0000 UTC
Value: 395003
Metric #1
Descriptor:
-> Name: otlp.exporter.seen
-> Description:
-> Unit:
-> DataType: Sum
-> IsMonotonic: true
-> AggregationTemporality: Cumulative
NumberDataPoints #0
Data point attributes:
-> type: Str(log)
StartTimestamp: 2024-04-16 13:37:44.07905719 +0000 UTC
Timestamp: 2024-04-19 06:36:44.083167789 +0000 UTC
Value: 55
NumberDataPoints #1
Data point attributes:
-> type: Str(span)
StartTimestamp: 2024-04-16 13:37:44.07905719 +0000 UTC
Timestamp: 2024-04-19 06:36:44.083167789 +0000 UTC
Value: 395043
ScopeMetrics #2
ScopeMetrics SchemaURL:
InstrumentationScope io.opentelemetry.sdk.trace
Metric #0
Descriptor:
-> Name: processedSpans
-> Description: The number of spans processed by the BatchSpanProcessor. [dropped=true if they were dropped due to high throughput]
-> Unit: 1
-> DataType: Sum
-> IsMonotonic: true
-> AggregationTemporality: Cumulative
NumberDataPoints #0
Data point attributes:
-> dropped: Bool(false)
-> processorType: Str(BatchSpanProcessor)
StartTimestamp: 2024-04-16 13:37:44.07905719 +0000 UTC
Timestamp: 2024-04-19 06:36:44.083167789 +0000 UTC
Value: 395003
Metric #1
Descriptor:
-> Name: queueSize
-> Description: The number of items queued
-> Unit: 1
-> DataType: Gauge
NumberDataPoints #0
Data point attributes:
-> processorType: Str(BatchSpanProcessor)
StartTimestamp: 2024-04-16 13:37:44.07905719 +0000 UTC
Timestamp: 2024-04-19 06:36:44.083167789 +0000 UTC
Value: 0
ScopeMetrics #3
ScopeMetrics SchemaURL:
InstrumentationScope io.opentelemetry.spring-webmvc-6.0 2.2.0-alpha
Metric #0
Descriptor:
-> Name: http.server.request.duration
-> Description: Duration of HTTP server requests.
-> Unit: s
-> DataType: Histogram
-> AggregationTemporality: Cumulative
HistogramDataPoints #0
Data point attributes:
-> http.request.method: Str(GET)
-> http.response.status_code: Int(302)
-> network.protocol.version: Str(1.1)
-> url.scheme: Str(http)
StartTimestamp: 2024-04-16 13:37:44.07905719 +0000 UTC
Timestamp: 2024-04-19 06:36:44.083167789 +0000 UTC
Count: 12309
Sum: 6.423931
Min: 0.000430
Max: 0.005979
ExplicitBounds #0: 0.005000
ExplicitBounds #1: 0.010000
ExplicitBounds #2: 0.025000
ExplicitBounds #3: 0.050000
ExplicitBounds #4: 0.075000
ExplicitBounds #5: 0.100000
ExplicitBounds #6: 0.250000
ExplicitBounds #7: 0.500000
ExplicitBounds #8: 0.750000
ExplicitBounds #9: 1.000000
ExplicitBounds #10: 2.500000
ExplicitBounds #11: 5.000000
ExplicitBounds #12: 7.500000
ExplicitBounds #13: 10.000000
Buckets #0, Count: 12308
Buckets #1, Count: 1
Buckets #2, Count: 0
Buckets #3, Count: 0
Buckets #4, Count: 0
Buckets #5, Count: 0
Buckets #6, Count: 0
Buckets #7, Count: 0
Buckets #8, Count: 0
Buckets #9, Count: 0
Buckets #10, Count: 0
Buckets #11, Count: 0
Buckets #12, Count: 0
Buckets #13, Count: 0
Buckets #14, Count: 0

Additional context

That is my config in Prometheus:
- job_name: 'sample-job-2'
  scrape_interval: 10s
  static_configs:
    - targets: ['opentelemetry-dev-in-cluster.opentelemetry.svc.cluster.local:8889']

I can see that the target is successfully added in Prometheus, and according to the logs the OpenTelemetry Collector successfully receives metrics from my application via the OTLP protocol, but I don't see the metrics in my Prometheus.

sterziev88 added the bug (Something isn't working) and needs triage (New item requiring triage) labels on Apr 19, 2024
Contributor

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

@odev-swe

@sterziev88 same situation here

@slashrsm

slashrsm commented Jul 3, 2024

I am experiencing the same problem. Was anyone able to figure this out?

Contributor

github-actions bot commented Sep 2, 2024

This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

@dashpole
Contributor

I would recommend using scrape metrics to debug this, such as up and scrape_series_added. They should help you tell whether or not Prometheus was able to scrape the endpoint, and how many series it added.
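
For example, you can check those series through the Prometheus HTTP API (a sketch only; the Prometheus host and port are placeholders, and the sample-job-2 job name is taken from the scrape config quoted above):

# Did Prometheus manage to scrape the collector target? (1 = up, 0 = down)
curl -s 'http://<prometheus-host>:9090/api/v1/query' --data-urlencode 'query=up{job="sample-job-2"}'

# How many new series did the last scrape of that target add?
curl -s 'http://<prometheus-host>:9090/api/v1/query' --data-urlencode 'query=scrape_series_added{job="sample-job-2"}'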

You can also curl the collector yourself if you kubectl port-forward the service you are querying. Check the text output of the endpoint to make sure the metrics you expect are there.
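
A minimal sketch of that check, assuming the collector Service name and namespace from the Prometheus scrape config in this issue and the 8889 port from the prometheus exporter config (adjust the names for your cluster):

kubectl port-forward -n opentelemetry svc/opentelemetry-dev-in-cluster 8889:8889
# In a second terminal, fetch the exporter's text output and look for the metric names you expect.
curl -s http://localhost:8889/metrics | grep -i system_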

@dashpole
Contributor

dashpole commented Oct 9, 2024

Sorry, it looks like the OTLP receiver is defined but not included in the metrics pipeline of your collector configuration. Do something like:

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: x.x.x.x:4317
      http:
        endpoint: x.x.x.x:4318
exporters:
  logging:
    verbosity: detailed
  prometheus:
    endpoint: "0.0.0.0:8889"
    send_timestamps: true
    resource_to_telemetry_conversion:
      enabled: true

service:
  pipelines:
    metrics:
      receivers:
        - otlp
      exporters:
        - prometheus
        - logging

dashpole removed the needs triage (New item requiring triage) label on Oct 9, 2024
dashpole self-assigned this on Oct 9, 2024
dashpole added the question (Further information is requested) label and removed the bug (Something isn't working) label on Oct 9, 2024