OTLP handler inconsistent behavior with Prometheus OTLP handler #6236

Closed
@yeya24

Description

Describe the bug
Not exactly a bug, but a discrepancy between Cortex's OTLP handler and Prometheus' OTLP handler: they do not have the same behavior today.

Prometheus' OTLP handler has the following configuration enabled: https://github.com/prometheus/prometheus/blob/main/storage/remote/write_handler.go#L515

It does not convert resource attributes to metric labels automatically; instead it uses PromoteResourceAttributes to specify which resource attributes to promote to labels. Meanwhile, it enables the target_info metric, which carries the resource attribute information.

	annots, err := converter.FromMetrics(r.Context(), req.Metrics(), otlptranslator.Settings{
		AddMetricSuffixes:         true,
		PromoteResourceAttributes: otlpCfg.PromoteResourceAttributes,
	})

Cortex does the opposite: it disables the target_info metric and automatically converts all resource attributes to metric labels (https://github.com/cortexproject/cortex/blob/master/pkg/util/push/otlp.go#L41). This can blow up cardinality and cause issues for users who migrate from Prometheus to Cortex.

		setting := prometheusremotewrite.Settings{
			AddMetricSuffixes: true,
			DisableTargetInfo: true,
		}
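To make the difference concrete, here is a minimal, self-contained sketch of the two label-building strategies (the helper names are illustrative, not the actual converter code):

```go
package main

import "fmt"

// promoteAllowListed mirrors Prometheus' PromoteResourceAttributes behavior:
// only resource attributes named in the allow-list become metric labels,
// and existing metric labels take precedence on collision.
func promoteAllowListed(labels, attrs map[string]string, allow []string) map[string]string {
	out := make(map[string]string, len(labels)+len(allow))
	for k, v := range labels {
		out[k] = v
	}
	for _, name := range allow {
		if v, ok := attrs[name]; ok {
			if _, taken := out[name]; !taken {
				out[name] = v
			}
		}
	}
	return out
}

// promoteAll mirrors the current Cortex behavior: every resource attribute
// becomes a metric label, so high-cardinality attributes such as pod UIDs
// land on every series.
func promoteAll(labels, attrs map[string]string) map[string]string {
	out := make(map[string]string, len(labels)+len(attrs))
	for k, v := range attrs {
		out[k] = v
	}
	for k, v := range labels {
		out[k] = v
	}
	return out
}

func main() {
	labels := map[string]string{"__name__": "http_requests_total", "method": "GET"}
	attrs := map[string]string{
		"service.name": "checkout",
		"k8s.pod.uid":  "9c3a1f",
	}
	fmt.Println(len(promoteAllowListed(labels, attrs, []string{"service.name"}))) // 3 labels
	fmt.Println(len(promoteAll(labels, attrs)))                                   // 4 labels
}
```

With allow-list promotion, `k8s.pod.uid` never reaches the series; in the promote-all case, every unique pod UID creates a new series.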

Proposal

  • Make the default behavior match Prometheus': introduce PromoteResourceAttributes (Feature Request: OTel resource attribute promotion #6110) and enable the target_info metric.
  • Add an option to retain the existing behavior of always converting all resource attributes.
  • Whether to enable the target_info metric could be a separate feature flag.
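The bullets above could be wired up roughly as follows; the option and field names here are hypothetical illustrations, not final Cortex configuration:

```go
package main

import "fmt"

// OTLPConfig models the proposed per-tenant options. All field names are
// hypothetical sketches for this issue, not actual Cortex flags.
type OTLPConfig struct {
	ConvertAllAttributes      bool     // opt-in: retain the current promote-everything behavior
	PromoteResourceAttributes []string // allow-list, matching Prometheus' setting
	EnableTargetInfo          bool     // separate flag controlling the target_info metric
}

// Settings mirrors the shape of prometheusremotewrite.Settings for the
// fields discussed in this issue.
type Settings struct {
	AddMetricSuffixes         bool
	DisableTargetInfo         bool
	PromoteResourceAttributes []string
}

// settingsFor maps the proposed config onto converter settings. By default
// it behaves like Prometheus: promote only the allow-list and emit
// target_info unless the tenant turns it off.
func settingsFor(cfg OTLPConfig) Settings {
	s := Settings{
		AddMetricSuffixes: true,
		DisableTargetInfo: !cfg.EnableTargetInfo,
	}
	if !cfg.ConvertAllAttributes {
		s.PromoteResourceAttributes = cfg.PromoteResourceAttributes
	}
	return s
}

func main() {
	s := settingsFor(OTLPConfig{
		EnableTargetInfo:          true,
		PromoteResourceAttributes: []string{"service.name"},
	})
	fmt.Println(s.DisableTargetInfo, s.PromoteResourceAttributes)
}
```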
