kafkaexporter: Keying log and metric data #30666
This mostly duplicates #29433 and #31675, but each has slightly different proposals and scope. Adding references so it's easier to track. @bgranetzke This is being implemented for metrics in #31315. If you're able, could you provide input there on the proposed solution for metrics? It would be good to know whether the PR works for you.
@crobert-1 That will work for me if we do the same for logs as well.
**Description:** Add resource-attribute-based partitioning for OTLP metrics. In our backend we really need the ability to distribute metrics based on resource attributes, so I added an additional flag to the configuration. Some of the code from partitioning traces by traceId is reused. Judging by the issues, this feature is anticipated by several more people.

**Link to tracking Issue:** [31675](#31675). This feature was also mentioned in [29433](#29433) and [30666](#30666).

**Testing:** Added tests for the hashing utility. Added tests for marshaling, asserting the correct keys and number of messages. Tested locally with host metrics and a chained OTLP metrics receiver.

**Documentation:** Changelog entry; the flag is added to the kafkaexporter documentation.

---------

Co-authored-by: Curtis Robert <crobert@splunk.com>
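For context, a minimal collector configuration sketch for this feature might look like the following. The flag name `partition_metrics_by_resource_attributes` is taken from the referenced PR as I understand it; treat it as an assumption and confirm the exact name against the kafkaexporter README for your collector version.

```yaml
exporters:
  kafka:
    brokers: ["localhost:9092"]
    topic: otlp_metrics
    # Assumed flag name from the referenced PR; when enabled, each
    # ResourceMetrics is sent as its own message, keyed by a hash of
    # its resource attributes.
    partition_metrics_by_resource_attributes: true
```

With the flag enabled, all metrics from the same resource land on the same partition, which preserves per-resource ordering downstream.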
This issue has been closed as inactive because it has been stale for 120 days with no activity.
Component(s)
exporter/kafka
Is your feature request related to a problem? Please describe.
Prior to #27583, I was able to extend the marshaling logic to add a key to log/metric/trace data. I understand the reason for making the With*Marshaler functions non-exported, but I'm trying to determine the best way to move past being version-locked.
Describe the solution you'd like
I'm looking for a discussion, but my starting point is a solution similar to the `partition_traces_by_id` option. I'd like to add `partition_logs_by_resource` and `partition_metrics_by_resource` configuration options. The logic for both would essentially call `pdatautil.MapHash(pcommon.Resource.Attributes())` and use the result as the partition key. `pmetric.Metrics`/`plog.Logs` would also need to be split into a `sarama.ProducerMessage` for each `ResourceMetrics`/`ResourceLogs`.
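As a rough Go sketch of that idea (not the exporter's actual code): the helper name `partitionLogsByResource` is hypothetical, and the import paths assume the contrib repo's internal `pdatautil` package and the sarama client the exporter already uses.

```go
package kafkaexporter

import (
	"github.com/IBM/sarama"
	"go.opentelemetry.io/collector/pdata/plog"

	"github.com/open-telemetry/opentelemetry-collector-contrib/internal/coreinternal/pdatautil"
)

// partitionLogsByResource (hypothetical) splits a plog.Logs payload into one
// Kafka message per ResourceLogs, keyed by a hash of that resource's attributes.
func partitionLogsByResource(ld plog.Logs, topic string) ([]*sarama.ProducerMessage, error) {
	marshaler := &plog.ProtoMarshaler{}
	msgs := make([]*sarama.ProducerMessage, 0, ld.ResourceLogs().Len())

	for i := 0; i < ld.ResourceLogs().Len(); i++ {
		rl := ld.ResourceLogs().At(i)

		// Copy this ResourceLogs into its own plog.Logs so it can be
		// marshaled and sent as an independent message.
		single := plog.NewLogs()
		rl.CopyTo(single.ResourceLogs().AppendEmpty())

		payload, err := marshaler.MarshalLogs(single)
		if err != nil {
			return nil, err
		}

		// Key on the hash of the resource attributes so all data from the
		// same resource lands on the same partition.
		key := pdatautil.MapHash(rl.Resource().Attributes())
		msgs = append(msgs, &sarama.ProducerMessage{
			Topic: topic,
			Key:   sarama.ByteEncoder(key[:]),
			Value: sarama.ByteEncoder(payload),
		})
	}
	return msgs, nil
}
```

The metrics path would be analogous, iterating over `ResourceMetrics` and marshaling with `pmetric.ProtoMarshaler`.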
Describe alternatives you've considered
I also considered an external Kafka Streams re-keying process that would consume the unkeyed topic and re-send to a keyed topic. I think this works for most of the similar requests for custom marshaler support, but message keying is different from a custom payload format. It is a less favored solution because the further we get from ingestion, the less we can trust ordering.
Another possible option is to fabricate a resource attribute on the agent side and then have a configuration on the kafka exporter for which attribute to use for the message key. The exporter will still need to break up the payload into multiple ProducerMessages on Resource boundaries. This is my least favorite option. This is similar to #29433, but I'm not a fan of the proposed solution there because I personally would not be able to pick a single, "naturally-occurring" attribute that would be unique enough across all my telemetry producers.
Additional context
No response