Manual rules/remaps for batching/buffering? #23981
Replies: 2 comments
Hi @target-san, this reminds me of an answer I gave in a previous discussion. Setting the following should be ideal: The `http` sink is meant to batch many events into one request, and has some special handling with the `json` codec and default framing to format events like so: If you want, you could also try batching multiple requests into one by adding another transform (such as `reduce`, `window`, or `throttle`) before the sink.
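The suggestion above could be sketched as a config fragment. Note this is only an illustration: the component names, the upstream input, the endpoint, and the `merge_strategies` field are assumptions, not taken from the thread:

```yaml
transforms:
  batch_events:
    type: reduce               # collapse many events into one
    inputs: ["parse_logs"]     # assumed upstream component name
    max_events: 100            # flush after 100 merged events...
    expire_after_ms: 5000      # ...or after 5 seconds of inactivity
    merge_strategies:
      message: array           # collect `message` values into an array

sinks:
  out:
    type: http
    inputs: ["batch_events"]
    uri: "http://localhost:9000/ingest"  # assumed endpoint
    encoding:
      codec: json
```

The key idea is that `reduce` merges events *before* the sink sees them, so the sink's own batching then operates on already-combined events.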
The latest Vector version added OTLP support: https://vector.dev/highlights/2025-09-23-otlp-support/. However, I would recommend using the nightly version to experiment, and waiting for the next release (the guide will be updated) for an even better out-of-the-box experience in production environments.
Question
Hello!
Vector's default implementation for the opentelemetry/http sinks automatically batches JSON records into a JSON array, which doesn't appear to be valid OTLP. The best I could do was to disable framing completely and limit max events per send to 1. Is it possible to somehow specify how batching should be performed? Or are there examples of doing manual batching via the `reduce` transform? My best guess is to remap events, then reduce them, then remap again into an OTLP payload, then send via a raw HTTP sink.
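The remap → reduce → remap → HTTP pipeline described above might look roughly like this. Everything here is an illustrative assumption (component names, batch limits, the `records` field collected by `reduce`, and the collector endpoint), not a confirmed working config:

```yaml
transforms:
  normalize:
    type: remap
    inputs: ["tracing_source"]      # assumed source name
    file: "rust-logs.vrl"           # reshape tracing JSON logs
  collect:
    type: reduce                    # gather many records into one event
    inputs: ["normalize"]
    max_events: 50
    expire_after_ms: 2000
    merge_strategies:
      records: array                # assumes each event carries a `records` field
  to_otlp:
    type: remap
    inputs: ["collect"]
    file: "otlp-payload.vrl"        # wrap the collected batch as an OTLP payload

sinks:
  otlp_http:
    type: http
    inputs: ["to_otlp"]
    uri: "http://collector:4318/v1/logs"  # assumed OTLP/HTTP logs endpoint
    encoding:
      codec: json
    framing:
      method: bytes                 # send each payload as-is, no array framing
```

With `framing.method: bytes` the sink should emit each pre-built OTLP payload without wrapping multiple events into a JSON array, which is the behavior the question is trying to avoid.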
Thanks
P.S.: The two VRL files mentioned in the remap transforms serve to first transform Rust's `tracing` JSON logs into a more convenient form, and then transform that form into OTLP payloads. Adding them just for completeness, though they don't affect the question directly.
Vector Config
config.yaml:
rust-logs.vrl:
otlp-payload.vrl:
Vector Logs
No response