Concurrent batch processor features → batch processor #11248
Description
Contributes downstream functionality back to the core component.
Caveats
Despite a year-plus effort to add equivalent and satisfactory batching support in the exporterhelper subcomponent, we still lack support today for back-pressure with batching and reliable error transmission. I believe it is time to say "Yes/And" here. I support the efforts to improve exporterhelper and will contribute to that project myself; however, given how long this has taken, I think repairing the legacy batch processor is also a good idea. The changes here were developed downstream; see https://github.com/open-telemetry/otel-arrow/blob/main/collector/processor/concurrentbatchprocessor/README.md
Link to tracking issue
I'm listing a number of issues that are connected with this, both issues pointing to unmet needs in the exporterhelper batcher and missing features in the legacy batcher. Accepting these changes will allow significantly improved batching support in the interim period until the new batching support is complete.
Fixes #11213 -- we have instrumented the batch processor for tracing, including span links
Part of #10825 -- until this feature is complete, users who depend on metadata_keys in the batch processor will not be able to upgrade.
Part of #10368 -- I see this as the root cause: we have not been able to introduce concurrency in the exporterhelper without also introducing a queue, which interferes with error transmission.
Fixes #8245 -- my original report about the problem solved here: we add concurrency with batching and error transmission, and do not depend on a queue (persistent or in-memory).
Part of #9591 -- users must use one of the available memory limiting mechanisms in conjunction with the batch processor
Part of #8122 -- until this is finished, users depend on the original batch processor
Part of #7460 -- another statement of #8245; the batch processor does not propagate errors today, and this fixes the batch processor's contribution to the problem.
Testing
New tests are included.
Documentation
TODO/WIP