[exporterhelper] Default queue size is too large #7359
Comments
From the SIG meeting at 26.04.2023:
Fixed by #7592
I believe it's not resolved yet. 1000 was the first step, but we want to reduce it further.
As per the SIG discussion on 21.06.2023, we'll leave this value alone, as it's likely to be replaced by a different key as we move batching to exporterhelper.
Is your feature request related to a problem? Please describe.
Exporterhelper has a default queue size of 5000, and with the batch processor in front of it each queued request is a batch. If we use the batch processor with default settings as well, that results in a maximum of 5000 × 8192 ≈ 41,000,000 spans. At a conservative 500 bytes per span, we'd consume ~20 GB of memory and most likely be killed by either the orchestrator or the OS itself.
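For illustration, here is a minimal config sketch with the two defaults that multiply together written out explicitly; the endpoint is a placeholder, and exact defaults may vary by collector version:

```yaml
processors:
  batch:
    # Default batch-size trigger: batches are cut at roughly 8192 spans.
    send_batch_size: 8192

exporters:
  otlp:
    endpoint: example-backend:4317  # placeholder endpoint, not from the issue
    sending_queue:
      enabled: true     # on by default
      queue_size: 5000  # default at the time of this issue: up to 5000 queued batches
```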
This is way too much in my opinion. The default size of this queue should provide enough buffer space to absorb temporary increases in data volume and to keep the 10 default consumer threads fed. It shouldn't try to cover for prolonged remote unavailability; if the user wants that, they should probably use the persistent queue instead.
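A minimal sketch of that alternative, assuming a collector build that includes the file_storage extension; the directory and endpoint are placeholders:

```yaml
extensions:
  file_storage:
    directory: /var/lib/otelcol/queue  # placeholder path

exporters:
  otlp:
    endpoint: example-backend:4317  # placeholder endpoint
    sending_queue:
      storage: file_storage  # buffer the queue on disk instead of in memory

service:
  extensions: [file_storage]
```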
Describe the solution you'd like
I'd like the value to be reduced to the low 100s; both 100 and 200 seem fine given the above napkin math.
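Independently of what the default ends up being, users can already cap the queue explicitly; a sketch using a value from the proposed range (the endpoint is again a placeholder):

```yaml
exporters:
  otlp:
    endpoint: example-backend:4317  # placeholder endpoint
    sending_queue:
      queue_size: 200  # explicit override; upper end of the range proposed above
```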
Describe alternatives you've considered
If the default stays as is, I think we should include a warning about this in the documentation.
Additional context
This is especially dangerous given that the sending queue is enabled by default for many exporters, including otlpexporter.
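For users who want to rule out this memory risk entirely until the default changes, the queue can be turned off per exporter; a sketch:

```yaml
exporters:
  otlp:
    endpoint: example-backend:4317  # placeholder endpoint
    sending_queue:
      enabled: false  # no in-memory queue; exports are attempted synchronously
```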