[telemetrySettting] Create sampled Logger #8134
Conversation
Hi @bogdandrutu, could you take a look at the PR please?
This PR was marked stale due to lack of activity. It will be closed in 14 days.
Thank you, @antonjim-te. It looks good to me, just minor comments.
I'd like to have another look from any of @open-telemetry/collector-approvers before merging.
The original logger is now sampled by default after open-telemetry/opentelemetry-collector#8134
…7447) The original logger is sampled by default now after open-telemetry/opentelemetry-collector#8134
…lemetry#27448) The original logger is now sampled by default after open-telemetry/opentelemetry-collector#8134
Issue:
In one of the use cases we are working on, we identified that the sampledLogger mechanism used in the OpenTelemetry exporters (otlpexporter and otlphttpexporter) could become a memory bottleneck when the number of exporters in the same pipeline is high (we are talking about hundreds of exporter instances running on the same machine). The sampledLogger is created inside the exporterhelper module, and sampling is always active unless the collector is in debug mode.
Investigation:
The high memory usage was discovered using https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/extension/pprofextension
In the first scenario (the default configuration, with the sampledLogger active), the profiled data showed that the sampling mechanism was responsible for allocating approx. 75% of the memory of the running collector. In the second scenario, with the same number of running exporters but with the sampledLogger mechanism disabled (using the debug level, see the configuration below), we observed a much smaller memory footprint.
Given that the logging sampling mechanism can become a bottleneck in scenarios like ours, we would like to propose a contribution that can reduce the memory footprint of the exporters in such cases, potentially making them more scalable.
Proposal:
Create an extra sampled logger in the telemetry configuration, used to avoid flooding the logs with messages that repeat frequently.
The sampled logger is configured by LogsSamplingConfig.
Default configuration for the sampled logger:
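A sketch of what a LogsSamplingConfig block could look like in the service telemetry section. The field names mirror zap's sampler parameters (tick, initial, thereafter); the exact names and default values here are assumptions for illustration, not the merged defaults.

```yaml
service:
  telemetry:
    logs:
      sampling:
        # assumed defaults, shown for illustration:
        # emit the first `initial` entries per message each `tick`,
        # then every `thereafter`-th entry
        tick: 10s
        initial: 10
        thereafter: 100
```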