Significant Memory Increase with OpenTelemetry Leading to OOMKilled Issues on Kubernetes #5461
Comments
This ask is rather vague. We do have benchmarks that track allocations, though. Investigating this would require looking into what exactly is using memory within your application.
I think that would be overkill. You can always read the codebase.
We cannot do anything without repro steps or profiling data.
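For anyone who wants to attach that kind of profiling data: a minimal sketch of exposing Go's built-in heap profiler next to the application. This is standard `net/http/pprof`, not an OpenTelemetry API; the port is arbitrary.

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof handlers on http.DefaultServeMux
)

func main() {
	// Expose pprof on localhost only; capture a heap profile with:
	//   go tool pprof http://localhost:6060/debug/pprof/heap
	go func() {
		log.Println(http.ListenAndServe("localhost:6060", nil))
	}()

	// ... application and OpenTelemetry setup would go here ...
	select {}
}
```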
Please provide the example code that you used to generate that graphic. I am not aware of a function in this project called …
Has anyone found a solution for this issue?
Just in case someone runs into this problem: I'm not 100% sure of the exact cause, but the sidecar's memory resource limit is 32 megs and I think it needs to be bumped. I raised it and it worked. It took me a while to figure out HOW to bump the Go autoinstrumentation sidecar: in your Instrumentation manifest, add a go section under the spec object, as sketched below.
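A sketch of what that could look like, based on the opentelemetry-operator's Instrumentation CRD (the language sections such as go accept a resourceRequirements field). The manifest name and the 64Mi value are illustrative; verify the fields against your operator version.

```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: my-instrumentation   # hypothetical name
spec:
  go:
    resourceRequirements:
      limits:
        memory: 64Mi   # raised from the ~32Mi limit mentioned above
      requests:
        memory: 64Mi
```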
Hello,
Our company is currently using the latest version of OpenTelemetry Go, 1.27.0. After instrumenting our services with OpenTelemetry to record metrics, we noticed a significant increase in memory usage in our pods deployed on Kubernetes, leading to OOMKilled events. Could you please point us to any documentation or knowledge about how OpenTelemetry manages memory?
Thank you.
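For context on the kind of setup described: a minimal sketch of recording a counter with the Go SDK's metric API (v1.27.0-era; the meter and counter names are illustrative, and reader/exporter wiring is omitted). One detail relevant to memory: each distinct attribute set on an instrument becomes its own aggregation stream that the SDK keeps in memory, so unbounded attribute values (request IDs, user IDs, and the like) are a common cause of steady memory growth.

```go
package main

import (
	"context"

	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/metric"
	sdkmetric "go.opentelemetry.io/otel/sdk/metric"
)

func main() {
	ctx := context.Background()

	// A PeriodicReader with an OTLP exporter would normally be
	// passed to NewMeterProvider here.
	provider := sdkmetric.NewMeterProvider()
	defer provider.Shutdown(ctx)

	meter := provider.Meter("example/meter")
	counter, err := meter.Int64Counter("requests.total")
	if err != nil {
		panic(err)
	}

	// Bounded attribute values ("GET", "POST", ...) keep the number
	// of streams, and hence memory, bounded.
	counter.Add(ctx, 1, metric.WithAttributes(attribute.String("http.method", "GET")))
}
```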