otelhttp grpc ResourceExhausted - resources not being freed #3536
Looks like the current message size limit is 4194304 bytes (4 MiB). Is this a property you can set larger, say to 10 MiB (10485760)?
IIRC that limit is on the receiving side. For the OTel Collector, that would be configured on the receiver.
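For reference, a Collector-side change would look roughly like the sketch below. It assumes a reasonably recent Collector whose OTLP gRPC receiver exposes the `max_recv_msg_size_mib` server setting; the 16 MiB value is arbitrary and only illustrates raising the ~4 MiB default seen in the error above.

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        # Raise the maximum accepted gRPC message size (value is in MiB).
        max_recv_msg_size_mib: 16
```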
I don't think upping that limit would solve the root issue of the message continuing to grow on error and not releasing its resources.
The message keeps growing because of the failures. It can't release resources, as that data failed to be sent.
@dmathieu so this is expected behavior? If so, I would have expected it to have a limit on the number of retries so it doesn't just keep growing indefinitely. Do you have any recommendations on next steps? I would like to prevent this situation from occurring, as we currently cannot use it in production.
I'm not sure that the message size is growing because of export failures. Each export should be independent. Errors will be reported, and the next time the reader collects and exports metrics it starts collecting into a new batch.

Are you able to increase the limit on the collector side so that the metrics can be received correctly there and inspect what is being sent, or use the STDOUT exporter? This sounds like there may be a cardinality issue with some of the metrics collected.
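For anyone who wants to try the STDOUT suggestion, here is a minimal sketch (not code from this thread) that swaps the OTLP exporter for the `stdoutmetric` exporter so the exported data points and their attributes can be inspected. The pretty-printed JSON encoder and the 15-second interval are arbitrary choices, and it assumes a recent otel-go metric SDK.

```go
package main

import (
	"context"
	"encoding/json"
	"log"
	"os"
	"time"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/stdout/stdoutmetric"
	sdkmetric "go.opentelemetry.io/otel/sdk/metric"
)

// newDebugMeterProvider builds a MeterProvider that writes pretty-printed
// JSON to stdout every 15s instead of exporting over OTLP/gRPC, so
// unexpected high-cardinality attributes are easy to spot.
func newDebugMeterProvider() (*sdkmetric.MeterProvider, error) {
	enc := json.NewEncoder(os.Stdout)
	enc.SetIndent("", "  ")

	exp, err := stdoutmetric.New(stdoutmetric.WithEncoder(enc))
	if err != nil {
		return nil, err
	}

	mp := sdkmetric.NewMeterProvider(
		sdkmetric.WithReader(
			sdkmetric.NewPeriodicReader(exp, sdkmetric.WithInterval(15*time.Second)),
		),
	)
	otel.SetMeterProvider(mp)
	return mp, nil
}

func main() {
	mp, err := newDebugMeterProvider()
	if err != nil {
		log.Fatal(err)
	}
	defer mp.Shutdown(context.Background())

	// ... set up otelhttp and the HTTP server as usual ...
}
```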
Sorry, but probably not anytime soon. Since we've only been able to reproduce this in production, we've switched back to the Prometheus client for metric collection and turned off the OTel metrics export for now.

I can give some additional context in the meantime if it helps, though. These are the versions we were on when it worked:
The issue started occurring after one of the upgrades below; as mentioned before, we haven't spent the time to pin down which version caused it. Upgraded to:
And then about a week or so later:
I have run into the exact same issue as @KasonBraley. Previous versions of OTel worked, but v0.34.0 does not.
I did some digging into this and found the source of the growth. I created a custom exporter, logged out the metrics in our production environment, and noticed that for each RPC a new attribute value was recorded for the peer port.
I put ***** around things that are specific to my company. Maybe this is an issue specific to AWS/cloud environments, since the port seems to be different on each call? I'm willing to make the change necessary if we can come to some agreement on how to proceed. Can we simply make the metric customizable, with some sort of optional attribute filter? @Aneurysm9 please advise. Thanks in advance!
As a workaround, I made a custom exporter that drops that attribute.
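For what it's worth, this kind of attribute can also be dropped without a custom exporter by registering a view with an attribute filter on the MeterProvider. The sketch below is only an illustration: the key `net.sock.peer.port` is an assumption about which attribute carries the per-call port, and the wildcard instrument name applies the filter to every instrument.

```go
package telemetry

import (
	"go.opentelemetry.io/otel/attribute"
	sdkmetric "go.opentelemetry.io/otel/sdk/metric"
)

// newMeterProvider drops the assumed per-connection port attribute
// ("net.sock.peer.port" here; verify against your own exporter output)
// from every instrument so each request no longer creates a new series.
func newMeterProvider(reader sdkmetric.Reader) *sdkmetric.MeterProvider {
	dropPeerPort := sdkmetric.NewView(
		sdkmetric.Instrument{Name: "*"}, // match all instruments
		sdkmetric.Stream{
			// Return true to keep an attribute, false to drop it.
			AttributeFilter: func(kv attribute.KeyValue) bool {
				return kv.Key != "net.sock.peer.port"
			},
		},
	)
	return sdkmetric.NewMeterProvider(
		sdkmetric.WithReader(reader),
		sdkmetric.WithView(dropPeerPort),
	)
}
```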
When wrapping an HTTP server with the otelhttp handler, these errors occur and do not stop. The message size continues to grow and is never freed:
Full stacktrace (seems to be related to metrics):
How the otelhttp handler is being set up:

Metric setup:
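The original setup snippets did not survive the copy here, so below is a guess at what a typical otelhttp plus OTLP/gRPC metric setup looks like, for readers landing on this issue. The endpoint, interval, and route are placeholders, it assumes a recent otel-go SDK, and it is not the reporter's actual code.

```go
package main

import (
	"context"
	"log"
	"net/http"
	"time"

	"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc"
	sdkmetric "go.opentelemetry.io/otel/sdk/metric"
)

func main() {
	ctx := context.Background()

	// OTLP/gRPC metric exporter pointed at a local collector (placeholder endpoint).
	exp, err := otlpmetricgrpc.New(ctx,
		otlpmetricgrpc.WithEndpoint("localhost:4317"),
		otlpmetricgrpc.WithInsecure(),
	)
	if err != nil {
		log.Fatal(err)
	}

	// A periodic reader pushes collected metrics to the exporter on an interval.
	mp := sdkmetric.NewMeterProvider(
		sdkmetric.WithReader(
			sdkmetric.NewPeriodicReader(exp, sdkmetric.WithInterval(30*time.Second)),
		),
	)
	defer mp.Shutdown(ctx)
	otel.SetMeterProvider(mp)

	// Wrap the application handler so otelhttp records request metrics and spans.
	mux := http.NewServeMux()
	mux.HandleFunc("/hello", func(w http.ResponseWriter, r *http.Request) {
		_, _ = w.Write([]byte("ok"))
	})
	handler := otelhttp.NewHandler(mux, "server", otelhttp.WithMeterProvider(mp))

	log.Fatal(http.ListenAndServe(":8080", handler))
}
```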
Is there something I am doing wrong in the setup with otelhttp?