Zlib inflate() holds and reuses its buffer until program exit; it is never freed, limiting memory for other uses. #45303
Comments
Is your complaint that the memory from this line isn't released until exit? `const compressed = compressBuffer(1000 * MEGABYTES);` If so, I don't understand why you expect it to be released earlier.
To follow.
Hi @Neustradamus, did you consider clicking on the […]
I'll close this out, seeing there's been no response from OP.
I was out on vacation and then EOY celebrations. Please re-open.
The decompression buffer, a 1000 MB allocation, is never released until process exit.
Not sure I follow. That buffer is 1,000 MB, not 3 MB. Edit: oh wait, you're saying it's 3 MB after compression, right? In the "expected behavior" section you fill out what you expect, but not why. Can you elaborate on that?
I agree the syntax is confusing for […]. The compressBuffer function:
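(The function body that originally followed did not survive this copy. Below is a minimal sketch of what compressBuffer might look like, assuming it deflates a zero-filled buffer with zlib.deflateSync; the real snippet may differ.)

```js
const zlib = require('zlib');

const MEGABYTES = 1024 * 1024;

// Hypothetical reconstruction: a 1000 MB zero-filled buffer deflates down
// to a few MB, matching the ~3 MB figure quoted in this thread.
function compressBuffer(size) {
  return zlib.deflateSync(Buffer.alloc(size));
}
```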
You can see the only memory left in use is 3 MB (from the compressed buffer, which is correct). If the original buffer were bigger, then the decompressed buffer that can't be freed would be bigger too. To answer the why: if I want to use that memory for something else after some processing (in my case, a big PNG inflate and decode), I no longer have that memory available; it's bound to zlib.
I understand decompression peaks at 2000 MB because it needs 1000 MB for the chunks list, and then another contiguous 1000 MB to unify the chunks from the list after decompression finishes (see the sketch below). But then one of those two is greedily kept by zlib (to be reused later, apparently) and can never be released for other uses.
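For illustration, a simplified sketch of where that 2x peak comes from, following the usual Node streaming pattern (this is not zlib's actual internals):

```js
const zlib = require('zlib');

// Output chunks are buffered in a list (~1000 MB spread across chunks),
// then copied into one contiguous Buffer (~1000 MB more) at the end;
// hence the ~2x peak while decompressing.
function inflateToBuffer(compressed, callback) {
  const inflater = zlib.createInflate();
  const chunks = [];
  inflater.on('data', (chunk) => chunks.push(chunk));
  inflater.on('end', () => callback(Buffer.concat(chunks)));
  inflater.end(compressed);
}
```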
(Updated previous comment with log references for clarity)
Yes.
To further elaborate with a concrete example: I'm constrained by memory-limited containers (8 GB). Please don't forget to re-open! Happy new year!
@addaleax I think this may be a misreporting bug introduced in (speculatively) ed715ef rather than a real memory leak.
Thank you. I don't think it's only a reporting problem. As stated by the second log of the original post, if I soft-limit the memory (using ulimit -Sd), the process crashes with OOM even though the data would otherwise fit.
@bnoordhuis I don't think this is an accounting error either. By […]
Inserting a […] (And since […])
Version
v16.17.1
Platform
Linux 5.15.0-52-generic #58~20.04.1-Ubuntu SMP Thu Oct 13 13:09:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
Subsystem
zlib
What steps will reproduce the bug?
Code snippet (run with `--expose-gc`):
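(The snippet itself did not survive this copy. Below is a minimal sketch of a repro based on the details in the comments: compressBuffer, MEGABYTES, and the 1000 MB figure come from the thread, and the original script may differ.)

```js
const zlib = require('zlib');

const MEGABYTES = 1024 * 1024;

// Force a GC (requires --expose-gc) and report the resident set size.
function logMemory(label) {
  global.gc();
  const rss = process.memoryUsage().rss;
  console.log(`${label}: rss = ${(rss / MEGABYTES).toFixed(0)} MB`);
}

// Hypothetical helper, per the comments: deflating 1000 MB of zeroes
// yields a ~3 MB compressed buffer.
function compressBuffer(size) {
  return zlib.deflateSync(Buffer.alloc(size));
}

logMemory('start');
const compressed = compressBuffer(1000 * MEGABYTES);
logMemory('after compress'); // only the ~3 MB compressed buffer remains live

{
  const decompressed = zlib.inflateSync(compressed);
  console.log(`decompressed ${decompressed.length / MEGABYTES} MB`);
  logMemory('after decompress'); // ~2000 MB peak: chunk list + contiguous copy
}
logMemory('after scope'); // expected: near zero; observed: ~1000 MB retained

process.on('exit', () => logMemory('exit'));
```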
How often does it reproduce? Is there a required condition?
Always
What is the expected behavior?
Memory should lower back to near zero after finishing each decompression routine/scope.
What do you see instead?
Memory never lowers back to "zero" until the exit handler, not allowing its reuse for something else.
It's not a cumulative memory leak, but the memory stays reserved forever.
In a system with a 2.5 GB memory limit it would crash due to OOM, even though the 2 GB would easily fit if the memory held by zlib were released. Try it:
(ulimit -Sd 2500000; node --expose-gc ./script.js)
Script output, unbounded:
$ node --expose-gc script.js
Script output, bounded to 2500 MB (on Ubuntu, YMMV):
( ulimit -Sd 2500000 ; node --expose-gc script.js )
Additional information
This is a related issue I previously raised and closed myself because it wasn't a leak and I had misunderstood how zlib works: #44750
But even now, understanding that peak memory use will be 2x the inflated buffer (the chunks must be unified into another buffer), that still doesn't explain why the memory is not released afterwards.
I decided to open a new issue because the original title would be misleading. It's not a memory leak per se, as it doesn't accumulate, but it never releases those buffers, which might be needed by other parts of the program.