Connectd memleak fix #5312
Conversation
Force-pushed from 1abeb3f to ea98832
ACK ea98832
I'm now running my node with this patch (PR through ea98832), and it's looking good. It may be too early to say, but I am not seeing the continual run-up in memory usage by
It's still looking great. I'd call this one licked.
When we moved gossip filtering to connectd, this aging got lost. Without this, we hit the 10,000-entry limit before expiring the gossip anti-echo cache. This is under 1M in allocations per peer, but in DEVELOPER mode each allocation adds 3 notifiers (32 bytes each) and a backtrace child (40 + 40 + 256 bytes), making it almost 10MB per peer, plus allocation overhead. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Changelog-Fixed: connectd: large memory usage with many peers fixed.
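For context, here is a minimal sketch of the kind of two-generation aging the commit restores: each received gossip hash is remembered in a "current" set, a lookup consults both the current and previous generation, and a periodic rotation discards the oldest generation so the anti-echo cache is pruned by time rather than only by its hard entry cap. The names (`rcvd_filter`, `rcvd_filter_age`, the fixed slot count) are illustrative, not the actual connectd symbols.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Illustrative two-generation anti-echo cache: a gossip message's hash is
 * remembered in `cur`; on each aging tick, `cur` becomes `old` and a fresh
 * `cur` is started, so an entry lives for at most two aging periods. */
#define FILTER_SLOTS 1024

struct rcvd_filter {
	uint64_t cur[FILTER_SLOTS];
	uint64_t old[FILTER_SLOTS];
	size_t ncur, nold;
};

static bool set_contains(const uint64_t *set, size_t n, uint64_t h)
{
	for (size_t i = 0; i < n; i++)
		if (set[i] == h)
			return true;
	return false;
}

/* Returns true if we have already seen (and should suppress) this hash. */
static bool rcvd_filter_check_and_add(struct rcvd_filter *f, uint64_t h)
{
	if (set_contains(f->cur, f->ncur, h)
	    || set_contains(f->old, f->nold, h))
		return true;
	if (f->ncur < FILTER_SLOTS)
		f->cur[f->ncur++] = h;
	return false;
}

/* Called periodically: drop the oldest generation instead of letting the
 * cache grow until it hits a hard entry limit. */
static void rcvd_filter_age(struct rcvd_filter *f)
{
	memcpy(f->old, f->cur, f->ncur * sizeof(f->cur[0]));
	f->nold = f->ncur;
	f->ncur = 0;
}
```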
channeld is the only user of these functions, since it now streams all gossip itself. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It was doing its own equivalent check anyway. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Instead of doing an allocation per entry, put the entry in directly. This means only 30-bit resolution on 32-bit machines, but if a bit of gossip gets accidentally suppressed that's ok. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
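A rough illustration of the allocation-free approach described above (an assumed layout, not the PR's actual code): the 64-bit message hash is folded into a pointer-sized slot and stored in the set directly, so no per-entry object is allocated. On a 32-bit machine the folded value has reduced resolution (the commit notes roughly 30 usable bits), so two distinct messages may occasionally collide; the worst case is that a piece of gossip is harmlessly suppressed.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical pointer-sized entry set: the hash value itself is the entry,
 * so nothing is allocated per remembered message. */
#define MAX_ENTRIES 2000

struct entry_set {
	uintptr_t entries[MAX_ENTRIES];
	size_t n;
};

/* Fold the 64-bit hash into whatever fits in a pointer-sized integer;
 * on 32-bit platforms this loses the upper bits of resolution. */
static uintptr_t fold_hash(uint64_t h)
{
	return (uintptr_t)(h ^ (h >> 32));
}

/* Returns false if the hash was already present (suppress the message),
 * true if it was newly recorded without any allocation. */
static bool entry_set_add(struct entry_set *s, uint64_t hash)
{
	uintptr_t key = fold_hash(hash);

	for (size_t i = 0; i < s->n; i++)
		if (s->entries[i] == key)
			return false;
	if (s->n < MAX_ENTRIES)
		s->entries[s->n++] = key;
	return true;
}
```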
10,000 per peer was too much. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Force-pushed from e2ec262 to 1807be7
Ack 1807be7. Removed one overzealous commit, added Changelog message.
This may not be the only leak, but it's one.
Reported-by: @whitslack