Buffers held by protobuf reader/writer use significant memory #322

Closed
whyrusleeping opened this issue Apr 16, 2019 · 7 comments

@whyrusleeping
Contributor

I'm noticing that the highest memory usage on mars right now is coming from the gogo protobuf readers and writers (they are the first and second highest users of memory, respectively).

I can provide the memory profiles here if needed, but this one seems pretty straightforward.

@anacrolix
Contributor

Thanks. I can take a look at this one.

@anacrolix anacrolix self-assigned this Apr 16, 2019
@anacrolix
Contributor

@whyrusleeping I'm not able to reproduce that specific memory user. Do you want to send through the profiles, here or privately? Is mars running the most recent release?

What I do find is that github.com/libp2p/go-addr-util.ResolveUnspecifiedAddresses is responsible for 37% of allocation by space, and 26% by object count.

(pprof) alloc_space
(pprof) top -cum
Showing nodes accounting for 1.26GB, 0.64% of 197.82GB total
Dropped 1186 nodes (cum <= 0.99GB)
Showing top 10 nodes out of 211
      flat  flat%   sum%        cum   cum%
         0     0%     0%    72.40GB 36.60%  github.com/libp2p/go-libp2p-swarm.(*Swarm).InterfaceListenAddresses
    0.47GB  0.24%  0.24%    72.31GB 36.55%  github.com/libp2p/go-addr-util.ResolveUnspecifiedAddresses
    0.44GB  0.22%  0.46%    57.66GB 29.15%  github.com/libp2p/go-addr-util.InterfaceAddresses
    0.17GB 0.087%  0.54%    57.07GB 28.85%  github.com/multiformats/go-multiaddr-net.InterfaceMultiaddrs
         0     0%  0.54%    55.13GB 27.87%  net.InterfaceAddrs
         0     0%  0.54%    55.13GB 27.87%  net.interfaceAddrTable
         0     0%  0.54%    54.96GB 27.78%  github.com/libp2p/go-libp2p/p2p/host/basic.(*BasicHost).SetStreamHandler.func1
    0.06GB 0.033%  0.58%    52.38GB 26.48%  github.com/libp2p/go-libp2p/p2p/protocol/identify.(*IDService).responseHandler
    0.08GB 0.039%  0.62%    50.12GB 25.33%  github.com/libp2p/go-libp2p/p2p/protocol/identify.(*IDService).consumeMessage
    0.04GB 0.021%  0.64%    49.67GB 25.11%  github.com/libp2p/go-libp2p/p2p/protocol/identify.(*IDService).IdentifyConn
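
For reference, a listing like the one above can be captured from a running node with the standard net/http/pprof endpoints and go tool pprof. A minimal sketch (the localhost:6060 address is an assumption, not anything the node configures by default):

// Minimal sketch: expose the standard pprof handlers so a heap profile can
// be pulled with `go tool pprof`. The listen address is an assumption.
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/* handlers
)

func main() {
	go func() {
		log.Println(http.ListenAndServe("localhost:6060", nil))
	}()
	// ... start the libp2p node here ...
	select {}
}

With that in place, go tool pprof http://localhost:6060/debug/pprof/heap followed by alloc_space and top -cum at the (pprof) prompt produces output in the shape shown above.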

@whyrusleeping
Contributor Author

@anacrolix Hrm... I've restarted the node since then and can't reproduce. I started using the datastore-backed peerstore and it's changed up everything.

@Stebalien
Member

The issue is in

switch err := r.ReadMsg(&req); err {
and it only shows up when we have a lot of connections. With 10,000 connections, a 4 KiB buffer per connection adds up pretty quickly (roughly 40 MB held by readers alone).
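
For illustration only (this is not the actual gogo-generated reader), the shape of the problem looks like this: each reader allocates a fixed buffer at construction time and holds it for the connection's whole lifetime, even while the connection is idle.

package sketch

import "io"

// delimitedReader is a hypothetical stand-in for the real reader type.
type delimitedReader struct {
	r   io.Reader
	buf []byte // e.g. 4096 bytes, pinned for as long as the connection lives
}

func newDelimitedReader(r io.Reader, maxSize int) *delimitedReader {
	// With maxSize = 4096 and 10,000 connections, this alone pins ~40 MB.
	return &delimitedReader{r: r, buf: make([]byte, maxSize)}
}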

@anacrolix
Contributor

The largest messages tend to be PUT_VALUE, with a 99th percentile size of ~992 bytes. I wonder if the buffers can be reduced further, to ~1 KB or some value derived from that.

@Stebalien
Member

Ideally, we'd use a tiny buffer to read the message size and then a larger buffer to read the actual message.
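
A minimal sketch of that idea, assuming uvarint-length-delimited framing and a sync.Pool for message bodies (this is not the code that eventually landed):

package sketch

import (
	"encoding/binary"
	"io"
	"sync"
)

var readBufPool = sync.Pool{
	New: func() interface{} { return make([]byte, 0, 1024) },
}

// byteReader reads the length prefix one byte at a time, so no
// per-connection buffer is needed just to learn the message size.
type byteReader struct {
	r io.Reader
	b [1]byte
}

func (br *byteReader) ReadByte() (byte, error) {
	if _, err := io.ReadFull(br.r, br.b[:]); err != nil {
		return 0, err
	}
	return br.b[0], nil
}

// readDelimited reads one uvarint-length-prefixed message, borrowing the
// body buffer from a pool instead of holding ~4 KiB per connection.
func readDelimited(r io.Reader, maxSize uint64) ([]byte, error) {
	size, err := binary.ReadUvarint(&byteReader{r: r})
	if err != nil {
		return nil, err
	}
	if size > maxSize {
		return nil, io.ErrUnexpectedEOF // real code would use a dedicated error
	}
	buf := readBufPool.Get().([]byte)
	if uint64(cap(buf)) < size {
		buf = make([]byte, size)
	}
	buf = buf[:size]
	if _, err := io.ReadFull(r, buf); err != nil {
		readBufPool.Put(buf[:0])
		return nil, err
	}
	return buf, nil // the caller returns buf to readBufPool after unmarshaling
}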

Stebalien added a commit that referenced this issue May 8, 2019
Allocate them as-needed and use a pool.

Work towards #322.
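
A hedged sketch of what "allocate them as-needed and use a pool" can look like on the writer side (not the commit's actual code; the framing, pool, and helper names here are assumptions):

package sketch

import (
	"encoding/binary"
	"io"
	"sync"
)

var writeBufPool = sync.Pool{
	New: func() interface{} { return make([]byte, 0, 1024) },
}

// marshaler is the minimal interface assumed here; gogo-generated messages
// provide Size and MarshalTo, but this sketch does not depend on that package.
type marshaler interface {
	Size() int
	MarshalTo(b []byte) (int, error)
}

// writeDelimited borrows a buffer only for the duration of one write instead
// of keeping a fixed buffer alive for every open connection.
func writeDelimited(w io.Writer, m marshaler) error {
	size := m.Size()
	need := size + binary.MaxVarintLen64
	buf := writeBufPool.Get().([]byte)
	if cap(buf) < need {
		buf = make([]byte, need)
	}
	buf = buf[:need]
	n := binary.PutUvarint(buf, uint64(size))
	if _, err := m.MarshalTo(buf[n : n+size]); err != nil {
		writeBufPool.Put(buf[:0])
		return err
	}
	_, err := w.Write(buf[:n+size])
	writeBufPool.Put(buf[:0])
	return err
}
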
@Stebalien
Member

Half of this has now been taken care of. The next part is to fix readers.

aarshkshah1992 pushed a commit to aarshkshah1992/go-libp2p-kad-dht that referenced this issue Aug 11, 2019
Allocate them as-needed and use a pool.

Work towards libp2p#322.