Hi all,
I am using the go-mysql replication feature, and after running for roughly 20 hours it already consumes ~53 GB of RAM.
go tool pprof http://10.10.23.91:6060/debug/pprof/heap:
Fetching profile over HTTP from http://10.10.23.91:6060/debug/pprof/heap
Saved profile in /home/stefan.becker/pprof/pprof.searchd.alloc_objects.alloc_space.inuse_objects.inuse_space.039.pb.gz
File: searchd
Type: inuse_space
Time: May 10, 2020 at 8:04am (CEST)
Entering interactive mode (type "help" for commands, "o" for options)
(pprof) top3
Showing nodes accounting for 72.24GB, 99.40% of 72.67GB total
Dropped 42 nodes (cum <= 0.36GB)
Showing top 3 nodes out of 17
flat flat% sum% cum cum%
53.16GB 73.15% 73.15% 53.18GB 73.18% github.com/siddontang/go-mysql/packet.(*Conn).ReadPacket
11.18GB 15.38% 88.53% 11.18GB 15.38% xxx/memorydb.newBuffer
7.90GB 10.87% 99.40% 7.90GB 10.87% github.com/allegro/bigcache/v2/queue.(*BytesQueue).allocateAdditionalMemory
(pprof)
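Side note on the numbers above: inuse_space only counts memory that is still live at sampling time. To separate genuine retention from plain allocation churn, the same endpoint can also be sampled by cumulative allocations:

go tool pprof -sample_index=alloc_space http://10.10.23.91:6060/debug/pprof/heap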
Using "list ReadPacket" in pprof, I get the following:
(pprof) list ReadPacket
Total: 73.75GB
ROUTINE ======================== github.com/siddontang/go-mysql/packet.(*Conn).ReadPacket in /home/stefan.becker/go/pkg/mod/github.com/siddontang/go-mysql@v0.0.0-20200424072754-803944a6e4ea/packet/conn.go
54.25GB 54.26GB (flat, cum) 73.57% of Total
. . 77: return c
. . 78:}
. . 79:
. . 80:func (c *Conn) ReadPacket() ([]byte, error) {
. . 81: // Here we use `sync.Pool` to avoid allocate/destroy buffers frequently.
. 6.05MB 82: buf := c.bufPool.Get()
. . 83: defer c.bufPool.Return(buf)
. . 84:
. 2MB 85: if err := c.ReadPacketTo(buf); err != nil {
. . 86: return nil, errors.Trace(err)
. . 87: } else {
54.25GB 54.25GB 88: result := append([]byte{}, buf.Bytes()...)
. . 89: return result, nil
. . 90: }
. . 91:}
. . 92:
. . 93:func (c *Conn) ReadPacketTo(w io.Writer) error {
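If I read this correctly, line 88 is the append that copies each packet out of the pooled buffer, and the profile says ~54 GB of those copies are still live. Since pprof charges in-use bytes to the line that allocated them, this would mean the returned packet slices are being retained somewhere downstream, not that the pool itself holds the memory. A minimal, self-contained demo of that attribution (a hypothetical program, not go-mysql code):

package main

import (
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof handlers on the default mux
)

// retained keeps the copies alive, mimicking packet slices held downstream.
var retained [][]byte

func main() {
	go func() { http.ListenAndServe("localhost:6060", nil) }()
	src := make([]byte, 64*1024)
	for i := 0; i < 1000; i++ {
		// The heap profile blames these live bytes on this append line,
		// just like conn.go:88 in the listing above.
		retained = append(retained, append([]byte{}, src...))
	}
	select {} // keep running so the profile can be fetched
}

Fetching http://localhost:6060/debug/pprof/heap from this program shows all the retained bytes flat against the append line, even though the slices are actually held elsewhere.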
My environment:
- go-mysql version: 803944a
- go version: go version go1.14.2 linux/amd64
- debian version: 9.3
- kernel version: Linux sdbm01 4.9.0-4-amd64 #1 SMP Debian 4.9.65-3+deb9u1 (2017-12-23) x86_64 GNU/Linux
A friend of mine is running version be37886 and does not see this issue. Comparing the two versions, I see that a sync.Pool for the read buffers has been introduced (that is the bufPool visible in the ReadPacket listing above).
To give a feeling for the replication volume, here is a list of my binlogs (roughly a 100 MB binlog every 20-30 seconds, i.e. 200-300 MB of binlog traffic per minute):
-rw-r----- 1 mysql mysql 101M May 10 08:06 mysql-bin.083923
-rw-r----- 1 mysql mysql 109M May 10 08:07 mysql-bin.083924
-rw-r----- 1 mysql mysql 101M May 10 08:07 mysql-bin.083925
-rw-r----- 1 mysql mysql 101M May 10 08:08 mysql-bin.083926
-rw-r----- 1 mysql mysql 101M May 10 08:08 mysql-bin.083927
-rw-r----- 1 mysql mysql 101M May 10 08:09 mysql-bin.083928
-rw-r----- 1 mysql mysql 101M May 10 08:09 mysql-bin.083929
-rw-r----- 1 mysql mysql 101M May 10 08:09 mysql-bin.083930
-rw-r----- 1 mysql mysql 109M May 10 08:10 mysql-bin.083931
-rw-r----- 1 mysql mysql 101M May 10 08:10 mysql-bin.083932
-rw-r----- 1 mysql mysql 107M May 10 08:11 mysql-bin.083933
-rw-r----- 1 mysql mysql 102M May 10 08:11 mysql-bin.083934
-rw-r----- 1 mysql mysql 101M May 10 08:11 mysql-bin.083935
-rw-r----- 1 mysql mysql 109M May 10 08:12 mysql-bin.083936
Any idea how to fix this? I understand that the sync.Pool is there to lower GC pressure, of course. A quick fix would probably be to remove it, but I do not know whether that is the right way.
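For what it is worth, a less drastic alternative than removing the pool entirely might be to stop returning buffers that have grown past a size cap, so a few huge packets cannot pin large buffers indefinitely. A minimal sketch of that idea, assuming a plain sync.Pool of bytes.Buffer (maxPooledSize, getBuffer, and putBuffer are my own names, not the actual go-mysql API):

package main

import (
	"bytes"
	"fmt"
	"sync"
)

// maxPooledSize is an assumed cap for illustration: buffers that grew
// beyond it are dropped instead of pooled, so the GC can reclaim them.
const maxPooledSize = 1 << 20 // 1 MiB

var bufPool = sync.Pool{
	New: func() interface{} { return new(bytes.Buffer) },
}

func getBuffer() *bytes.Buffer {
	return bufPool.Get().(*bytes.Buffer)
}

func putBuffer(buf *bytes.Buffer) {
	if buf.Cap() > maxPooledSize {
		return // oversized: let the GC reclaim it
	}
	buf.Reset()
	bufPool.Put(buf)
}

func main() {
	buf := getBuffer()
	buf.Write(make([]byte, 4<<20)) // simulate one unusually large packet
	putBuffer(buf)                 // dropped rather than kept in the pool
	fmt.Printf("pool keeps buffers up to %d bytes only\n", maxPooledSize)
}

Whether this would actually help here depends on whether the memory is held by the pool itself or by the copies handed to callers, which is what the listing above seems to suggest.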
Thanks for your help!