
Default flow control strategy is not very memory friendly for a cluster with many TiFlash nodes  #49353

Closed
@windtalker

Description

Bug Report

In TiDB, the gRPC client window size is always set to 1 GiB:
https://github.com/tikv/client-go/blob/e80e9ca1fe66c8cc251491d880f2d2645293c1a5/internal/client/client.go#L94-L95
This means that, before the application (TiDB) consumes the data, up to 1 GiB of messages can be buffered inside the gRPC core for each connection.
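For illustration, here is a minimal sketch of how such a window size is typically configured via grpc-go dial options. The constant names and the TiFlash address are assumptions for this sketch, not the actual client-go code; the real values are set at the lines linked above.

```go
package main

import (
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

// Hypothetical constants mirroring the 1 GiB windows described above; the
// actual values live in client-go at the lines linked in this issue.
const (
	grpcInitialWindowSize     = 1 << 30 // per-stream HTTP/2 flow-control window: 1 GiB
	grpcInitialConnWindowSize = 1 << 30 // per-connection flow-control window: 1 GiB
)

// dialTiFlash dials one TiFlash node. With windows this large, the gRPC
// core may buffer close to 1 GiB of unconsumed response data per
// connection before flow control pushes back on the sender.
func dialTiFlash(addr string) (*grpc.ClientConn, error) {
	return grpc.Dial(addr,
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithInitialWindowSize(grpcInitialWindowSize),
		grpc.WithInitialConnWindowSize(grpcInitialConnWindowSize),
	)
}

func main() {
	// "tiflash-0:3930" is a placeholder address used only for illustration.
	conn, err := dialTiFlash("tiflash-0:3930")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
}
```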
Consider a cluster with N TiFlash nodes and a simple query like

select * from t

In MPP mode this query is dispatched to all the TiFlash nodes and runs in parallel. On the TiDB side, TiDB establishes N connections to these TiFlash nodes and receives data from all of them. If the consumer is slower than the producers (which is almost always the case), the data is buffered in the gRPC core and can take up to N GiB in total; when N is large, this can easily cause the TiDB server to OOM.
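A quick back-of-the-envelope sketch of that worst case, assuming each of the N connections can buffer up to its full 1 GiB flow-control window:

```go
package main

import "fmt"

func main() {
	// Worst-case gRPC buffering on the TiDB side for an MPP query fanned
	// out to N TiFlash nodes, one connection per node, each allowed to
	// buffer up to its full 1 GiB window.
	const connWindowBytes = int64(1) << 30 // 1 GiB per connection
	for _, n := range []int{4, 16, 64} {
		worst := int64(n) * connWindowBytes
		fmt.Printf("N=%2d TiFlash nodes -> up to %d GiB buffered in grpc core\n",
			n, worst/(1<<30))
	}
}
```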

