Make max checkpoints configurable #22015
base: main
Conversation
There is an existing data_limit parameter to better control memory consumption. But I'm ok with making a hard limit configurable.
I tried to use that but found it difficult to tune correctly. I set it very high (8 GB) and it still caused a big drop in throughput. I didn't spend a lot of time debugging it because just reducing this hard limit solved the problem.
```rust
///
/// This is read once at startup and cached. Changing the environment variable at runtime will not
/// have any effect.
pub static MAX_CHECKPOINTS_IN_PROGRESS: Lazy<usize> = Lazy::new(|| {
```
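The excerpt cuts off at the initializer. A plausible shape for this kind of once-read, cached override is sketched below; the env var name and the 10,000 fallback (the previous hard-coded limit) are assumptions, not necessarily the PR's exact code:

```rust
use once_cell::sync::Lazy;

/// Assumed reconstruction: read the override once at startup and fall back
/// to the previous hard-coded default if the variable is unset or unparsable.
pub static MAX_CHECKPOINTS_IN_PROGRESS: Lazy<usize> = Lazy::new(|| {
    std::env::var("MAX_CHECKPOINTS_IN_PROGRESS")
        .ok()
        .and_then(|s| s.parse().ok())
        .unwrap_or(10_000)
});
```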
Can we include this as a part of config vs using an env var?
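For illustration, that suggestion would look something like a defaulted field on the indexer's config struct (hypothetical struct and field names, not the actual Sui config):

```rust
use serde::Deserialize;

/// Hypothetical sketch of carrying the limit in the indexer's config
/// instead of an env var; names and defaults are illustrative only.
#[derive(Debug, Deserialize)]
pub struct IndexerConfig {
    /// Maximum number of checkpoints that may be in progress at once.
    #[serde(default = "default_max_checkpoints_in_progress")]
    pub max_checkpoints_in_progress: usize,
}

fn default_max_checkpoints_in_progress() -> usize {
    10_000
}

fn main() {
    // e.g. loaded from YAML alongside the rest of the indexer config:
    let cfg: IndexerConfig =
        serde_yaml::from_str("max_checkpoints_in_progress: 100").unwrap();
    println!("{cfg:?}");
}
```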
Description
The 10k limit for in-flight checkpoints is way too high for the analytics indexer. If there is any slowdown in any part of the pipelines, we end up with a massive spike in memory consumption, from ~2 GB to ~60 GB.
I found we get far better throughput and lower, more predictable memory use by increasing the default batch_size (which is the number of concurrent GCS downloads) from 10 to 1000 (going beyond 1000 just increased memory use without increasing throughput) and limiting the number of checkpoints actually being worked on to ~100. This makes sense to me because ~`<num_cpus> * 10` compute tasks in the tokio pool is plenty to ensure the CPUs stay busy; creating more tasks than that is what causes extreme memory explosions when one of the pipelines slows down.
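As a rough illustration of that shape (a minimal sketch with hypothetical stage functions, not the indexer's actual pipeline code):

```rust
use futures::stream::{self, StreamExt};

// Hypothetical stand-in for the real GCS fetch.
async fn download_checkpoint(seq: u64) -> Vec<u8> {
    vec![(seq % 256) as u8; 1024]
}

// Hypothetical stand-in for the CPU-bound analytics work on one checkpoint.
async fn process_checkpoint(bytes: Vec<u8>) {
    let _ = bytes.len();
}

#[tokio::main]
async fn main() {
    let batch_size = 1000; // concurrent GCS downloads (I/O-bound)
    let max_in_progress = 100; // checkpoints actually being worked on

    stream::iter(0u64..100_000)
        .map(download_checkpoint)
        // Keep up to `batch_size` downloads in flight to saturate the network.
        .buffer_unordered(batch_size)
        // But only process ~100 checkpoints at a time, which bounds peak
        // memory even when a downstream pipeline slows down.
        .for_each_concurrent(max_in_progress, |bytes| async move {
            process_checkpoint(bytes).await;
        })
        .await;
}
```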
Test plan
This code change is already live, and after reducing this limit to 100 the backfill has been stable and fast.
Release notes
Check each box that your changes affect. If none of the boxes relate to your changes, release notes aren't required.
For each box you select, include information after the relevant heading that describes the impact of your changes that a user might notice and any actions they must take to implement updates.