Make max checkpoints configurable #22015

Open · nickvikeras wants to merge 1 commit into main

Conversation

nickvikeras (Contributor)

Description

The 10k limit for in-flight checkpoints is far too high for the analytics indexer. If any part of the pipeline slows down, memory consumption spikes from ~2 GB to ~60 GB.

I found we get far better throughput and lower, more predictable memory use by increasing the default `batch_size` (the number of concurrent GCS downloads) from 10 to 1000 (going beyond 1000 only increased memory use without improving throughput) and by limiting the number of checkpoints actually being worked on to ~100 (see the sketch after this list). This makes sense to me because:

  • I/O is the primary bottleneck, and a single GCP instance can handle far more than 10 concurrent GCS downloads (this was probably different on the old, non-GCS infra). It's fine to have a large number of completed downloads sitting in this buffer; they cost a small, predictable amount of memory.
  • Throwing ~`<num_cpus> * 10` compute tasks at the tokio pool is plenty to keep the CPUs busy. Creating more tasks than that is what causes the extreme memory explosions when one of the pipelines slows down.
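To make this concrete, here is a minimal sketch of that shape. It is not the ingestion framework's actual code; `fetch_checkpoint` and `process_checkpoint` are hypothetical stand-ins. It shows a wide download fan-out in front of a small, semaphore-bounded set of in-progress checkpoints:

```rust
use std::sync::Arc;

use futures::stream::{self, StreamExt};
use tokio::sync::Semaphore;

// Stand-in for a GCS download: I/O-bound, cheap to run many of concurrently.
async fn fetch_checkpoint(seq: u64) -> Vec<u8> {
    let _ = seq;
    vec![0u8; 1024]
}

// Stand-in for the CPU-bound pipeline work on one checkpoint.
async fn process_checkpoint(seq: u64, bytes: Vec<u8>) {
    let _ = (seq, bytes.len());
}

#[tokio::main]
async fn main() {
    let batch_size = 1000; // concurrent GCS downloads
    let max_in_progress = Arc::new(Semaphore::new(100)); // checkpoints being worked on

    stream::iter(0u64..10_000)
        .map(|seq| async move { (seq, fetch_checkpoint(seq).await) })
        .buffer_unordered(batch_size)
        .for_each_concurrent(None, |(seq, bytes)| {
            let sem = max_in_progress.clone();
            async move {
                // Completed downloads wait here cheaply; only ~100 checkpoints
                // are ever being processed at once, so a slow pipeline cannot
                // fan out into an unbounded pile of compute tasks.
                let _permit = sem.acquire().await.unwrap();
                process_checkpoint(seq, bytes).await;
            }
        })
        .await;
}
```

The two knobs line up with the PR: `batch_size` controls download parallelism, while the semaphore plays the role of `MAX_CHECKPOINTS_IN_PROGRESS`.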

Test plan

This code change is already live; after reducing the limit to 100, the backfill has been stable and fast.


Release notes

Check each box that your changes affect. If none of the boxes relate to your changes, release notes aren't required.

For each box you select, include information after the relevant heading that describes the impact of your changes that a user might notice and any actions they must take to implement updates.

  • Protocol:
  • Nodes (Validators and Full nodes):
  • gRPC:
  • JSON-RPC:
  • GraphQL:
  • CLI:
  • Rust SDK:

nickvikeras requested review from phoenix-o and bmwill (May 1, 2025)
phoenix-o (Contributor)

There is an existing `data_limit` parameter intended to give finer control over memory consumption, but I'm okay with making the hard limit configurable.

nickvikeras (Author) commented May 1, 2025

> There is an existing `data_limit` parameter intended to give finer control over memory consumption, but I'm okay with making the hard limit configurable.

I tried to use that but found it difficult to tune correctly. I set it very high (8 GB) and it still caused a big drop in throughput. I didn't spend much time debugging it because reducing this hard limit solved the problem.

The diff under review adds the new static (excerpt from the review thread):

```rust
///
/// This is read once at startup and cached. Changing the environment variable at runtime will not
/// have any effect.
pub static MAX_CHECKPOINTS_IN_PROGRESS: Lazy<usize> = Lazy::new(|| {
    // Closure body not shown in the excerpt; a plausible reconstruction (assumed,
    // not the PR's exact code): parse an env override, else keep the old 10k default.
    std::env::var("MAX_CHECKPOINTS_IN_PROGRESS").ok().and_then(|v| v.parse().ok()).unwrap_or(10_000)
});
```
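Assuming the variable name matches the static (as in the reconstruction above), operators would lower the cap per deployment with something like `MAX_CHECKPOINTS_IN_PROGRESS=100` in the indexer's environment; because the value is cached in the `Lazy`, it is read once at process start.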
A reviewer (Contributor) commented on this excerpt:
Can we include this as part of the config instead of using an env var?
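For reference, the config-based variant the reviewer suggests might look roughly like this (a hypothetical sketch; `IngestionConfig` and its field names are assumptions, not Sui's actual types):

```rust
/// Hypothetical sketch only: carry the cap on the ingestion config rather
/// than reading an env var. Names and defaults are assumptions drawn from
/// the numbers discussed in this PR, not actual Sui code.
pub struct IngestionConfig {
    /// Concurrent GCS downloads (raised from 10 to 1000 in this PR).
    pub batch_size: usize,
    /// Hard cap on checkpoints being worked on at once (~100 per this PR).
    pub max_checkpoints_in_progress: usize,
}

impl Default for IngestionConfig {
    fn default() -> Self {
        Self {
            batch_size: 1000,
            max_checkpoints_in_progress: 100,
        }
    }
}
```

A config field would also sidestep the "read once, cached" caveat in the doc comment, since the value flows in at construction time rather than from process-global state.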
