
Conversation

@sergeytimoshin (Contributor) commented Jan 21, 2026

Summary by CodeRabbit

  • New Features

    • Compression now covers CTokens, Mints, and PDAs; CLI option to specify compressible PDA programs and startup validation for compressible config; generic subscription model and indexer-backed compression paths.
  • Tests

    • Extensive end-to-end tests for Mint and PDA bootstrap, subscription, and compression; added npm script to run compression-related test suite.
  • Chores

    • Project reorganization, dependency updates, and .gitignore updated to ignore local spawn scripts.


@coderabbitai bot (Contributor) commented Jan 21, 2026

📝 Walkthrough

Reorganizes compressible subsystem into CTokens, Mints, and PDAs with new trackers, compressors, bootstrappers, generic subscription/traits, RPC V2 pagination helpers, CLI/config additions for PDA programs, Indexer-bound RPC paths, extensive integration tests, and removal of legacy single-token tracker/bootstrap modules.

Changes

Cohort / File(s) / Summary
Build & Tooling
forester/Cargo.toml, forester/package.json, forester/.gitignore
Add deps (hex, light-token-client path dev-dep), new npm script test:compressible, and ignore spawn.sh/spawn_devnet.sh.
CLI & Config
forester/src/cli.rs, forester/src/config.rs
Add compressible_pda_programs: Vec<String> CLI arg; parse into PdaProgramConfig; require --ws-rpc-url when compressible enabled or PDA programs provided.
Shared RPC & Bootstrap
forester/src/compressible/bootstrap_helpers.rs
New RPC utilities: timed requests, JSON extraction, payload builders, localhost detection, getProgramAccountsV2 pagination, and two bootstrap entry points (standard & v2) with shutdown support.
Core config & traits
forester/src/compressible/config.rs, forester/src/compressible/traits.rs, forester/src/compressible/validation.rs
Add constants and PdaProgramConfig; extend CompressibleConfig with pda_programs; introduce CompressibleState, CompressibleTracker, SubscriptionHandler; add validate_compressible_config.
Subscription system
forester/src/compressible/subscriber.rs
Replace log/registry subscriber with generic AccountSubscriber<H: SubscriptionHandler>, SubscriptionConfig, MemcmpFilter, reconnect/backoff and name-based logging; decouple from specific trackers.
CToken subsystem
forester/src/compressible/ctoken/*
New CTokenAccountTracker, CTokenAccountState, CTokenCompressor, and bootstrap_ctoken_accounts; compressor now requires Indexer and uses indexer-based flows.
Mint subsystem
forester/src/compressible/mint/*
New MintAccountTracker, MintAccountState, MintCompressor (batch & concurrent), and bootstrap_mint_accounts.
PDA subsystem
forester/src/compressible/pda/*
New PdaAccountTracker, PdaAccountState, PdaCompressor (with cached program config), and bootstrap_pda_accounts; CLI accepts multiple PDA program configs.
Module reorg / removals
forester/src/compressible/mod.rs; removed: bootstrap.rs, state.rs, types.rs
Reorganize compressible into submodules (bootstrap_helpers, ctoken, mint, pda, traits, validation); remove legacy CompressibleAccountTracker, legacy bootstrap/compressor/types.
Epoch & Pipeline wiring
forester/src/epoch_manager.rs, forester/src/lib.rs, tests referencing forester/src/cli.rs
Add Indexer bound to RPC generics; thread CToken/PDA/Mint trackers through pipeline and epoch manager; add dispatch_pda_compression / dispatch_mint_compression; update run_pipeline signature and startup bootstrap/shutdown wiring.
Subscription/bootstrap integration
forester/src/compressible/pda/bootstrap.rs, forester/src/compressible/ctoken/bootstrap.rs, forester/src/compressible/mint/bootstrap.rs
New per-subsystem bootstrap entrypoints using shared helpers; support shutdown signalling and V2 pagination; per-program error isolation.
Compressor implementations
forester/src/compressible/ctoken/compressor.rs, forester/src/compressible/mint/compressor.rs, forester/src/compressible/pda/compressor.rs
New compressors adapted to Indexer paths, concurrency controls, confirmation-based removal of tracked accounts, program-config caching for PDAs, and both batched & concurrent flows.
State & types
forester/src/compressible/ctoken/state.rs, forester/src/compressible/mint/state.rs, forester/src/compressible/pda/state.rs, forester/src/compressible/*/types.rs
Add per-subsystem trackers and state types with update_from_account, ready-to-compress selection, and SubscriptionHandler implementations; removed unified CompressibleAccountState/tracker.
Subscriber & tests updates
forester/src/compressible/subscriber.rs, forester/tests/*, sdk-libs/client/src/local_test_validator.rs
Tests and examples updated to add upgradeable_programs and pda_programs fields; many tests adjusted to new run_pipeline generic usage and tracker types; large new integration tests for Mint and PDA flows.
SDK & program-libs
sdk-libs/, program-libs/compressible/
Remove unused instruction parameter, change RpcError handling to inspect HTTP status, add solana-rent dep and non-panicking rent fallback.

Sequence Diagram(s)

sequenceDiagram
    participant Forester as Forester (bootstrap)
    participant BSH as BootstrapHelpers
    participant RPC as Solana RPC
    participant Tracker as AccountTracker

    Forester->>BSH: bootstrap_standard_api(program_id, filters, process_fn)
    BSH->>RPC: getProgramAccounts / getProgramAccountsV2
    RPC-->>BSH: accounts[]
    loop per account
        BSH->>BSH: extract_account_fields()
        BSH->>Forester: process_fn(RawAccountData)
        Forester->>Tracker: update_from_account(pubkey, data, lamports)
        Tracker->>Tracker: deserialize & compute compressible_slot
        Tracker-->>Tracker: insert/update map
    end
    BSH-->>Forester: (fetched, inserted)
sequenceDiagram
    participant Compressor as Compressor
    participant Tracker as AccountTracker
    participant Indexer as Indexer/RPC
    participant Tx as TransactionSubmit

    Compressor->>Tracker: get_ready_to_compress(current_slot)
    Tracker-->>Compressor: Vec<AccountState>
    loop per account/batch
        Compressor->>Indexer: fetch compressed account & proof
        Indexer-->>Compressor: compressed data & proof
        Compressor->>Compressor: build compress_and_close instruction(s)
        Compressor->>Tx: send_transaction & await confirmation
        Tx-->>Compressor: signature / confirmation
        alt confirmed
            Compressor->>Tracker: remove(pubkey)
        else failed
            Compressor->>Tracker: retain for retry
        end
    end
    Compressor-->>Compressor: return results
sequenceDiagram
    participant WS as WebSocket
    participant Sub as AccountSubscriber
    participant Handler as SubscriptionHandler
    participant Tracker as AccountTracker

    WS->>Sub: account update notification
    Sub->>Handler: handle_update(pubkey, program_id, data, lamports)
    Handler->>Tracker: update_from_account(pubkey, program_id, data, lamports)
    Tracker->>Tracker: parse & insert/update state
    Handler-->>Sub: Result<()>

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120 minutes

Suggested labels

ai-review

Suggested reviewers

  • SwenSchaeferjohann

Poem

🌱 Three trackers planted, tidy and grand,
CTokens, Mints, and PDAs take stand.
Bootstraps gather, subscribers hum,
Compressors close — confirmations come. 🎉

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
  • Description Check: Passed. Check skipped; CodeRabbit’s high-level summary is enabled.
  • Title check: Passed. The PR title 'feat: forester: pda & mint compression' directly and clearly summarizes the primary change: adding PDA and mint compression support to the forester service.
  • Docstring Coverage: Passed. Docstring coverage is 100.00%, which meets the required threshold of 70.00%.


@coderabbitai bot (Contributor) left a comment


Actionable comments posted: 14

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (5)
forester/src/compressible/ctoken/compressor.rs (2)

116-129: Extension lookup logic is duplicated from state.rs.

The code at lines 117-129 that finds the Compressible extension is nearly identical to the pattern in ctoken/state.rs lines 19-28. Consider extracting a shared helper function to reduce duplication.

♻️ Potential helper extraction
// In a shared module like ctoken/utils.rs
pub fn get_compression_info(account: &Token) -> Option<&CompressionInfo> {
    account.extensions.as_ref().and_then(|exts| {
        exts.iter().find_map(|ext| match ext {
            ExtensionStruct::Compressible(comp) => Some(&comp.info),
            _ => None,
        })
    })
}

172-181: Clarify the self_program: None comment.

The comment "Critical: None = no light_system_cpi_authority" is helpful but could be expanded. Why is it critical that this is None? What would happen if it were Some? A brief explanation would help future maintainers.

forester/tests/legacy/address_v2_test.rs (1)

179-185: Update run_pipeline call to match current function signature.

The run_pipeline function now requires 7 parameters, but this test passes only 5. The missing parameters are shutdown_compressible and shutdown_bootstrap. Update the call to:

let service_handle = tokio::spawn(run_pipeline::<LightClient>(
    Arc::from(config.clone()),
    None,
    None,
    shutdown_receiver,
    None, // or Some(shutdown_compressible_receiver) if needed
    None, // or Some(shutdown_bootstrap_receiver) if needed
    work_report_sender,
));

This matches the pattern used in e2e_test.rs and test_batch_append_spent.rs.

sdk-libs/client/src/local_test_validator.rs (1)

31-69: Prefer Command::args over sh -c to avoid injection/quoting bugs. Now that upgradeable programs add more user-provided segments, building a shell string makes path escaping fragile and risks accidental injection. Use Command::new(...).args(...) instead.

♻️ Suggested refactor
-        let path = "cli/test_bin/run test-validator";
-        let mut path = format!("{}/{}", project_root.trim(), path);
+        let runner = format!("{}/cli/test_bin/run", project_root.trim());
+        let mut args: Vec<String> = vec!["test-validator".into()];
         if !config.enable_indexer {
-            path.push_str(" --skip-indexer");
+            args.push("--skip-indexer".into());
         }

         if let Some(limit_ledger_size) = config.limit_ledger_size {
-            path.push_str(&format!(" --limit-ledger-size {}", limit_ledger_size));
+            args.push("--limit-ledger-size".into());
+            args.push(limit_ledger_size.to_string());
         }

         for sbf_program in config.sbf_programs.iter() {
-            path.push_str(&format!(
-                " --sbf-program {} {}",
-                sbf_program.0, sbf_program.1
-            ));
+            args.push("--sbf-program".into());
+            args.push(sbf_program.0.clone());
+            args.push(sbf_program.1.clone());
         }

         for upgradeable_program in config.upgradeable_programs.iter() {
-            path.push_str(&format!(
-                " --upgradeable-program {} {} {}",
-                upgradeable_program.0, upgradeable_program.1, upgradeable_program.2
-            ));
+            args.push("--upgradeable-program".into());
+            args.push(upgradeable_program.0.clone());
+            args.push(upgradeable_program.1.clone());
+            args.push(upgradeable_program.2.clone());
         }

         if !config.enable_prover {
-            path.push_str(" --skip-prover");
+            args.push("--skip-prover".into());
         }

-        println!("Starting validator with command: {}", path);
-
-        let child = Command::new("sh")
-            .arg("-c")
-            .arg(path)
+        println!("Starting validator with command: {} {}", runner, args.join(" "));
+
+        let child = Command::new(&runner)
+            .args(&args)
             .stdin(Stdio::null()) // Detach from stdin
             .stdout(Stdio::null()) // Detach from stdout
             .stderr(Stdio::null()) // Detach from stderr
             .spawn()
             .expect("Failed to start server process");
forester/src/epoch_manager.rs (1)

1791-1938: PDA/Mint compression is gated behind the CToken compression tree.
dispatch_pda_compression/dispatch_mint_compression are only invoked via dispatch_compression, which itself runs only if the compression tree is inserted (currently tied to compressible_tracker). If a node is configured for PDA/Mint compression without a CToken tracker, these new paths won’t execute. Consider inserting the compression tree when any compressible tracker is present.

🔧 Suggested adjustment (in wait_for_active_phase)
-        if self.compressible_tracker.is_some() && self.config.compressible_config.is_some() {
+        if (self.compressible_tracker.is_some()
+            || self.pda_tracker.is_some()
+            || self.mint_tracker.is_some())
+            && self.config.compressible_config.is_some()
+        {
🤖 Fix all issues with AI agents
In `@forester/src/compressible/bootstrap_helpers.rs`:
- Around line 26-45: send_rpc_request currently issues requests without any
timeout, causing bootstraps (bootstrap_ctoken_accounts, bootstrap_pda_accounts,
bootstrap_mint_accounts) to hang if an RPC stalls; wrap the entire request+parse
sequence in a tokio::time::timeout(Duration::from_secs(30), ...) so the
.send().await and .json().await are bounded, return a clear timeout Err when
exceeded, and import std::time::Duration and tokio::time::timeout; keep the
timeout duration (30s) consistent with tree_data_sync.rs and ensure errors
distinguish timeout from other request/parse failures in send_rpc_request.
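
A minimal sketch of the bounded request flow described above, assuming reqwest's json feature is enabled (names are illustrative, not the crate's current code):

use std::time::Duration;

use tokio::time::timeout;

const RPC_REQUEST_TIMEOUT: Duration = Duration::from_secs(30);

async fn send_rpc_request(
    client: &reqwest::Client,
    rpc_url: &str,
    payload: serde_json::Value,
) -> anyhow::Result<serde_json::Value> {
    let request = async {
        // Bound both the send and the body parse, not just the connect.
        let response = client.post(rpc_url).json(&payload).send().await?;
        let value = response.json::<serde_json::Value>().await?;
        anyhow::Ok(value)
    };
    timeout(RPC_REQUEST_TIMEOUT, request)
        .await
        // A distinct message lets callers tell timeouts from request/parse failures.
        .map_err(|_| anyhow::anyhow!("RPC request timed out after {RPC_REQUEST_TIMEOUT:?}"))?
}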

In `@forester/src/compressible/config.rs`:
- Around line 95-97: Add a short doc comment on the pda_programs field
explaining why it uses #[serde(skip)] — state that this field is intentionally
omitted from file-based config and is instead populated from CLI parsing (via
PdaProgramConfig parsing where s.parse::<PdaProgramConfig>() is used in the
config parsing logic), so maintainers understand the design and where values
originate; reference the field name pda_programs, the type PdaProgramConfig, and
the serde(skip) attribute in the comment.

In `@forester/src/compressible/ctoken/compressor.rs`:
- Around line 233-236: The code removes compressed accounts from the tracker
immediately after create_and_send_transaction returns, which risks losing
tracking if the tx is dropped or the process crashes; modify the flow in the
compressor (around create_and_send_transaction and the loop calling
self.tracker.remove(&account_state.pubkey)) to use a two-phase approach: first
mark account_states as "pending" (e.g., set a pending flag or move them to a
pending set in the tracker) before calling create_and_send_transaction, then on
confirmed send completion remove them via tracker.remove; alternatively wait for
on-chain confirmation before calling tracker.remove; update the tracker API or
add a pending state so create_and_send_transaction, account_states, and
tracker.remove are coordinated to avoid immediate removal on send success alone.
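
A hedged sketch of the two-phase flow; mark_pending and unmark_pending are hypothetical tracker APIs that would need to be added:

tracker.mark_pending(&account_state.pubkey); // phase 1: flag before sending
let confirmed = match rpc
    .create_and_send_transaction(&instructions, &payer.pubkey(), &[&payer])
    .await
{
    Ok(signature) => rpc.confirm_transaction(signature).await.unwrap_or(false),
    Err(_) => false,
};
if confirmed {
    tracker.remove(&account_state.pubkey); // phase 2: untrack only after confirmation
} else {
    tracker.unmark_pending(&account_state.pubkey); // keep the account for a retry pass
}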

In `@forester/src/compressible/ctoken/state.rs`:
- Around line 55-60: The method get_compressible_accounts currently calls
get_ready_to_compress(u64::MAX) which effectively ignores the compressible_slot
filter and returns all accounts before filtering by
state.account.is_token_account(), so rename get_compressible_accounts to
get_all_token_accounts or add a doc comment on get_compressible_accounts
explaining that it intentionally fetches all accounts (ignoring
compressible_slot) and then filters to token accounts; update the function name
(or docs) and any call sites to reflect the clarified intent, referencing
get_ready_to_compress, get_compressible_accounts (or new name), and
is_token_account to locate the logic to change.

In `@forester/src/compressible/mint/compressor.rs`:
- Around line 29-37: The Clone impl for MintCompressor manually clones
payer_keypair using insecure_clone which is correct but undocumented; update the
Clone implementation for MintCompressor (impl<R: Rpc + Indexer> Clone for
MintCompressor<R>) to include a short comment above the payer_keypair:
self.payer_keypair.insecure_clone() line explaining that insecure_clone is
intentionally used because multiple MintCompressor instances need shared signing
authority for concurrent compression and that this duplication is safe in this
context.
- Around line 52-134: compress_batch currently builds instructions for all
MintAccountState entries and sends them in a single transaction (compress_batch,
create_mint_action_instruction), which can exceed Solana's transaction size
limit; change compress_batch to split mint_states into smaller batches (use a
constant like MAX_MINTS_PER_TX or dynamic chunking by measuring instruction
sizes) and for each chunk: build instructions in parallel with the existing
instruction_futures logic, send each chunk as its own
create_and_send_transaction call, handle/propagate per-chunk errors,
collect/return a meaningful result (e.g., last Signature or Vec<Signature>), and
ensure tracker.remove(&mint_state.pubkey) is called only after the corresponding
chunk's transaction succeeds so you don't drop unconfirmed mints.
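
A rough sketch of the chunked flow, where build_instructions_for is a hypothetical helper wrapping the existing instruction_futures logic and MAX_MINTS_PER_TX is an illustrative cap:

const MAX_MINTS_PER_TX: usize = 2;

let mut signatures = Vec::new();
for chunk in mint_states.chunks(MAX_MINTS_PER_TX) {
    let instructions = build_instructions_for(chunk).await?;
    let signature = rpc
        .create_and_send_transaction(&instructions, &payer.pubkey(), &[&payer])
        .await?;
    // Untrack only the mints whose chunk actually landed.
    for mint_state in chunk {
        tracker.remove(&mint_state.pubkey);
    }
    signatures.push(signature);
}
Ok(signatures) // callers see every per-chunk signature instead of just the last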

In `@forester/src/compressible/pda/state.rs`:
- Around line 26-42: The code uses Rent::default() inside
calculate_compressible_slot which can be incorrect in non-standard/test
environments; change calculate_compressible_slot to accept a Rent (e.g., add a
parameter rent: &Rent) and use rent.minimum_balance(account_size) instead of
Rent::default(), then update callers to pass the runtime sysvar Rent when
available (Rent::get() / RPC deserialized rent) or Rent::default() in off-chain
tests; ensure function signature changes propagate to places that call
calculate_compressible_slot and continue to call get_last_funded_epoch with the
computed rent_exemption.
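
One possible shape for the change, assuming get_last_funded_epoch accepts the rent-exempt minimum and returns a Result (signatures are illustrative):

use solana_rent::Rent;

fn calculate_compressible_slot(
    compression_info: &CompressionInfo,
    account_size: usize,
    rent: &Rent, // sysvar rent at runtime; Rent::default() in off-chain tests
) -> anyhow::Result<u64> {
    let rent_exemption = rent.minimum_balance(account_size);
    let last_funded_epoch = compression_info.get_last_funded_epoch(rent_exemption)?;
    Ok(last_funded_epoch * SLOTS_PER_EPOCH)
}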

In `@forester/src/compressible/traits.rs`:
- Around line 38-44: The current get_ready_to_compress method clones every ready
account, which can allocate heavily if the tracker contains many accounts;
modify the API to avoid cloning by either (1) providing a get_ready_pubkeys
method that returns Vec<Pubkey> by mapping entry.key() for entries where
entry.value().is_ready_to_compress(current_slot), or (2) returning an iterator
over references (or keyed entries) so callers can decide when to clone; update
callers that expect Vec<S> (references to get_ready_to_compress usages) to use
the new get_ready_pubkeys or iterator variant and adjust downstream logic to
look up or clone only the few accounts actually processed.
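
A minimal sketch of option (1), assuming the tracker wraps a DashMap<Pubkey, S>:

// Returns keys only, so no account state is cloned until a caller
// actually looks one up for processing.
pub fn get_ready_pubkeys(&self, current_slot: u64) -> Vec<Pubkey> {
    self.accounts
        .iter()
        .filter(|entry| entry.value().is_ready_to_compress(current_slot))
        .map(|entry| *entry.key())
        .collect()
}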

In `@forester/src/epoch_manager.rs`:
- Around line 1942-2127: In dispatch_pda_compression and
dispatch_mint_compression ensure work aborts once consecutive_eligibility_end
passes by setting and passing a shared cancellation flag to the compressors and
re-checking eligibility before/while processing batches: create/keep the
Arc<AtomicBool> cancelled (already present) and when current_slot >=
consecutive_eligibility_end call cancelled.store(true, Ordering::Relaxed) (e.g.,
before calling pda_compressor.compress_batch_concurrent and in the loop over
program batches), break out or stop scheduling new batches, and ensure
pda_compressor.compress_batch_concurrent and
mint_compressor.compress_batch_concurrent are invoked with that cancelled flag
and honor it (or process accounts in smaller chunks and re-check current_slot
between chunks), referencing dispatch_pda_compression,
dispatch_mint_compression, pda_compressor.compress_batch_concurrent,
mint_compressor.compress_batch_concurrent, and the cancelled Arc<AtomicBool>.
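
A sketch of the chunked re-check; slot_tracker, CHUNK_SIZE, and the extra cancelled parameter on compress_batch_concurrent are illustrative parts of the proposed change, not existing code:

use std::sync::{
    atomic::{AtomicBool, Ordering},
    Arc,
};

let cancelled = Arc::new(AtomicBool::new(false));
for chunk in ready_accounts.chunks(CHUNK_SIZE) {
    if slot_tracker.estimated_current_slot() >= consecutive_eligibility_end {
        cancelled.store(true, Ordering::Relaxed);
        break; // stop scheduling new batches once the window closes
    }
    let _results = pda_compressor
        .compress_batch_concurrent(chunk, max_concurrent, cancelled.clone())
        .await;
}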

In `@forester/src/lib.rs`:
- Around line 300-322: The oneshot sender pda_shutdown_tx is dropped immediately
after spawning the task, causing the pda_shutdown_rx to cancel and trigger an
immediate shutdown in compressible::pda::bootstrap_pda_accounts; to fix, either
keep pda_shutdown_tx alive until you actually want to signal shutdown (e.g.,
store it in the outer scope or in a shutdown controller and drop/send when
shutting down) or change bootstrap_pda_accounts to accept
Option<tokio::sync::oneshot::Receiver<()>> and pass None here so the receiver is
not cancelled; update the call site around pda_shutdown_tx/pda_shutdown_rx and
the bootstrap_pda_accounts signature/handling accordingly.
- Around line 353-369: The mint bootstrap is dropping mint_shutdown_tx
immediately after spawning the task, which signals shutdown immediately; stop
dropping it there and instead keep the sender alive until actual shutdown—remove
the immediate drop(mint_shutdown_tx) and either store mint_shutdown_tx alongside
the other shutdown senders (so it can be dropped when you intend to stop
services) or move ownership into the overall shutdown manager; ensure
compressible::mint::bootstrap_mint_accounts receives the mint_shutdown_rx and
only sees the sender dropped when you explicitly trigger shutdown.

In `@forester/tests/test_compressible_mint.rs`:
- Around line 176-186: Add a simple iteration counter inside the existing
polling loop that checks tracker (the while start.elapsed() < timeout loop using
start, timeout, tracker, and sleep) and emit a diagnostic message every N
iterations (e.g., every 20) showing elapsed seconds to aid debugging of slow
bootstrap; increment the counter each loop and leave the existing timeout and
early-exit behavior untouched so the test still breaks when tracker.is_empty()
becomes false.

In `@forester/tests/test_compressible_pda.rs`:
- Around line 588-589: Add an explanatory comment above the
get_ready_to_compress_for_program call: explain that current_slot + 1000 is
intentionally used to simulate a future slot so accounts appear past their
compressible_slot threshold for testing (we can't warp slots in the test
validator). Reference the variables/functions in the comment (current_slot,
get_ready_to_compress_for_program, ready_accounts, compressible_slot) so future
readers know this is deliberate test scaffolding.

In `@sdk-libs/client/src/rpc/errors.rs`:
- Around line 94-98: The rate-limit detection in errors.rs currently treats any
occurrence of "429" in error_str as a match which can yield false positives;
update the is_rate_limited logic to match 429 only when it appears as an HTTP
status token or JSON field (e.g., word boundary or patterns like "status": 429
or "HTTP/<version> 429") rather than any substring—use a precise regex or
explicit checks against error_str/error_str_lower in the same block that sets
is_rate_limited so functions/variables like is_rate_limited, error_str and
error_str_lower are adjusted to only accept true HTTP status occurrences of 429.
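
A small sketch of the stricter check (the pattern set is illustrative, not exhaustive):

// Treat "429" as rate limiting only when it appears as an HTTP status
// token, not as a substring of an unrelated error code.
fn looks_like_http_429(error_str: &str) -> bool {
    let lower = error_str.to_lowercase();
    lower.contains("\"status\": 429")
        || lower.contains("\"status\":429")
        || (lower.contains("http")
            && lower.split_whitespace().any(|token| token == "429"))
}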

@coderabbitai bot (Contributor) left a comment


Actionable comments posted: 18

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (4)
sdk-libs/client/src/rpc/errors.rs (1)

88-104: Rate-limit detection is well-implemented. The logic correctly distinguishes HTTP 429 responses from error codes that coincidentally contain "429" and covers the standard rate-limit signals.

One consistency opportunity: Line 98 uses error_str for the "429" check while other checks use error_str_lower. Since "429" is numeric, both work identically, but using error_str_lower consistently would be marginally cleaner:

Optional consistency improvement
             // Only match "429" when it looks like an HTTP status (not part of error codes)
-            || (error_str.contains("429") && error_str_lower.contains("http"));
+            || (error_str_lower.contains("429") && error_str_lower.contains("http"));

Coverage note: The current patterns ("too many requests", "rate limit", "max usage", "429" with "http") handle standard cases well. If you encounter non-standard phrasing like "throttled" or "quota exceeded" from specific RPC providers in production, the detection can be expanded accordingly.

sdk-libs/program-test/src/compressible.rs (1)

393-393: Move MintActionMetaConfig to light-token-interface to avoid SDK-to-SDK dependency.

The import use light_token::compressed_token::mint_action::MintActionMetaConfig violates the SDK lib dependency guideline. SDK libs should depend only on program-libs, light-prover-client, and external crates—not on other SDK libs.

Per the architectural principle, data structures like MintActionMetaConfig must be isolated in light-token-interface (in program-libs/token-interface/), allowing test utilities and SDKs to import types without creating circular SDK dependencies. Move MintActionMetaConfig to light-token-interface and update this import to use light_token_interface::....

sdk-libs/client/src/local_test_validator.rs (1)

31-69: Refactor command construction to avoid shell injection vulnerability.

String interpolation into a sh -c command is unsafe: program paths with spaces or special shell characters will break the command or enable injection. The upgradeable_programs field expands this risk surface. Build the command with explicit Command::args() instead.

🔧 Suggested safer construction
-        let path = "cli/test_bin/run test-validator";
-        let mut path = format!("{}/{}", project_root.trim(), path);
+        let mut cmd = Command::new(format!(
+            "{}/cli/test_bin/run",
+            project_root.trim()
+        ));
+        cmd.arg("test-validator");
...
-        for sbf_program in config.sbf_programs.iter() {
-            path.push_str(&format!(
-                " --sbf-program {} {}",
-                sbf_program.0, sbf_program.1
-            ));
-        }
+        for sbf_program in config.sbf_programs.iter() {
+            cmd.args([
+                "--sbf-program",
+                &sbf_program.0,
+                &sbf_program.1,
+            ]);
+        }
...
-        for upgradeable_program in config.upgradeable_programs.iter() {
-            path.push_str(&format!(
-                " --upgradeable-program {} {} {}",
-                upgradeable_program.0, upgradeable_program.1, upgradeable_program.2
-            ));
-        }
+        for upgradeable_program in config.upgradeable_programs.iter() {
+            cmd.args([
+                "--upgradeable-program",
+                &upgradeable_program.0,
+                &upgradeable_program.1,
+                &upgradeable_program.2,
+            ]);
+        }
...
-        let child = Command::new("sh")
-            .arg("-c")
-            .arg(path)
+        let child = cmd
             .stdin(Stdio::null())
             .stdout(Stdio::null())
             .stderr(Stdio::null())
             .spawn()
             .expect("Failed to start server process");
forester/src/epoch_manager.rs (1)

1223-1239: Add compression tree when any compression tracker is configured, not just ctokens.

The compression tree (TreeType::Unknown) is only added to the epoch if compressible_tracker is present. If it's absent—even with pda_tracker or mint_tracker configured—the tree never enters the epoch's tree list. Since perform_active_work iterates only over trees in the epoch, the entire compression pipeline is skipped: dispatch_tree_processing is never called, so dispatch_compression never runs, and dispatch_pda_compression and dispatch_mint_compression are unreachable.

The fix is correct: gate the tree addition on any compression tracker. Both dispatch_pda_compression (line 1942) and dispatch_mint_compression (line 2062) already gracefully handle missing trackers, so once the tree is in the epoch, all three compression paths work as intended.

🛠️ Suggested fix
-        if self.compressible_tracker.is_some() && self.config.compressible_config.is_some() {
+        let any_compression =
+            self.compressible_tracker.is_some()
+                || self.pda_tracker.is_some()
+                || self.mint_tracker.is_some();
+        if any_compression && self.config.compressible_config.is_some() {
             ...
         }
🤖 Fix all issues with AI agents
In `@forester/src/compressible/bootstrap_helpers.rs`:
- Around line 287-292: The code currently calls build_v2_api_payload(program_id,
page_count as i32, cursor.as_deref(), filters.clone()) inside the pagination
loop, causing filters to be cloned on every iteration; move the invariant
payload construction out of the loop (or change build_v2_api_payload to accept a
borrowed slice like &[serde_json::Value] so you can pass filters.as_slice()) and
only update the cursor/page-specific fields per iteration (e.g., reuse the
prebuilt payload and set the cursor/page_count field before each request) to
avoid repeated allocations while keeping build_v2_api_payload and filters
semantics intact.
- Around line 25-56: send_rpc_request currently uses the provided
reqwest::Client without any timeout, risking hangs; update the call site or this
function to enforce a request timeout by either (a) ensuring the supplied
reqwest::Client is built with a timeout via
reqwest::Client::builder().timeout(Duration::from_secs(...)).build(), or (b)
applying a per-request timeout on the RequestBuilder in send_rpc_request (the
chain starting at client.post(rpc_url)) using
.timeout(Duration::from_secs(...))). Keep the existing error mapping
(map_err(...)) so timeout errors are converted to the same anyhow::Error flow
and ensure the timeout duration is configurable or documented.
- Around line 211-214: is_localhost currently only checks for "localhost" and
"127.0.0.1" and therefore misses IPv6 loopback addresses; update is_localhost to
extract the host part of rpc_url and parse it as an IpAddr (using
std::net::IpAddr::from_str or parsing a SocketAddr/Url host) and then return
ip.is_loopback(), falling back to string checks for "localhost" if parsing fails
so the function name is_localhost covers IPv4, IPv6 (e.g., ::1 and [::1]) and
hostname cases.
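
A sketch assuming the url crate is available for host extraction:

use std::net::IpAddr;

fn is_localhost(rpc_url: &str) -> bool {
    if let Ok(parsed) = url::Url::parse(rpc_url) {
        if let Some(host) = parsed.host_str() {
            // host_str keeps the brackets on IPv6 literals, e.g. "[::1]".
            let bare = host.trim_start_matches('[').trim_end_matches(']');
            if let Ok(ip) = bare.parse::<IpAddr>() {
                return ip.is_loopback();
            }
            return host.eq_ignore_ascii_case("localhost");
        }
    }
    // Fall back to the old string checks if parsing fails.
    rpc_url.contains("localhost") || rpc_url.contains("127.0.0.1")
}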

In `@forester/src/compressible/config.rs`:
- Around line 74-76: The error message for building the 8-byte discriminator
uses parts[1] (hex string) instead of the actual byte length; update the logic
around disc_bytes.try_into() in config.rs so you compute and capture
disc_bytes.len() (or store it in a variable like disc_len) before calling
try_into(), then use that disc_len in the map_err message (e.g., "Discriminator
must be exactly 8 bytes, got {disc_len}") when try_into() fails; reference the
discriminator variable, disc_bytes, parts[1], and the try_into() call to locate
and change the map_err content.

In `@forester/src/compressible/ctoken/bootstrap.rs`:
- Around line 49-80: The closure process_account currently deserializes Token
via Token::try_from_slice into ctoken and then calls tracker.update_from_account
which re-deserializes the same bytes; either add a new tracker method (e.g.,
tracker.update_from_token(pubkey, ctoken, lamports) or
tracker.update_from_account_with_token) that accepts the already-deserialized
Token to avoid double deserialization and use that here, or keep the existing
tracker method but update the debug message that currently says "without
compressible config" to accurately reflect the check performed
(is_token_account()). Ensure to update references to ctoken,
tracker.update_from_account, and is_token_account accordingly.

In `@forester/src/compressible/ctoken/compressor.rs`:
- Around line 88-94: The call to rpc.get_latest_active_state_trees() currently
discards its successful result which is confusing since
get_random_state_tree_info() depends on the internal cache; either (A) capture
and use the returned value (e.g., assign the result of
get_latest_active_state_trees().await to a variable and then select from it or
pass it along to avoid implicit cache coupling) or (B) if the RPC is intended
only to refresh an internal cache, make that explicit by changing the error
message to mention "refresh state tree cache" and add a short inline comment
above the call describing that get_random_state_tree_info() reads from the
cached data populated by get_latest_active_state_trees(); update references to
get_latest_active_state_trees, get_random_state_tree_info, and rpc in
compressor.rs accordingly.

In `@forester/src/compressible/ctoken/state.rs`:
- Around line 55-60: The method get_compressible_accounts currently calls
get_ready_to_compress(u64::MAX), which effectively returns all tracked token
accounts rather than those ready by slot; either rename the method to
get_all_token_accounts to reflect that it returns every tracked token account
regardless of readiness, or add a clear doc comment on get_compressible_accounts
explaining that it intentionally uses u64::MAX to bypass slot checks and returns
all token accounts; update any callers or tests that assume "compressible" means
slot-ready to match the new name or documented semantics.

In `@forester/src/compressible/mint/compressor.rs`:
- Around line 52-134: The compress_batch function can exceed Solana transaction
size limits by packing all CompressAndCloseMint instructions into one tx; modify
compress_batch to chunk mint_states into a conservative MAX_MINTS_PER_TX (e.g.
2) and for each chunk build instructions (reuse the existing instruction_futures
logic) and call rpc.create_and_send_transaction per chunk, removing tracker
entries for mints in each successfully sent chunk; keep the same error mapping
for each transaction send and return either the last signature or change the
function to return Vec<Signature> if callers need all signatures, referencing
compress_batch, MintAccountState, create_mint_action_instruction, and
create_and_send_transaction to locate the code to change.

In `@forester/src/compressible/mint/state.rs`:
- Around line 16-29: The calculate_compressible_slot function currently uses
Rent::default() which may not match on-chain rent; update the function to accept
or fetch the actual Rent (e.g., pass a Rent parameter or load the rent sysvar
and use that instead of Rent::default()) and use that rent to compute
rent_exemption passed into get_last_funded_epoch (reference:
calculate_compressible_slot, Rent::default, get_last_funded_epoch,
compression_info.rent_config). Also add a short comment clarifying the
epoch-to-slot conversion choice next to the return using SLOTS_PER_EPOCH (state
that it intentionally returns the first slot of last_funded_epoch for a
conservative compression decision, or change to (last_funded_epoch + 1) *
SLOTS_PER_EPOCH - 1 if you want the last slot of that epoch).

In `@forester/src/compressible/pda/bootstrap.rs`:
- Around line 84-95: The closure process_account currently captures tracker
(&PdaAccountTracker) and program_id (&Pubkey) by reference and is passed into
bootstrap_standard_api/bootstrap_v2_api from bootstrap_program; to make the
borrow lifetime explicit and avoid future refactor pitfalls, copy or clone the
Pubkey into the closure (e.g., assign let program_id = *program_id or
program_id.clone() before defining process_account) so the closure owns
program_id while leaving tracker as a reference, ensuring the FnMut passed into
bootstrap_* does not rely on a borrowed program_id.

In `@forester/src/compressible/pda/compressor.rs`:
- Around line 81-96: Replace the explicit call to
LightConfig::derive_pda(program_id, 0) with the helper
LightConfig::derive_default_pda() to make the intent that config_bump == 0
explicit; update the binding used later (config_pda) to come from
LightConfig::derive_default_pda(), leaving the subsequent RPC get_account and
LightConfig::try_from_slice logic unchanged so behavior is identical but
clearer.

In `@forester/src/compressible/pda/state.rs`:
- Around line 109-112: The debug log message inside the branch checking
compression_info.is_compressed() is misleading; update the message used in the
debug! call (where compression_info.is_compressed() and pubkey are available) to
clearly state the observed state and why we skip processing (e.g., "Account
{pubkey} is already compressed; skipping re-compression" or "Account {pubkey}
remains compressed; no action required") so the log accurately reflects that the
account is currently compressed and that the handler is intentionally skipping
it. Ensure you change only the debug! string and keep the conditional and return
logic intact.
- Line 31: The code uses Rent::default().minimum_balance(account_size) in
calculate_compressible_slot which hardcodes mainnet rent; replace this with the
light_compressible rent helper by calling
get_rent_exemption_lamports(account_size as u64) (or unwrap/handle its Result
appropriately in the forester context) so the function uses real on-chain or
test env rent values; update any variable name (rent_exemption) usage
accordingly and remove the Rent::default() call.
- Around line 26-42: The function calculate_compressible_slot currently returns
the start slot of the last funded epoch, making accounts compressible one epoch
too early; change the return to the start slot of the epoch after the last
funded one by using (last_funded_epoch + 1) * SLOTS_PER_EPOCH (ensure you handle
the u64 multiplication/cast correctly when using get_last_funded_epoch,
CompressionInfo, and SLOTS_PER_EPOCH so you don't change other logic).

In `@forester/src/config.rs`:
- Around line 365-395: The error returned when ws_rpc_url is None uses the wrong
field name ("enable_compressible"); update the ConfigError::InvalidArguments
constructed in that None branch to use field: "ws_rpc_url" (instead of
"enable_compressible") so the error accurately reflects the missing argument;
locate the None arm handling args.ws_rpc_url in the block that builds
compressible_config (references: args.compressible_pda_programs,
args.enable_compressible, ConfigError::InvalidArguments) and change the field
string to "ws_rpc_url".

In `@forester/src/lib.rs`:
- Around line 353-369: The mint bootstrap is currently uninterruptible because
you create a oneshot sender (mint_shutdown_tx) and immediately drop it;
implement a shared helper (e.g., create_bootstrap_shutdown) that takes a
&broadcast::Receiver<()> and returns a oneshot::Receiver<()> by resubscribing to
the broadcast, creating a oneshot channel, and spawning a small task that
forwards rx.recv().await into tx.send(()). Replace the current
mint_shutdown_tx/mint_shutdown_rx creation and immediate drop with a call to
create_bootstrap_shutdown(shutdown_rx) and pass that returned oneshot receiver
into compressible::mint::bootstrap_mint_accounts so the task can be cleanly
interrupted.
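
A sketch of the helper as described (names follow the comment):

use tokio::sync::{broadcast, oneshot};

fn create_bootstrap_shutdown(shutdown_rx: &broadcast::Receiver<()>) -> oneshot::Receiver<()> {
    let mut rx = shutdown_rx.resubscribe();
    let (tx, bootstrap_rx) = oneshot::channel();
    tokio::spawn(async move {
        // Forward the first broadcast signal (or channel closure) to the oneshot;
        // the sender stays alive inside this task until shutdown actually fires.
        let _ = rx.recv().await;
        let _ = tx.send(());
    });
    bootstrap_rx
}

The mint call site would then pass create_bootstrap_shutdown(&shutdown_rx) into bootstrap_mint_accounts instead of a receiver whose sender is dropped immediately.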

In `@forester/tests/test_compressible_pda.rs`:
- Around line 227-236: Add an explicit runtime check to ensure PAYER_PUBKEY
matches the public key derived from PAYER_KEYPAIR: in the test setup (e.g.,
before using the payer in tests) call Keypair::from_bytes(&PAYER_KEYPAIR) to
build a Keypair, derive its Pubkey, parse PAYER_PUBKEY with Pubkey::from_str,
and assert equality (panic with a clear message if they differ); alternatively
replace the string constant by deriving the pubkey once from the keypair and
using that derived value so PAYER_PUBKEY and PAYER_KEYPAIR cannot drift (refer
to PAYER_PUBKEY and PAYER_KEYPAIR to locate where to add the check).
- Around line 620-631: Replace the raw "non-empty bytes" check with a Borsh
deserialization and equality assertion: take the bytes at
compressed_after.data.as_ref().unwrap().data, Borsh-deserialize them into the
expected type (e.g., SinglePubkeyRecord) and use a single
assert_eq!(deserialized, expected_record) instead of the current assert; update
the test around get_compressed_account/compressed_after to construct the
expected_record and compare via Borsh to validate exact account state.

@sergeytimoshin force-pushed the sergey/forester-compressed-pda branch from cf402ce to 02d2215 on January 21, 2026 14:16
@Lightprotocol deleted a comment from coderabbitai bot on Jan 21, 2026
@coderabbitai bot (Contributor) left a comment


Actionable comments posted: 9

🤖 Fix all issues with AI agents
In `@forester/package.json`:
- Around line 7-9: Update the "test:compressible" npm script so TEST_MODE is
explicitly pinned to local (add TEST_MODE=local at the start of the env vars) to
ensure compressible tests run in local mode; modify the "test:compressible"
command string (the script named test:compressible in package.json) to include
TEST_MODE=local alongside the existing RUST_LOG and cargo build/test
invocations.

In `@forester/src/compressible/bootstrap_helpers.rs`:
- Around line 126-161: get_current_slot currently parses the RPC response
directly and ignores HTTP status and RPC-level errors, producing a generic
"extract slot" error; replace its ad-hoc request flow with the existing
send_rpc_request helper to get consistent timeout, status and RPC error
handling: build the same payload, call send_rpc_request(client, rpc_url,
payload) (or the appropriate send_rpc_request signature), unwrap the returned
serde_json::Value, then extract "result" as u64 and return it (return a clear
error if missing); remove the manual timeout/HTTP send/parse code and ensure
RPC_REQUEST_TIMEOUT usage remains centralized in send_rpc_request.

In `@forester/src/compressible/config.rs`:
- Around line 35-38: The docstring claiming CLI format uses
"program_id:discriminator_hex" is wrong because the parser (FromStr
implementation and serde deserialization) expects base58; update the docstring
in compressible/config.rs to state the CLI accepts
"program_id:discriminator_base58" (or "base58-encoded discriminator") and adjust
any inline examples to use base58; also review the FromStr/serde code paths
(FromStr implementation and serde derive/deserialize behavior) to ensure they
remain consistent with the docstring and update tests/examples if present.

In `@forester/src/compressible/mint/bootstrap.rs`:
- Line 37: The client is created with reqwest::Client::new() which uses
defaults; replace it with a configured builder to set sensible timeouts (e.g.,
connect_timeout and request timeout) so the RPC calls can't hang indefinitely:
use reqwest::Client::builder() with Duration values (via std::time::Duration)
for connect_timeout(...) and timeout(...), then .build() and assign to the
existing client variable; ensure error handling if build() returns an Err and
adjust any call sites expecting the old client accordingly.

In `@forester/src/compressible/mint/compressor.rs`:
- Around line 128-145: compress_batch currently returns Ok(signature) even when
rpc.confirm_transaction(signature) yields false; change compress_batch (the
function handling the batched CompressAndCloseMint flow) to return an Err when
confirmation is false instead of Ok(signature). Keep the existing behavior of
only removing mint_state.pubkey from self.tracker when confirmed, and when
confirmed still log the info and return Ok(signature); when not confirmed, log
the warning and return an anyhow::anyhow! error (including the signature for
context) so callers get an error like "Batch CompressAndCloseMint tx not
confirmed: {signature}" and can trigger retry logic.
- Around line 155-187: The function compress_batch_concurrent must guard against
max_concurrent == 0 to avoid futures::stream::StreamExt::buffer_unordered
panicking; add an early precondition in compress_batch_concurrent that checks if
max_concurrent == 0 and immediately returns a Vec of Err entries (one per
mint_state) containing an anyhow::anyhow!("max_concurrent must be > 0") error so
the caller is signalled instead of panicking; place this check before creating
compression_futures and reference the function name compress_batch_concurrent
and the buffer_unordered usage when locating where to add the guard.
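
The guard itself is small; a sketch, assuming the function returns Vec<anyhow::Result<Signature>>:

// Place before building compression_futures; buffer_unordered(0) would panic,
// so signal the caller with one error per entry instead.
if max_concurrent == 0 {
    return mint_states
        .iter()
        .map(|_| Err(anyhow::anyhow!("max_concurrent must be > 0")))
        .collect();
}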

In `@forester/tests/test_compressible_mint.rs`:
- Around line 57-69: The comment is inconsistent with the test value:
CreateMintParams is created with rent_payment = 0 but the comment says
"auto-decompresses (rent_payment > 0)"; update the comment to match the actual
test intent (e.g., state that rent_payment = 0 makes the mint immediately
compressible / does not auto-decompress) so the behavior described next to
CreateMintParams and the rent_payment field is accurate.
- Around line 142-148: Replace the current multi-assert check that deserializes
the mint and asserts mint.metadata.mint_decompressed with a single assert_eq!
against an expected Mint instance: deserialize the account via
Mint::deserialize(&mut &mint_data.data[..]) (as in the existing code), construct
an expected Mint value (copying or building all deterministic fields and
omitting or normalizing runtime-specific fields such as timestamps, addresses,
or rent-related fields), then compare the deserialized mint to that expected
value with one assert_eq!(mint, expected_mint); update the same pattern for the
other occurrence around lines 270-275 (where the same multi-assert logic is
used).

In `@program-libs/compressible/src/rent/mod.rs`:
- Around line 25-28: The non-Solana fallback currently returns a size-dependent
value via solana_rent::Rent::default().minimum_balance(_num_bytes as usize)
which conflicts with the documented fixed mock in SOLANA_RENT.md; update the
non-Solana branch in mod.rs to return the documented fixed mock of 2_282_880
lamports instead of calling Rent::default().minimum_balance, ensuring the
function that currently returns
Ok(solana_rent::Rent::default().minimum_balance(...)) (referenced by
minimum_balance and Rent::default) returns Ok(2_282_880) (with the correct
integer type) so tests expecting the fixed mock match the documentation.
♻️ Duplicate comments (6)
forester/src/compressible/bootstrap_helpers.rs (1)

237-239: IPv6 loopback still not detected.

Same as the earlier review: is_localhost doesn’t cover [::1].

forester/src/lib.rs (1)

241-355: Inconsistent shutdown_bootstrap usage across bootstraps.
shutdown_bootstrap is honored for ctokens but PDAs/Mints always run to completion (None). If the shutdown signal is meant to cancel all bootstraps, consider fanning it out (e.g., broadcast → per-task oneshot) or document why ctokens alone are interruptible.

forester/src/compressible/mint/compressor.rs (1)

52-120: Batching all mint compressions into one transaction risks size-limit failures.
Packing many CompressAndCloseMint instructions into a single transaction can exceed transaction size caps and fail the whole batch. Consider chunking or enforcing a conservative per-tx cap.

🔧 Conservative guard (example)
     pub async fn compress_batch(&self, mint_states: &[MintAccountState]) -> Result<Signature> {
         if mint_states.is_empty() {
             return Err(anyhow::anyhow!("No mints to compress"));
         }
+        const MAX_MINTS_PER_TX: usize = 2;
+        if mint_states.len() > MAX_MINTS_PER_TX {
+            return Err(anyhow::anyhow!(
+                "Too many mints ({}) for one transaction; max {}",
+                mint_states.len(),
+                MAX_MINTS_PER_TX
+            ));
+        }
forester/src/compressible/pda/state.rs (1)

28-44: Fix compressible slot boundary (off-by-one).
get_last_funded_epoch returns the last paid epoch; using last_funded_epoch * SLOTS_PER_EPOCH makes the account compressible at the start of that epoch instead of the next one. This makes compression eligibility one epoch early.

🐛 Suggested fix
-    Ok(last_funded_epoch * SLOTS_PER_EPOCH)
+    Ok(last_funded_epoch.saturating_add(1) * SLOTS_PER_EPOCH)

Based on learnings, compressibility starts in the epoch after the last funded one.

forester/src/compressible/pda/compressor.rs (1)

81-83: Use derive_default_pda() to make the bump=0 constraint explicit.
This avoids “magic 0” and documents intent. Please confirm the helper exists in your SDK version.

♻️ Suggested refactor
-        let (config_pda, _) = LightConfig::derive_pda(program_id, 0);
+        let (config_pda, _) = LightConfig::derive_default_pda(program_id);

Run this to confirm availability:

#!/bin/bash
rg -n "derive_default_pda" -g '*.rs'
forester/src/epoch_manager.rs (1)

1942-2127: Re-check eligibility during long PDA/Mint compression runs.
dispatch_pda_compression / dispatch_mint_compression only gate on consecutive_eligibility_end before starting. If the window closes mid-run, tasks can still send transactions because cancelled is never flipped. Either (a) set cancelled once the slot crosses the end, or (b) ensure PdaCompressor::compress_batch_concurrent / MintCompressor::compress_batch_concurrent enforce the window internally.

Please confirm with the compressor implementations:

#!/bin/bash
rg -n "compress_batch_concurrent" forester/src/compressible -g '*.rs' -C3
rg -n "eligibility|consecutive_eligibility_end|estimated_current_slot|cancelled" forester/src/compressible/pda -g '*.rs' -C3
rg -n "eligibility|consecutive_eligibility_end|estimated_current_slot|cancelled" forester/src/compressible/mint -g '*.rs' -C3

@ananas-block (Contributor) commented Jan 21, 2026

WebSocket Subscriber Has No Reconnection Logic

Location: forester/src/compressible/subscriber.rs:107-174

Issue: When the WebSocket subscription stream closes unexpectedly (line 157-161), the function returns an error and terminates permanently. There is no reconnection logic.

// Line 157-161
None => {
    error!("{} subscription stream closed unexpectedly", self.config.name);
    unsubscribe().await;
    return Err(anyhow!("{} subscription stream closed", self.config.name));
}

Production Impact:

  • Temporary network issues cause permanent subscription loss
  • Forester stops tracking new accounts after any WebSocket disconnect
  • Service degradation without manual intervention
  • RPC node maintenance or restarts will break the forester

Recommended Fix: Implement exponential backoff reconnection:

pub async fn run_with_reconnect(&self) -> Result<()> {
    let mut backoff = Duration::from_secs(1);
    const MAX_BACKOFF: Duration = Duration::from_secs(60);
    
    loop {
        match self.run().await {
            Ok(()) => break Ok(()),
            Err(e) if self.shutdown.load(Ordering::Relaxed) => break Err(e),
            Err(e) => {
                warn!("Subscription failed: {}, reconnecting in {:?}", e, backoff);
                tokio::time::sleep(backoff).await;
                backoff = std::cmp::min(backoff * 2, MAX_BACKOFF);
            }
        }
    }
}

Other findings from the review (non-blocking):

  • 4 false positives identified (DashMap race conditions, pagination loops, slot comparison - all correct)
  • CToken/Mint/PDA modules are production-ready with proper confirmation handling
  • Full report available at .claude/logic-review-compressible-20260121-report.md

The rest of the implementation is solid with good error handling patterns and proper use of concurrent data structures.

@ananas-block (Contributor) commented:

Startup Validation Recommendations

The forester could benefit from validating configs at startup to fail fast on misconfigurations.

For CTokens/CMints

Currently the forester only derives the CompressibleConfig PDA address but never fetches the account to check its state:

// ctoken/compressor.rs:80-81 - Only derives address, never reads account
let (compressible_config, _) = CompressibleConfig::derive_v1_config_pda(&registry_program_id);

Recommendation: Fetch CompressibleConfig from registry at startup and verify it's not paused:

// state: 0=Inactive (paused), 1=Active, 2=Deprecated
if config.state == 0 {
    return Err("CompressibleConfig is paused (Inactive)");
}
// Active (1) and Deprecated (2) are both OK for compression

For PDAs

Currently uses try_from_slice directly without calling the existing validate() method:

// pda/compressor.rs:95-96 - Deserializes but doesn't validate
let config = LightConfig::try_from_slice(&account.data)?;

Recommendation: Call config.validate() after deserialization, which checks:

  • version == 1
  • address_space.len() == 1 (currently fails at runtime on line 100-103 if empty)
  • config_bump == 0

This would catch misconfigured programs at startup rather than silently skipping them during compression.
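
A hypothetical startup check along these lines (exact Rpc and LightConfig signatures may differ):

let (config_pda, _) = LightConfig::derive_pda(program_id, 0);
let account = rpc
    .get_account(config_pda)
    .await?
    .ok_or_else(|| anyhow::anyhow!("LightConfig account not found for {program_id}"))?;
let config = LightConfig::try_from_slice(&account.data)?;
config
    .validate() // version == 1, address_space.len() == 1, config_bump == 0
    .map_err(|e| anyhow::anyhow!("invalid LightConfig for {program_id}: {e:?}"))?;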

@ananas-block (Contributor) commented:

Minor: Redundant Tracker Removal

compress_batch_concurrent removes accounts from the tracker before returning, then epoch_manager removes them again:

PDA:

  • pda/compressor.rs:172-174 - first removal
  • epoch_manager.rs:2043 - second removal (no-op)

Mint:

  • mint/compressor.rs:189-192 - first removal
  • epoch_manager.rs:2115 - second removal (no-op)

Not dangerous (DashMap is idempotent), but redundant. Consider removing the epoch_manager removals since compressor already handles cleanup.

@ananas-block (Contributor) left a comment


apart from the comments lgtm

@sergeytimoshin force-pushed the sergey/forester-compressed-pda branch from f19f802 to a31d895 on January 22, 2026 09:28
@coderabbitai bot (Contributor) left a comment


Actionable comments posted: 8

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
program-libs/compressible/Cargo.toml (1)

9-11: Gate solana-rent to the solana feature like other Solana-only dependencies.

Currently solana-rent is unconditionally included (line 37), but it's only used as a fallback in non-on-chain test environments (when neither pinocchio nor solana features are active). This duplicates Solana rent logic unnecessarily and violates the feature-gating pattern already applied to solana-sysvar and solana-msg. Make it optional and add it to the solana feature to keep dependency trees clean across pinocchio and test-only builds.

Proposed Cargo.toml fix
 [features]
 default = ["solana"]
-solana = ["dep:solana-program-error", "light-compressed-account/solana", "light-account-checks/solana", "solana-sysvar", "solana-msg"]
+solana = ["dep:solana-program-error", "light-compressed-account/solana", "light-account-checks/solana", "solana-sysvar", "solana-msg", "dep:solana-rent"]

 [dependencies]
-solana-rent = { workspace = true }
+solana-rent = { workspace = true, optional = true }
forester/tests/test_compressible_ctoken.rs (2)

251-270: Avoid flaky assertions by waiting for subscription-driven tracker updates.

The tracker is updated asynchronously via WebSocket; asserting immediately after account creation/airdrop can race and intermittently fail. Prefer a short poll loop before asserting size/lamports.

✅ Suggested stabilization
-    // Verify tracker has the account
-    assert_eq!(tracker.len(), 1, "Tracker should have 1 account");
-    let accounts = tracker.get_all_token_accounts();
+    // Wait for subscriber to pick up the account
+    let start = tokio::time::Instant::now();
+    let timeout = Duration::from_secs(10);
+    while start.elapsed() < timeout && tracker.len() < 1 {
+        sleep(Duration::from_millis(200)).await;
+    }
+    assert_eq!(tracker.len(), 1, "Tracker should have 1 account");
+    let accounts = tracker.get_all_token_accounts();
@@
-    let accounts = tracker.get_all_token_accounts();
-    assert_eq!(accounts[0].lamports, lamports + 10_000_000);
+    let start = tokio::time::Instant::now();
+    while start.elapsed() < timeout {
+        let accounts = tracker.get_all_token_accounts();
+        if accounts[0].lamports == lamports + 10_000_000 {
+            break;
+        }
+        sleep(Duration::from_millis(200)).await;
+    }
+    let accounts = tracker.get_all_token_accounts();
+    assert_eq!(accounts[0].lamports, lamports + 10_000_000);

301-325: Don’t abort the compression task—await it to avoid canceling work mid-flight.

abort() can cancel the compression before it completes, making the test flaky. Await the task and surface errors instead.

✅ Safer task handling
-    let compressor_handle = tokio::spawn(async move {
-        compressor
-            .compress_batch(&ready_accounts, registered_forester_pda)
-            .await
-    });
-    sleep(Duration::from_millis(2000)).await;
+    let compressor_handle = tokio::spawn(async move {
+        compressor
+            .compress_batch(&ready_accounts, registered_forester_pda)
+            .await
+    });
@@
-    compressor_handle.abort();
+    let _sig = compressor_handle
+        .await
+        .expect("Compression task panicked")
+        .expect("Compression failed");
🤖 Fix all issues with AI agents
In `@forester/.gitignore`:
- Around line 8-9: The repo ignores spawn.sh and spawn_devnet.sh but lacks
example templates or docs; either add spawn.sh.example and
spawn_devnet.sh.example (matching the .env.example pattern) with required
variables and usage comments, or update README and CLAUDE.md to document the
purpose, required configuration, and whether these scripts are user-created or
generated; ensure the .gitignore entry stays in sync with the chosen approach
and reference the exact filenames spawn.sh and spawn_devnet.sh in the
docs/templates.

In `@forester/src/compressible/ctoken/bootstrap.rs`:
- Around line 43-46: The memcmp filter in the filters Vec for decompressed
CToken accounts omits an explicit "encoding", causing inconsistency with the
mint bootstrap; update the filters construction that uses ACCOUNT_TYPE_OFFSET
and CTOKEN_ACCOUNT_TYPE_FILTER (the filters variable in bootstrap.rs) to include
"encoding": "base58" alongside the "memcmp" object so the memcmp filter
explicitly specifies base58 encoding for clarity and consistency; a sketch follows this prompt list.

In `@forester/src/compressible/pda/bootstrap.rs`:
- Line 42: The HTTP client is created with reqwest::Client::new() without
timeout configuration; update the creation of the client variable to use
reqwest::Client::builder() and set a sensible timeout (e.g.,
Duration::from_secs(30) or the same value used by other bootstrap modules) via
.timeout(...).then .build() and handle the Result (or unwrap/expect consistently
with surrounding code); also add the necessary use of std::time::Duration and
ensure the symbol client is replaced where referenced.
- Around line 99-106: The memcmp filter creation uses discriminator_base58 and
filters built from program_config.discriminator but omits the encoding; update
the JSON filter inside the filters variable (where discriminator_base58 is used)
to include "encoding": "base58" alongside "offset": 0 and "bytes":
discriminator_base58 so the filter is explicit about base58 encoding for
Anchor-style accounts.

In `@forester/src/compressible/subscriber.rs`:
- Around line 153-179: The reconnect loop is dropping the connection error by
matching Err(_) so the root cause is lost; modify the match on
self.run_connection().await to capture the error (e.g., Err(e)) and include it
in the log—update the branch that handles Ok(ConnectionResult::StreamClosed) |
Err(e) to log the actual error (using warn! or error!) along with subscriber
name, attempt and current_delay before performing the backoff and shutdown_rx
select; keep existing behavior for ConnectionResult::Shutdown and continue using
self.run_connection(), ConnectionResult::StreamClosed, attempt, current_delay,
and reconnect_config identifiers.

In `@forester/src/compressible/validation.rs`:
- Around line 46-48: The code currently calls
OnChainCompressibleConfig::deserialize which bypasses the account discriminator
check; replace this with the Anchor AccountDeserialize::try_deserialize flow
(i.e., call OnChainCompressibleConfig::try_deserialize or the type's
AccountDeserialize::try_deserialize method) so the stored discriminator for
CompressibleConfig is validated before parsing the account data and return the
mapped anyhow error on failure.

In `@forester/src/lib.rs`:
- Line 144: The new signature for run_pipeline now requires one generic type R:
Rpc + Indexer, so update legacy test invocations (e.g.,
run_pipeline::<LightClient, TestIndexer> and run_pipeline::<LightClient,
PhotonIndexer> in the legacy e2e and batched_* tests) to supply a single
concrete type that implements both Rpc and Indexer (for example use
run_pipeline::<LightClient>), or alternatively implement Rpc for
TestIndexer/PhotonIndexer so they satisfy the combined bound; ensure the updated
call sites use the single-type form run_pipeline::<T> where T implements both
Rpc and Indexer (or refactor tests to construct/compose a wrapper type that
implements both traits) so compilation succeeds.

In `@sdk-libs/client/src/rpc/errors.rs`:
- Around line 83-97: The code currently only treats HTTP 429 from
ErrorKind::Reqwest as rate-limited; add a branch in the same decision block
(after the transaction check and alongside the Reqwest check in errors.rs inside
the function returning RpcError) that inspects the RPC response error variant
(e.g., RpcResponseError) on e.kind and returns RpcError::RateLimited when the
RPC error code equals -32005 or the error message contains "too many requests"
(case-insensitive), so these JSON-RPC style rate-limit responses are classified
as RateLimited instead of ClientError.
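
Two of the prompts above lend themselves to short sketches. First, the explicit base58 encoding for the memcmp filters; the offset and bytes values below are placeholders, not the real ACCOUNT_TYPE_OFFSET or discriminator:

use serde_json::json;

fn main() {
    let offset = 165u64; // placeholder for ACCOUNT_TYPE_OFFSET
    let bytes_b58 = "2"; // placeholder for the base58 filter bytes

    // Spelling out "encoding" keeps the ctoken, mint, and pda filters consistent.
    let filter = json!({
        "memcmp": {
            "offset": offset,
            "bytes": bytes_b58,
            "encoding": "base58"
        }
    });
    println!("{filter}");
}

Second, the rate-limit classification for errors.rs; the surrounding error plumbing differs, but the decision logic reduces to:

// Illustrative only: real code would branch on the RPC response error kind.
fn is_rate_limited(code: i64, message: &str) -> bool {
    code == -32005 || message.to_lowercase().contains("too many requests")
}
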
♻️ Duplicate comments (14)
program-libs/compressible/src/rent/mod.rs (1)

25-28: Non-Solana fallback still uses dynamic calculation; documentation specifies fixed mock.

The previous review flagged this exact discrepancy: your SOLANA_RENT.md documentation states that non-Solana test environments should return a fixed 2,282,880 lamports, but this implementation computes a size-dependent value via Rent::default().minimum_balance(_num_bytes).

Based on retrieved learnings, the documented behavior is: "Provide mock Rent sysvar returns (2,282,880 lamports) in non-Solana runtime test environments."

The maintainer agreed this should be addressed. To align with documentation:

🔧 Proposed fix to return the documented fixed value
 #[cfg(not(all(target_os = "solana", any(feature = "pinocchio", feature = "solana"))))]
 {
-    Ok(solana_rent::Rent::default().minimum_balance(_num_bytes as usize))
+    // Fixed mock value per SOLANA_RENT.md for non-Solana test environments
+    Ok(2_282_880)
 }

Alternatively, if the dynamic approach is now preferred, please update SOLANA_RENT.md to reflect this behavioral change so tests relying on the documented fixed value are updated accordingly.

forester/package.json (1)

8-8: Pin TEST_MODE=local in the compressible test script.
Helps avoid accidental devnet/remote dependencies during CI or local runs.

♻️ Proposed change
-    "test:compressible": "cargo build-sbf -- -p csdk-anchor-full-derived-test && RUST_LOG=forester=debug,light_client=debug cargo test --package forester --test test_compressible_pda --test test_compressible_mint --test test_compressible_ctoken -- --nocapture",
+    "test:compressible": "cargo build-sbf -- -p csdk-anchor-full-derived-test && TEST_MODE=local RUST_LOG=forester=debug,light_client=debug cargo test --package forester --test test_compressible_pda --test test_compressible_mint --test test_compressible_ctoken -- --nocapture",
Based on learnings, keep TEST_MODE explicit for forester tests.
forester/src/compressible/bootstrap_helpers.rs (2)

126-161: Use send_rpc_request for slot fetch to keep error handling consistent.
The current implementation skips HTTP status and RPC error checks and duplicates timeout logic.

♻️ Suggested refactor
 pub async fn get_current_slot(client: &reqwest::Client, rpc_url: &str) -> Result<u64> {
     let payload = json!({
         "jsonrpc": "2.0",
         "id": 0,
         "method": "getSlot",
         "params": [{"commitment": "confirmed"}]
     });
-
-    let result = timeout(RPC_REQUEST_TIMEOUT, async {
-        let response = client
-            .post(rpc_url)
-            .json(&payload)
-            .send()
-            .await
-            .map_err(|e| anyhow::anyhow!("Failed to get slot: {:?}", e))?;
-
-        let json: serde_json::Value = response
-            .json()
-            .await
-            .map_err(|e| anyhow::anyhow!("Failed to parse slot response: {:?}", e))?;
-
-        json.get("result")
-            .and_then(|v| v.as_u64())
-            .ok_or_else(|| anyhow::anyhow!("Failed to extract slot from response"))
-    })
-    .await;
-
-    match result {
-        Ok(inner) => inner,
-        Err(_) => Err(anyhow::anyhow!(
-            "getSlot request timed out after {}s",
-            RPC_REQUEST_TIMEOUT.as_secs()
-        )),
-    }
+    let result = send_rpc_request(client, rpc_url, &payload).await?;
+    result
+        .as_u64()
+        .ok_or_else(|| anyhow::anyhow!("Failed to extract slot from response"))
 }

237-240: is_localhost misses IPv6 loopback.
Requests against [::1] won’t be detected as localhost.

♻️ Minimal fix
 pub fn is_localhost(rpc_url: &str) -> bool {
-    rpc_url.contains("localhost") || rpc_url.contains("127.0.0.1")
+    rpc_url.contains("localhost")
+        || rpc_url.contains("127.0.0.1")
+        || rpc_url.contains("[::1]")
 }
forester/src/compressible/mint/bootstrap.rs (1)

37-37: Consider configuring the HTTP client with timeouts.

The reqwest::Client::new() uses default settings. For production resilience, configure connection and request timeouts to prevent indefinite hangs if the RPC endpoint becomes unresponsive.
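
A minimal sketch of a timeout-configured client; the 30-second value is an assumption, so reuse whatever the other bootstrap modules settle on:

use std::time::Duration;

fn build_client() -> Result<reqwest::Client, reqwest::Error> {
    reqwest::Client::builder()
        // Bound connection setup and the whole request separately.
        .connect_timeout(Duration::from_secs(10))
        .timeout(Duration::from_secs(30))
        .build()
}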

forester/tests/test_compressible_mint.rs (1)

57-69: Comment mismatches rent_payment behavior.

The comment says "CreateMint auto-decompresses (rent_payment > 0)" but the test sets rent_payment = 0. This was flagged in a previous review.

🔧 Suggested comment fix
-    // Build params - CreateMint auto-decompresses (rent_payment > 0)
+    // Build params - rent_payment=0 keeps the mint immediately compressible for testing
forester/src/lib.rs (1)

299-315: Reconsider making PDA bootstrap interruptible for consistency and graceful shutdown.

Passing None for shutdown_rx prevents PDA bootstrap interruption, yet the CToken bootstrap receives Some(shutdown_bootstrap_rx) and honors shutdown signals (line 250). The infrastructure exists—all three bootstrap functions support the same pattern—but PDA and Mint are explicitly disabled with "runs to completion" comments. If graceful shutdown matters for CToken, it likely matters for PDA and Mint too. Consider passing the shutdown signal through instead of None.
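
A sketch of the interruptible pattern, mirroring the CToken path (the oneshot wiring is illustrative, not the PR's exact channel types):

use tokio::sync::oneshot;

async fn run_interruptible(
    shutdown_rx: oneshot::Receiver<()>,
    bootstrap: impl std::future::Future<Output = anyhow::Result<()>>,
) -> anyhow::Result<()> {
    tokio::select! {
        res = bootstrap => res,
        _ = shutdown_rx => {
            tracing::info!("PDA bootstrap interrupted by shutdown signal");
            Ok(())
        }
    }
}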

forester/src/compressible/config.rs (1)

35-38: Docstring still says hex, but parser expects base58.

This mismatch is user-facing and can mislead CLI usage. Update the comment to align with the actual parsing format.

📌 Doc fix
-/// Can be specified via CLI (using `program_id:discriminator_hex` format)
+/// Can be specified via CLI (using `program_id:discriminator_base58` format)
forester/src/compressible/mint/compressor.rs (3)

128-147: Return an error when the batch tx isn’t confirmed.

Right now this returns Ok(signature) even when confirm_transaction is false, which diverges from compress() behavior and can suppress retries.

✅ Align confirmation handling
-        if confirmed {
-            // Only remove from tracker after confirmed
-            for mint_state in mint_states {
-                self.tracker.remove(&mint_state.pubkey);
-            }
-            info!("Batched CompressAndCloseMint tx confirmed: {}", signature);
-        } else {
-            tracing::warn!(
-                "CompressAndCloseMint tx not confirmed: {} - accounts kept in tracker for retry",
-                signature
-            );
-        }
-
-        Ok(signature)
+        if confirmed {
+            for mint_state in mint_states {
+                self.tracker.remove(&mint_state.pubkey);
+            }
+            info!("Batched CompressAndCloseMint tx confirmed: {}", signature);
+            Ok(signature)
+        } else {
+            tracing::warn!(
+                "CompressAndCloseMint tx not confirmed: {} - accounts kept in tracker for retry",
+                signature
+            );
+            Err(anyhow::anyhow!(
+                "Batched CompressAndCloseMint tx not confirmed: {}",
+                signature
+            ))
+        }

52-121: Split batched transactions to avoid Solana tx size limits.

Packing many CompressAndCloseMint instructions into one tx can exceed Solana’s size cap, causing runtime failure. Add chunking or a max batch size.

✅ Possible batching guard
+        const MAX_MINTS_PER_TX: usize = 2; // adjust after measuring instruction size
+        if mint_states.len() > MAX_MINTS_PER_TX {
+            return Err(anyhow::anyhow!(
+                "Too many mints ({}) for single transaction, max is {}",
+                mint_states.len(),
+                MAX_MINTS_PER_TX
+            ));
+        }

155-186: Guard max_concurrent == 0 to avoid buffer_unordered panic.

buffer_unordered(0) panics in futures; add a precondition and surface an error.

✅ Precondition guard
     pub async fn compress_batch_concurrent(
         &self,
         mint_states: &[MintAccountState],
         max_concurrent: usize,
         cancelled: Arc<AtomicBool>,
     ) -> Vec<std::result::Result<(Signature, MintAccountState), (MintAccountState, anyhow::Error)>>
     {
         if mint_states.is_empty() {
             return Vec::new();
         }
+        if max_concurrent == 0 {
+            return mint_states
+                .iter()
+                .cloned()
+                .map(|state| Err((state, anyhow::anyhow!("max_concurrent must be > 0"))))
+                .collect();
+        }
forester/src/compressible/ctoken/state.rs (1)

14-41: Fix compressible_slot to start after the last funded epoch.
last_funded_epoch * SLOTS_PER_EPOCH yields the start of the last funded epoch, which makes accounts compressible one epoch too early. The slot should be the start of the next epoch.

🧩 Proposed fix
-    Ok(last_funded_epoch * SLOTS_PER_EPOCH)
+    Ok((last_funded_epoch + 1) * SLOTS_PER_EPOCH)
#!/bin/bash
# Locate the rent semantics to confirm expected epoch boundary.
rg -n "fn get_last_funded_epoch" -g'*.rs'
rg -n "compressible_slot|last_funded_epoch|SLOTS_PER_EPOCH" -g'*.rs'
rg -n "prepaid|compressible.*epoch|last_funded_epoch" -g'*.rs'
forester/src/compressible/pda/state.rs (1)

28-45: Fix compressible_slot to start after the last funded epoch.
Same off-by-one risk here: use the start slot of the epoch after last_funded_epoch.

🧩 Proposed fix
-    Ok(last_funded_epoch * SLOTS_PER_EPOCH)
+    Ok((last_funded_epoch + 1) * SLOTS_PER_EPOCH)
forester/src/epoch_manager.rs (1)

1968-2106: PDA/Mint compression can still overrun the eligibility window.
cancelled is only checked before scheduling work; if consecutive_eligibility_end passes mid-run, in-flight tasks may still send transactions while ineligible. Consider a background watcher that flips cancelled once the slot is reached (or chunk work and re-check between chunks).

🔧 Sketch: cancel when eligibility window closes
+        let cancel_on_window_end = {
+            let cancelled = cancelled.clone();
+            let slot_tracker = self.slot_tracker.clone();
+            tokio::spawn(async move {
+                loop {
+                    if slot_tracker.estimated_current_slot() >= consecutive_eligibility_end {
+                        cancelled.store(true, Ordering::Relaxed);
+                        break;
+                    }
+                    tokio::time::sleep(Duration::from_millis(200)).await;
+                }
+            })
+        };

         // Process each configured PDA program
         for program_config in &config.pda_programs {
             if cancelled.load(Ordering::Relaxed) {
                 break;
             }
             ...
         }
+        cancel_on_window_end.abort();

Comment on lines +8 to +9
spawn.sh
spawn_devnet.sh

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Scripts executed:

# First, explore the forester directory structure
ls -la forester/ | head -20

# Look for template/example versions of spawn scripts
find forester -type f \( -name "*spawn*" -o -name "*.example" -o -name "*.template" \) 2>/dev/null

# Check for README or documentation files
find forester -type f \( -name "README*" -o -name "*.md" \) | head -20

# Check for CLAUDE.md in root and forester directory
find . -maxdepth 2 -name "CLAUDE.md" -o -name "claude.md"

# Search for mentions of spawn.sh or spawn_devnet.sh in documentation
rg -l "spawn\.(sh|devnet)" forester/ --type md 2>/dev/null | head -10

# Check the gitignore file to see full context
cat -n forester/.gitignore

# Check README for spawn scripts
cat forester/README.md

# Check root CLAUDE.md for guidance
cat CLAUDE.md

# Also check if there are any comments or docs about these scripts elsewhere
rg -n "spawn" forester/ --type md --type sh --type toml -C2


Document spawn scripts or align ignore patterns with .env precedent.

The repository uses .env.example as a template for ignored developer config files, but spawn.sh and spawn_devnet.sh lack equivalent templates or documentation. Either add spawn.sh.example and spawn_devnet.sh.example to establish what developers need to configure locally, or document these scripts in the README to clarify their purpose and whether they're user-created (template needed) or auto-generated (no template needed). Currently they're absent from both the README and CLAUDE.md documentation.

🤖 Prompt for AI Agents
In `@forester/.gitignore` around lines 8 - 9, The repo ignores spawn.sh and
spawn_devnet.sh but lacks example templates or docs; either add spawn.sh.example
and spawn_devnet.sh.example (matching the .env.example pattern) with required
variables and usage comments, or update README and CLAUDE.md to document the
purpose, required configuration, and whether these scripts are user-created or
generated; ensure the .gitignore entry stays in sync with the chosen approach
and reference the exact filenames spawn.sh and spawn_devnet.sh in the
docs/templates.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
forester/tests/legacy/batched_address_test.rs (1)

86-93: Missing import: register_test_forester is used but not imported.

At line 86, register_test_forester is called, but the import at line 4 only includes update_test_forester:

use forester_utils::{registry::update_test_forester, rpc_pool::SolanaRpcPoolBuilder};

This will cause a compilation error.

Proposed fix
-use forester_utils::{registry::update_test_forester, rpc_pool::SolanaRpcPoolBuilder};
+use forester_utils::{registry::{register_test_forester, update_test_forester}, rpc_pool::SolanaRpcPoolBuilder};
🤖 Fix all issues with AI agents
In `@forester/src/compressible/ctoken/bootstrap.rs`:
- Around line 52-55: The code currently panics when building the reqwest client;
update the client creation in bootstrap.rs to propagate the build error instead
of calling expect by using the try operator (e.g., let client =
reqwest::Client::builder()... .build()?;), matching the error-propagation
pattern used in tree_data_sync.rs so the enclosing function's Result<()> can
return the error; ensure any necessary error trait/imports are available for the
function's error type.

In `@forester/src/compressible/mint/bootstrap.rs`:
- Around line 37-40: Replace the panic-causing .expect("Failed to build HTTP
client") on reqwest::Client::builder().timeout(...).build().expect(...) with the
? operator to propagate errors (i.e., let client =
reqwest::Client::builder().timeout(...).build()?;), ensuring the surrounding
function return type remains Result<()> so the build error is returned rather
than panicking; apply the same change to the analogous occurrences in the pda
and ctoken bootstrap modules.
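
A sketch of the ?-propagation these prompts describe, assuming the enclosing bootstrap functions return anyhow::Result<()> (reqwest::Error converts into anyhow::Error automatically):

use std::time::Duration;

async fn bootstrap_sketch() -> anyhow::Result<()> {
    let client = reqwest::Client::builder()
        .timeout(Duration::from_secs(30))
        .build()?; // propagate instead of .expect(...)
    let _ = client; // placeholder for the actual bootstrap work
    Ok(())
}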

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
forester/src/compressible/config.rs (1)

148-176: Update the docstring: pda_programs currently only accepts CLI values, not config file values.

The docstring claims "CLI values are merged with config file values," but the config loader only processes args.compressible_pda_programs from CLI. The with_pda_programs() method overwrites rather than merges. There is no config file loading for this field. Either implement the merge logic to read from config + CLI, or correct the docstring to reflect that only CLI --pda-program flags are supported.
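
If the merge route is chosen, a sketch of CLI-over-file precedence; the struct here is simplified, and PdaProgramConfig in the PR carries different fields:

#[derive(Clone)]
struct PdaProgramConfig {
    program_id: String,
    discriminator: Vec<u8>,
}

fn merge_pda_programs(
    mut from_file: Vec<PdaProgramConfig>,
    from_cli: Vec<PdaProgramConfig>,
) -> Vec<PdaProgramConfig> {
    for p in from_cli {
        // A CLI entry replaces any config-file entry for the same program.
        from_file.retain(|f| f.program_id != p.program_id);
        from_file.push(p);
    }
    from_file
}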

🤖 Fix all issues with AI agents
In `@forester/src/lib.rs`:
- Around line 308-330: The PDA bootstrap tokio::spawn block (using
retry_with_backoff, RetryConfig, and calling
compressible::pda::bootstrap_pda_accounts with pda_tracker_clone and rpc_url) is
non-interruptible; modify it to accept an optional shutdown receiver (similar to
the CToken bootstrap pattern), pass that receiver into the spawned task and into
the retry closure, and inside the async worker use tokio::select! (or an
explicit check) to abort retries and the
compressible::pda::bootstrap_pda_accounts call early when the shutdown signal is
received so graceful shutdown can preempt long bootstrap work or retry backoff.

In `@forester/src/utils.rs`:
- Around line 92-136: The attempt counter in retry_with_backoff is declared as
u32 (attempt) but other helpers (get_protocol_config_with_retry,
get_slot_with_retry) use u64; change the attempt variable in retry_with_backoff
to u64 (e.g. let mut attempt = 0u64), update any literal suffixes or comparisons
accordingly, and ensure comparisons against config.max_attempts (and the
formatting in info!/warn! macros) use the same u64 type so types align across
retry_with_backoff, get_protocol_config_with_retry, and get_slot_with_retry.
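
A compact sketch of the aligned counter type; RetryConfig's real shape in utils.rs may differ:

use std::time::Duration;

struct RetryConfig {
    max_attempts: u64, // u64 to match get_slot_with_retry and friends
    base_delay: Duration,
}

async fn retry_with_backoff_sketch<F, Fut, T, E: std::fmt::Debug>(
    config: &RetryConfig,
    mut op: F,
) -> Result<T, E>
where
    F: FnMut() -> Fut,
    Fut: std::future::Future<Output = Result<T, E>>,
{
    let mut attempt: u64 = 0;
    loop {
        match op().await {
            Ok(v) => return Ok(v),
            Err(e) if attempt + 1 < config.max_attempts => {
                attempt += 1;
                // Exponential backoff, capped to avoid overflow.
                let factor = 2u32.saturating_pow(attempt.min(6) as u32);
                tokio::time::sleep(config.base_delay * factor).await;
                tracing::warn!("attempt {} failed: {:?}", attempt, e);
            }
            Err(e) => return Err(e),
        }
    }
}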

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

🤖 Fix all issues with AI agents
In `@forester/src/lib.rs`:
- Around line 246-272: The CToken bootstrap task currently runs
retry_with_backoff to completion and ignores shutdown signals; update the
tokio::spawn block that captures shutdown_bootstrap_rx/tracker_clone/rpc_url so
the call to retry_with_backoff (which calls
compressible::bootstrap_ctoken_accounts) is run inside a tokio::select!
alongside awaiting the shutdown receiver (shutdown_bootstrap_rx.recv() or its
equivalent), so that if a shutdown signal arrives you abort/cancel the bootstrap
future and skip further retries, log the early exit, and still drop/shutdown the
receiver; ensure you mirror the pattern used by the PDA and Mint bootstrap tasks
(using select! to prefer shutdown) and keep the retry logic inside the selected
branch.

In `@forester/tests/test_compressible_mint.rs`:
- Around line 469-474: The spawned task running account_subscriber.run() can
panic inside the task and hide failures until the join is awaited late; modify
the spawn so failures surface immediately by sending the run() Result back to
the test (e.g. create a oneshot channel, spawn the task to run
account_subscriber.run().await and tx.send(result) and keep
account_subscriber_handle to await/try_recv early), or alternately keep the
JoinHandle and periodically poll/await it (or call abort() on shutdown and check
the handle) so that any panic/error from account_subscriber.run() is observed
during the main test rather than only at the final join; adjust code around
account_subscriber_handle and account_subscriber.run() accordingly; a sketch follows this prompt list.
- Around line 383-391: The test currently treats an account with zero lamports
as closed by checking `mint_after.is_none() || mint_after.as_ref().map(|a|
a.lamports) == Some(0)`—update the loop to only consider the account closed when
`rpc.get_account(mint_pda)` returns `None` (i.e., `mint_after.is_none()`),
removing the `lamports == 0` branch; adjust any related variable logic around
`mint_after`, `mint_pda`, and `account_closed` in the test to rely solely on the
`is_none()` check to more precisely detect actual account closure.
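
For the subscriber prompt above, a sketch of surfacing run() failures through a oneshot channel so the test can observe them early (the subscriber future is abstracted away):

use tokio::sync::oneshot;

fn spawn_with_result<F>(
    run: F,
) -> (tokio::task::JoinHandle<()>, oneshot::Receiver<anyhow::Result<()>>)
where
    F: std::future::Future<Output = anyhow::Result<()>> + Send + 'static,
{
    let (tx, rx) = oneshot::channel();
    let handle = tokio::spawn(async move {
        // Forward the result; the test can try_recv() mid-run or await at the end.
        let _ = tx.send(run.await);
    });
    (handle, rx)
}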

@sergeytimoshin sergeytimoshin merged commit e049bbe into main Jan 22, 2026
33 checks passed
@sergeytimoshin sergeytimoshin deleted the sergey/forester-compressed-pda branch January 22, 2026 17:19