Add F3 light client functionality #27
Conversation
Walkthrough

Adds three new workspace crates (merkle, rpc, lightclient); implements BDN-style BLS aggregation scaffold and a thread-safe LRU-backed BLS verifier; modularizes and extends GPBFT (payload, powertable, justification, ECChain merkle key); updates certs to use verifier; adds manifests, examples, tests, and narrows wasm checks.

Changes

Sequence Diagram(s)

```mermaid
sequenceDiagram
autonumber
actor User
participant LC as LightClient
participant RPC as RPCClient
participant Node as Filecoin_RPC
participant CV as certs::validate_finality_certificates
participant GP as gpbft (payload/chain/powertable)
participant MK as merkle::tree
participant BL as blssig::BLSVerifier
User->>LC: new(endpoint, network)
User->>LC: initialize(instance)
LC->>RPC: get_power_table(instance)
RPC->>Node: Filecoin.F3GetPowerTableByInstance
Node-->>RPC: PowerTable (RPC types)
RPC-->>LC: Vec<PowerEntry>
LC->>LC: convert_power_entry -> PowerEntries
User->>LC: get_certificate(instance)
LC->>RPC: get_certificate(instance)
RPC->>Node: Filecoin.F3GetCertificate
Node-->>RPC: FinalityCertificate
RPC-->>LC: FinalityCertificate
LC->>LC: convert_certificate -> certs::FinalityCertificate
User->>LC: validate_certificates(state, certs)
LC->>CV: validate_finality_certificates(&verifier, ...)
CV->>GP: Payload::serialize_for_signing()
GP->>MK: ECChain.key() -> tree(values)
MK-->>GP: chain key (root)
CV->>BL: verify_aggregate(payload_bytes, agg_sig, signers)
BL->>BL: aggregate keys/sigs & verify (BDN scaffold + cache)
BL-->>CV: Ok / Err
CV-->>LC: updated state or error
    LC-->>User: result
```
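For orientation, a minimal driver mirroring the sequence above — a sketch only, assuming the method names from the diagram and review comments (`LightClient::new`, `initialize`, `get_certificate`, `validate_certificates`) and that `initialize` returns the light-client state used for validation; the endpoint URL, instance numbers, and exact signatures are illustrative, not values from this PR:

```rust
use filecoin_f3_lightclient::LightClient;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Endpoint and network name are placeholders.
    let mut client = LightClient::new("https://example-filecoin-node/rpc/v1", "calibrationnet")?;

    // Fetch the power table for the starting instance and build the initial state.
    let state = client.initialize(1000).await?;

    // Pull the next finality certificate and validate it against that state.
    let cert = client.get_certificate(1001).await?;
    let _new_state = client.validate_certificates(&state, &[cert])?;
    println!("certificate validated");
    Ok(())
}
```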
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Actionable comments posted: 18
🔭 Outside diff range comments (1)
certs/src/lib.rs (1)
674-710: Tests will panic: MockVerifier methods are todo!() but are invoked

verify_signature calls Verifier::verify_aggregate; with todo!() these tests will panic. Make MockVerifier return Ok values.
Apply this diff:
```diff
- #[derive(Debug, thiserror::Error)]
- #[error("Mock verifier error")]
- struct MockVerifierError;
+ #[derive(Debug, thiserror::Error)]
+ #[error("Mock verifier error")]
+ struct MockVerifierError;
@@
 impl Verifier for MockVerifier {
     type Error = MockVerifierError;
@@
-    ) -> std::result::Result<(), Self::Error> {
-        todo!()
+    ) -> std::result::Result<(), Self::Error> {
+        Ok(())
     }
@@
-    ) -> std::result::Result<Vec<u8>, Self::Error> {
-        todo!()
+    ) -> std::result::Result<Vec<u8>, Self::Error> {
+        Ok(Vec::new())
     }
@@
-    ) -> std::result::Result<(), Self::Error> {
-        todo!()
+    ) -> std::result::Result<(), Self::Error> {
+        Ok(())
     }
 }
```

If you keep the insecure_skip_sig_verify feature, these tests won't depend on the verifier at all; otherwise, returning Ok here keeps unit tests focused on certificate flow logic.
🧹 Nitpick comments (30)
rust-toolchain.toml (1)
2-3: Toolchain bump to 1.82: align CI and declare MSRV in Cargo metadata

Good to see the toolchain update. Please also:
- Ensure CI images/runners use 1.82.0 to avoid drift and clippy lint deltas.
- Declare MSRV in Cargo metadata so crates fail early on older compilers.
Add this to the workspace Cargo.toml to advertise MSRV (complements rust-toolchain.toml):
```diff
 [workspace.package]
 authors = ["ChainSafe Systems <forest@chainsafe.io>"]
 repository = "https://github.com/ChainSafe/rust-f3"
 license = "MIT OR Apache-2.0"
 edition = "2021"
+rust-version = "1.82"
```

merkle/Cargo.toml (1)
1-14: Add rust-version and consider publish settings

Minor polish for crate metadata:
- Inherit MSRV from workspace to align with the 1.82 toolchain.
- If this crate is internal-only, consider `publish = false` to avoid accidental publication.

Apply in this file (after you add rust-version to [workspace.package] at the root as suggested):
[package] name = "filecoin-f3-merkle" version = "0.1.0" authors.workspace = true repository.workspace = true license.workspace = true edition.workspace = true +rust-version.workspace = true +# publish = falseCargo.toml (1)
5-10: Declare MSRV alongside toolchain bump

To reflect the new toolchain in crate metadata (helps users and CI), add rust-version to workspace package:
```diff
 [workspace.package]
 authors = ["ChainSafe Systems <forest@chainsafe.io>"]
 repository = "https://github.com/ChainSafe/rust-f3"
 license = "MIT OR Apache-2.0"
 edition = "2021"
+rust-version = "1.82"
```

f3manifest_calibrationnet.json (2)
9-11: Verify InitialPowerTable CID determinism vs asset content

Please confirm that the CID here matches the CBOR serialization produced from f3initialpowertable_calibrationnet.json using the same canonicalization the code uses at runtime. A mismatch would break bootstrap verification.
I can add (a) a unit test that re-serializes the JSON asset into the internal PowerEntries, computes the CBOR CID, and asserts equality with this string, or (b) a small CLI example to print the computed CID for manual cross-checking. Preference?
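As a sketch of option (a), using helpers named elsewhere in this review (`load_powertable`, `PowerEntries::serialize_cbor`, `cid_from_bytes`); the paths, return types, and manifest field access are assumptions:

```rust
#[test]
fn initial_power_table_cid_matches_manifest() {
    // Re-serialize the JSON asset exactly the way the runtime does.
    let entries = load_powertable("f3initialpowertable_calibrationnet.json");
    let cbor = entries.serialize_cbor();
    let cid = cid_from_bytes(&cbor);

    // Compare against the CID pinned in the manifest.
    let manifest: serde_json::Value =
        serde_json::from_str(include_str!("../../f3manifest_calibrationnet.json"))
            .expect("manifest should be valid JSON");
    let expected = manifest["InitialPowerTable"]["/"]
        .as_str()
        .expect("manifest should pin a CID string");
    assert_eq!(cid.to_string(), expected);
}
```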
2-41: Document units to avoid ambiguity

Durations and backoff values appear to be in nanoseconds and multipliers as floats. Consider documenting units in a README or schema to reduce misconfiguration risk.
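For example, assuming nanosecond units (and borrowing the RebroadcastBackoffMax value quoted later for f3manifest_filecoin.json), a quick sanity check:

```rust
use std::time::Duration;

fn main() {
    // 60000000000 read as nanoseconds is 60 seconds; without a documented unit it
    // could just as easily be misread as milliseconds (~694 days).
    assert_eq!(Duration::from_nanos(60_000_000_000), Duration::from_secs(60));
}
```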
certs/src/error.rs (1)
66-68: Preserve the underlying error as a source instead of flattening to String

Flattening the verifier error to String loses error context and prevents error chaining. If feasible, keep the underlying error as a #[source] (e.g., Box<dyn Error + Send + Sync + 'static> or the concrete BLS error type) and drop PartialEq on the enum or restrict it to variants that can derive Eq.
Example change:
- #[error("BLS signature verification failed for instance {instance}: {error}")] - SignatureVerificationFailed { instance: u64, error: String }, + #[error("BLS signature verification failed for instance {instance}")] + SignatureVerificationFailed { + instance: u64, + #[source] + error: Box<dyn std::error::Error + Send + Sync + 'static>, + },Note: This change requires removing PartialEq derive from the enum or implementing PartialEq manually for the variants where it is meaningful.
gpbft/src/api.rs (1)
5-5: Consider removing the dedicated Display import if using std::error::Error

Minor: if you adopt std::error::Error in the associated type bound, Display is implied by Error; you can remove the separate Display import or keep it if you reference Display elsewhere.
```diff
-use std::fmt::Display;
+// use std::fmt::Display; // Not needed if Error bound is std::error::Error
```

lightclient/Cargo.toml (2)
15-16: Enable required tokio features explicitly for examples and async runtime

If the lightclient or examples use #[tokio::main], spawn tasks, timers, or networking, explicitly enable tokio features to avoid build/CI surprises, especially when relying on workspace defaults.
```diff
-tokio.workspace = true
+tokio = { workspace = true, features = ["rt-multi-thread", "macros", "time", "sync"] }
```

If workspace already sets these features, this can be omitted.
9-14: Pin internal crate versions or add path+version for publish hygiene

If you intend to publish later, consider adding version fields alongside path for internal crates, or set publish = false to avoid accidental publication with path dependencies.
Option A: mark crate as non-publishable for now
[package] name = "filecoin-f3-lightclient" version = "0.1.0" +publish = falseOption B: add versions to internal deps (once you plan to publish)
```toml
filecoin-f3-rpc = { path = "../rpc", version = "0.1" }
filecoin-f3-certs = { path = "../certs", version = "0.1" }
filecoin-f3-gpbft = { path = "../gpbft", version = "0.1" }
filecoin-f3-blssig = { path = "../blssig", version = "0.1" }
```

gpbft/src/justification.rs (1)
5-5: Derive Debug/Clone/PartialEq/Eq for better ergonomics and testability.

This type will likely flow across APIs and tests. Deriving these traits improves debugging and avoids boilerplate. If you later need serialization, consider a gated serde derive once dependent types support it.
Apply this diff:
```diff
-/// Represents a justification for a decision in the GPBFT consensus protocol
-pub struct Justification {
+/// Represents a justification for a decision in the GPBFT consensus protocol
+#[derive(Debug, Clone, PartialEq, Eq)]
+pub struct Justification {
```

gpbft/src/test_utils.rs (2)
12-13: Stale comment references an unwrap that no longer exists.

The function no longer uses unwrap at this call-site. Update the comment to avoid confusion.
Apply this diff:
```diff
-    // Unwrap is fine here as `powertable_cid` should never fail.
+    // `powertable_cid` is infallible and returns a deterministic CID.
```
26-30: Confirm canonical CBOR matches prior encoding to keep CIDs stable.

Previously, to_vec from fvm_ipld_encoding implied canonical DAG-CBOR. Please double-check that PowerEntries::serialize_cbor() produces precisely the same canonical encoding; otherwise, derived CIDs will drift from reference implementations.
You can add an assertion test comparing both encodings to guard against regressions:
```rust
#[cfg(test)]
mod tests {
    use super::*;
    use fvm_ipld_encoding::to_vec;

    #[test]
    fn powertable_cbor_matches_reference() {
        let pt = create_powertable();
        let new = pt.serialize_cbor();
        let ref_bytes = to_vec(&pt).expect("reference encoding should not fail");
        assert_eq!(new, ref_bytes, "serialize_cbor must match reference DAG-CBOR");
    }
}
```

If exact byte-for-byte parity is not intended, document the encoding differences and adjust CID derivation accordingly.
lightclient/examples/certificates_validation.rs (2)
36-53: Consider adding error handling for network connectivity issues.

The initialization properly sets up the client and fetches the initial state. However, consider adding retry logic or more specific error messages for network failures to improve the user experience.
148-150: Consider using hex crate for consistent hex encoding.

While the current implementation works, using the `hex` crate (already a dependency in the workspace) would be more consistent with the rest of the codebase.

```diff
-fn to_hex(bytes: &[u8]) -> String {
-    bytes.iter().map(|b| format!("{:02x}", b)).collect()
-}
+fn to_hex(bytes: &[u8]) -> String {
+    hex::encode(bytes)
+}
```

blssig/src/bdn/mod.rs (2)
26-51: Consider adding coefficient calculation for future BDN implementation

The aggregation currently performs standard BLS aggregation without the BDN coefficient weighting. When implementing the proper BDN scheme, you'll need to calculate coefficients based on the hash of all public keys.
For future implementation, consider this structure:
```diff
 pub fn aggregate_sigs(&self, sigs: &[Vec<u8>]) -> Result<Vec<u8>, BLSError> {
     if sigs.len() != self.pub_keys.len() {
         return Err(BLSError::LengthMismatch {
             pub_keys: self.pub_keys.len(),
             sigs: sigs.len(),
         });
     }
+    // TODO: Calculate BDN coefficients
+    // let coefficients = self.calculate_bdn_coefficients()?;
+
     let mut aggregated_point = G2Projective::identity();
-    for sig_bytes in sigs.iter() {
+    for (i, sig_bytes) in sigs.iter().enumerate() {
         // Deserialize signature to G2 point
         let signature =
             Signature::from_bytes(sig_bytes).map_err(|_| BLSError::SignatureDeserialization)?;
         let sig_point: G2Projective = signature.into();
-        // Standard BLS aggregation
-        aggregated_point += sig_point;
+        // TODO: Apply BDN coefficient weighting
+        // let weighted_point = sig_point * coefficients[i];
+        // aggregated_point += weighted_point;
+        aggregated_point += sig_point; // temporary standard aggregation
     }
```
53-71: Public key aggregation needs coefficient weighting for BDN

Similar to signature aggregation, the public key aggregation should apply BDN coefficients to prevent rogue public-key attacks.
Structure for future BDN implementation:
```diff
 pub fn aggregate_pub_keys(&self) -> Result<PublicKey, BLSError> {
+    // TODO: Calculate BDN coefficients
+    // let coefficients = self.calculate_bdn_coefficients()?;
+
     let mut aggregated_point = G1Projective::identity();
-    for pub_key_bytes in &self.pub_keys {
+    for (i, pub_key_bytes) in self.pub_keys.iter().enumerate() {
         let public_key = PublicKey::from_bytes(&pub_key_bytes.0)
             .map_err(|_| BLSError::PublicKeyDeserialization)?;
         // Convert public key to G1Projective for curve operations
         let pub_key_point: G1Projective = public_key.into();
-        // Standard BLS aggregation
-        aggregated_point += pub_key_point;
+        // TODO: Apply BDN coefficient weighting
+        // let weighted_point = pub_key_point * coefficients[i];
+        // aggregated_point += weighted_point;
+        aggregated_point += pub_key_point; // temporary standard aggregation
     }
```

rpc/src/lib.rs (1)
19-28: Consider adding retry logic for network resilience

RPC calls can fail due to transient network issues. Consider implementing retry logic with exponential backoff.
You could wrap the RPC calls with a retry mechanism. Consider using a crate like `backoff` or implementing a simple retry wrapper:

```rust
async fn with_retry<T, F, Fut>(mut f: F) -> Result<T>
where
    F: FnMut() -> Fut,
    Fut: Future<Output = Result<T>>,
{
    let mut retries = 3;
    loop {
        match f().await {
            Ok(result) => return Ok(result),
            Err(_) if retries > 0 => {
                retries -= 1;
                tokio::time::sleep(Duration::from_millis(100 * (4 - retries))).await;
            }
            Err(e) => return Err(e),
        }
    }
}
```

lightclient/src/rpc_to_internal.rs (1)
79-84: Inconsistent PubKey construction patterns

The code uses `PubKey::new()` here but direct construction `PubKey()` on line 14. This inconsistency could be confusing.

Use consistent construction pattern:
```diff
     signing_key: match val.signing_key {
         Some(base64_str) => {
-            PubKey::new(general_purpose::STANDARD.decode(&base64_str)?) // base64 -> bytes -> PubKey
+            PubKey(general_purpose::STANDARD.decode(&base64_str)?) // base64 -> bytes -> PubKey
         }
         None => PubKey::default(),
     },
```

rpc/src/types.rs (2)
68-68: Unused type alias ActorID

The type alias `ActorID` is defined but not used anywhere in the file. The actual usage is `ActorId` (lowercase 'd') imported from `filecoin_f3_gpbft`.

Remove the unused type alias:

```diff
-pub type ActorID = u64;
```
22-23: Consider documenting the default behavior for PowerTableDelta

The `#[serde(default)]` attribute means an empty vector will be used if the field is missing. This behavior should be documented for API users.

Add a doc comment:

```diff
+    /// Power table deltas. Defaults to an empty vector if not present in the response.
     #[serde(rename = "PowerTableDelta", default)]
     pub power_table_delta: Vec<PowerTableDelta>,
```

blssig/src/verifier/mod.rs (3)
76-83: Use bls_signatures::verify for single-sig to avoid heap allocations

No need to allocate Vecs and call verify_messages for the single-signer case.
Apply this diff:
```diff
-    // Verify using bls-signatures
-    let messages = vec![msg];
-    let public_keys = vec![pub_key];
-    match verify_messages(&signature, &messages, &public_keys) {
-        true => Ok(()),
-        false => Err(BLSError::SignatureVerificationFailed),
-    }
+    // Verify using bls-signatures
+    if bls_signatures::verify(&signature, msg, &pub_key) {
+        Ok(())
+    } else {
+        Err(BLSError::SignatureVerificationFailed)
+    }
```
120-122: Differentiate empty pub_keys vs empty sigs for clearer errors

Returning EmptyPublicKeys when sigs.is_empty() misleads debugging. Consider a dedicated error for empty signatures or a more precise variant.
If you don’t want to add a new error variant, at least separate the checks to improve clarity:
```diff
-    if pub_keys.is_empty() || sigs.is_empty() {
-        return Err(BLSError::EmptyPublicKeys);
-    }
+    if pub_keys.is_empty() {
+        return Err(BLSError::EmptyPublicKeys);
+    }
+    if sigs.is_empty() {
+        return Err(BLSError::SignatureVerificationFailed);
+    }
```
50-51: Consider an LRU instead of full cache clear

Clearing the entire cache when the limit is hit can cause bursts of re-deserializations. An LRU with a modest capacity would smooth performance.
I can wire in the lru crate (or hashlink) if you’d like.
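A minimal sketch with the `lru` crate — the struct name, capacity handling, and lock strategy are illustrative, and `bls_signatures::Serialize` is assumed to be the trait providing `PublicKey::from_bytes`, as the existing verifier appears to use:

```rust
use std::num::NonZeroUsize;
use std::sync::Mutex;

use bls_signatures::{PublicKey, Serialize as _};
use lru::LruCache;

/// LRU-backed cache of deserialized public keys, keyed by their raw bytes.
struct PointCache {
    inner: Mutex<LruCache<Vec<u8>, PublicKey>>,
}

impl PointCache {
    fn new(capacity: usize) -> Self {
        Self {
            inner: Mutex::new(LruCache::new(NonZeroUsize::new(capacity).expect("non-zero"))),
        }
    }

    /// Returns a cached key, or deserializes and caches it; the least recently
    /// used entry is evicted once capacity is reached (no bulk clearing).
    fn get_or_insert(&self, bytes: &[u8]) -> Result<PublicKey, bls_signatures::Error> {
        let mut cache = self.inner.lock().expect("cache lock poisoned");
        if let Some(pk) = cache.get(bytes) {
            return Ok(pk.clone());
        }
        let pk = PublicKey::from_bytes(bytes)?;
        cache.put(bytes.to_vec(), pk.clone());
        Ok(pk)
    }
}
```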
lightclient/src/lib.rs (2)
53-72: validate_certificates doesn't mutate self; take &self instead of &mut self

This removes an unnecessary mut borrow and eases integration.
```diff
-    pub fn validate_certificates(
-        &mut self,
+    pub fn validate_certificates(
+        &self,
         state: &LightClientState,
         certs: &[certs::FinalityCertificate],
     ) -> certs::Result<LightClientState> {
```
25-33: Surface errors consistently across the API

new and get_certificate return anyhow::Result, while validate_certificates returns certs::Result. Consider unifying return types for a cleaner public API.
I can prepare a patch standardizing to a single error type or a domain-specific Result alias for the lightclient crate.
gpbft/src/lib.rs (1)
33-33: Gate test utilities behind cfg(test) or a feature flag

Shipping test_utils in the public API by default increases surface and can bloat downstream crates.
```diff
-pub mod test_utils;
+#[cfg(test)]
+pub mod test_utils;
```

If external crates rely on these utilities, gate behind a feature like test-utils and default it off.
certs/src/lib.rs (1)
126-204: apply_power_table_diffs: semantics look sound; one minor doc nit

The function returns the power table sorted by power descending as documented; consider noting that equal-power ordering is stable/unspecified.
No code changes necessary; optional doc addition near the function comment.
gpbft/src/payload.rs (1)
137-263: Thorough tests; small improvement: assert prefix length explicitly

Tests exercise multiple phases and the DECIDE vector. Optionally assert the prefix length constant (15) to catch accidental changes.
No code diff necessary; purely optional assertion addition in tests.
gpbft/src/powertable.rs (2)
64-66: Consider adding Serialize back to PowerEntries

The `PowerEntries` struct removed the `Serialize` derive but added a custom `serialize_cbor` method. This asymmetry might cause confusion for users expecting standard serde serialization.

Consider either:
- Re-adding the `Serialize` derive for standard serde compatibility, or
- Adding documentation explaining why standard serialization is not available and pointing to `serialize_cbor`

```diff
 /// A collection of `PowerEntry` instances representing the power distribution in the network
+/// Note: Use `serialize_cbor()` for CBOR serialization compatible with go-f3
 #[derive(Clone, Deserialize, PartialEq, Debug)]
 #[serde(transparent)]
 pub struct PowerEntries(pub Vec<PowerEntry>);
```
245-276: Add error handling for malformed JSON entries

The `load_powertable` function uses `.expect()` for JSON parsing which will panic on malformed data. Consider more graceful error handling for production robustness.

```diff
 fn load_powertable(path: &str) -> PowerEntries {
     // ... existing code ...
     let mut entries = Vec::new();
     for entry_json in entries_json {
-        let id = entry_json["ID"].as_u64().expect("invalid ID");
-        let power_str = entry_json["Power"].as_str().expect("invalid Power");
-        let power = power_str.parse::<u64>().expect("failed to parse Power");
-        let pubkey_b64 = entry_json["PubKey"].as_str().expect("invalid PubKey");
+        let id = entry_json["ID"].as_u64().ok_or_else(|| anyhow!("invalid ID"))?;
+        let power_str = entry_json["Power"].as_str().ok_or_else(|| anyhow!("invalid Power"))?;
+        let power = power_str.parse::<u64>().map_err(|e| anyhow!("failed to parse Power: {}", e))?;
+        let pubkey_b64 = entry_json["PubKey"].as_str().ok_or_else(|| anyhow!("invalid PubKey"))?;
```

However, since this is test code, the current approach is acceptable.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (31)
- Cargo.toml (2 hunks)
- blssig/Cargo.toml (1 hunks)
- blssig/src/bdn/mod.rs (1 hunks)
- blssig/src/lib.rs (1 hunks)
- blssig/src/verifier/mod.rs (1 hunks)
- blssig/src/verifier/tests.rs (1 hunks)
- certs/Cargo.toml (1 hunks)
- certs/src/error.rs (2 hunks)
- certs/src/lib.rs (17 hunks)
- f3initialpowertable_calibrationnet.json (1 hunks)
- f3manifest_calibrationnet.json (1 hunks)
- f3manifest_filecoin.json (1 hunks)
- gpbft/Cargo.toml (1 hunks)
- gpbft/src/api.rs (1 hunks)
- gpbft/src/chain.rs (4 hunks)
- gpbft/src/justification.rs (1 hunks)
- gpbft/src/lib.rs (1 hunks)
- gpbft/src/payload.rs (1 hunks)
- gpbft/src/powertable.rs (5 hunks)
- gpbft/src/test_utils.rs (1 hunks)
- lightclient/Cargo.toml (1 hunks)
- lightclient/examples/certificates_validation.rs (1 hunks)
- lightclient/src/lib.rs (1 hunks)
- lightclient/src/rpc_to_internal.rs (1 hunks)
- merkle/Cargo.toml (1 hunks)
- merkle/src/lib.rs (1 hunks)
- merkle/src/tests.rs (1 hunks)
- rpc/Cargo.toml (1 hunks)
- rpc/src/lib.rs (1 hunks)
- rpc/src/types.rs (1 hunks)
- rust-toolchain.toml (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (14)
gpbft/src/api.rs (1)
  gpbft/src/chain.rs (2): fmt (106-110), fmt (296-313)
lightclient/src/rpc_to_internal.rs (3)
  gpbft/src/chain.rs (2): suffix (200-206), new (147-153)
  gpbft/src/payload.rs (1): new (29-43)
  certs/src/lib.rs (1): new (96-119)
merkle/src/tests.rs (1)
  merkle/src/lib.rs (1): tree (22-29)
gpbft/src/chain.rs (2)
  merkle/src/lib.rs (1): tree (22-29)
  gpbft/src/test_utils.rs (1): create_test_tipset (8-16)
blssig/src/verifier/tests.rs (3)
  blssig/src/bdn/mod.rs (1): new (16-24)
  blssig/src/verifier/mod.rs (1): new (53-57)
  lightclient/src/lib.rs (1): new (26-32)
blssig/src/bdn/mod.rs (7)
  blssig/src/verifier/tests.rs (2): new (13-19), public_key (21-23)
  blssig/src/verifier/mod.rs (1): new (53-57)
  gpbft/src/payload.rs (1): new (29-43)
  lightclient/src/lib.rs (1): new (26-32)
  gpbft/src/chain.rs (1): new (147-153)
  rpc/src/lib.rs (1): new (14-17)
  certs/src/lib.rs (1): new (96-119)
lightclient/examples/certificates_validation.rs (5)
  gpbft/src/powertable.rs (2): is_strong_quorum (160-162), signer_scaled_total (147-157)
  certs/src/lib.rs (3): signer_scaled_total (429-463), new (96-119), cert (407-412)
  lightclient/src/lib.rs (2): new (26-32), power_table (36-39)
  gpbft/src/chain.rs (3): new (147-153), suffix (200-206), key (275-286)
  rpc/src/lib.rs (1): new (14-17)
blssig/src/verifier/mod.rs (5)
  blssig/src/bdn/mod.rs (1): new (16-24)
  blssig/src/verifier/tests.rs (1): new (13-19)
  gpbft/src/payload.rs (1): new (29-43)
  lightclient/src/lib.rs (1): new (26-32)
  gpbft/src/api.rs (3): verify (23-23), aggregate (33-33), verify_aggregate (46-51)
lightclient/src/lib.rs (5)
  certs/src/lib.rs (2): validate_finality_certificates (340-426), new (96-119)
  blssig/src/verifier/mod.rs (1): new (53-57)
  gpbft/src/chain.rs (1): new (147-153)
  rpc/src/lib.rs (2): new (14-17), get_certificate (30-39)
  lightclient/src/rpc_to_internal.rs (2): convert_power_entry (10-16), convert_certificate (19-37)
rpc/src/lib.rs (1)
  lightclient/src/lib.rs (2): new (26-32), get_certificate (48-51)
gpbft/src/powertable.rs (1)
  gpbft/src/chain.rs (2): cid_from_bytes (290-293), new (147-153)
gpbft/src/payload.rs (1)
  gpbft/src/chain.rs (4): new (147-153), serialize_for_signing (84-102), cid_from_bytes (290-293), new_unvalidated (156-158)
certs/src/lib.rs (2)
  gpbft/src/chain.rs (2): cid_from_bytes (290-293), new_unvalidated (156-158)
  gpbft/src/powertable.rs (2): is_strong_quorum (160-162), signer_scaled_total (147-157)
gpbft/src/lib.rs (2)
  gpbft/src/chain.rs (1): cid_from_bytes (290-293)
  gpbft/src/powertable.rs (1): is_strong_quorum (160-162)
🪛 Checkov (3.2.334)
f3initialpowertable_calibrationnet.json
[HIGH] 100-101: AWS Access Key
(CKV_SECRET_2)
🔇 Additional comments (39)
f3initialpowertable_calibrationnet.json (1)
43-51: Duplicate PubKey across two distinct IDs (116147 and 122890)

Entries at Lines 43-51 and 48-51 share the same PubKey. If miners are expected to have unique BLS pubkeys, this may indicate a data issue. If not, please confirm this duplication is intentional.
Do you want me to generate a quick validator to catch duplicate PubKeys/IDs and non-numeric Power values for all manifests?
Cargo.toml (1)
17-35: Workspace dependency scope: confirm feature needs and avoid unneeded bloat
- jsonrpsee has both ws-client and http-client features enabled; if the light client only uses HTTP in practice, consider dropping ws-client to reduce transitive deps. If both are used, ignore.
- bls-signatures (0.15) + bls12_381 (0.8) mixing is common; verify no duplicate pairing crates lead to symbol clashes at link time across workspace members.
Do you want me to scan crate sources to map which members actually use ws vs http and propose minimal feature sets per crate?
f3manifest_filecoin.json (2)
21-23: Confirm numeric types expected by the config loader for fractional fields

Short summary: I couldn't find a typed manifest/Gpbft config struct or any code that deserializes the backoff fields — the JSON fixtures contain fractional values, but no loader/site-of-deserialization was found to confirm whether floats are expected.
Relevant findings:
- Manifest fixtures containing fractional fields
- f3manifest_filecoin.json — RebroadcastBackoffExponent: 1.3, RebroadcastBackoffSpread: 0.1, RebroadcastBackoffMax: 60000000000 (lines ~21–23); BaseDecisionBackoffTable: [1.3, 1.69, ...] (line ~29).
- Code inspected
- gpbft/src/powertable.rs::load_cid_from_manifest (tests, ~lines 279–292) — uses serde_json::Value and only reads InitialPowerTable["/"] (CID). It does not parse the backoff fields.
- Search results
- No typed Manifest or Gpbft config struct found in the repo.
- No code references to keys RebroadcastBackoffExponent/RebroadcastBackoffSpread/BaseDecisionBackoffTable were found (only present in the JSON fixtures).
Request / next step:
- Please confirm where the manifest is parsed/loaded at runtime (path to the deserializer or config struct). If the loader is typed, ensure the fields are float (f64/f32) or adjust the manifest to use integer-compatible values. If you want, point me to the deserializer and I’ll re-check the expected numeric types.
9-11: Cannot verify InitialPowerTable CID — gateway fetches timed out / DNS failures

I attempted to fetch CID bafy2bzacecklgxd2eksmodvhgurqvorkg3wamgqkrunir3al2gchv2cikgmbu from multiple public IPFS gateways (ipfs.io, cloudflare-ipfs.com, w3s.link, etc.). All attempts failed (ipfs.io/w3s timed out, cloudflare DNS failed) so I could not confirm the JSON or its schema.
File to check:
- f3manifest_filecoin.json (lines ~9-11)
"InitialPowerTable": { "/": "bafy2bzacecklgxd2eksmodvhgurqvorkg3wamgqkrunir3al2gchv2cikgmbu" },Please verify one of the following and update the PR accordingly:
- Ensure the CID is pinned and reachable from public gateways (or provide an alternate public URL).
- Include the power-table JSON in the repo or attach it to the PR for review.
- Locally validate and paste the output (or fix) by running:
- ipfs cat /ipfs/bafy2bzacecklgxd2eksmodvhgurqvorkg3wamgqkrunir3al2gchv2cikgmbu | jq .
- or curl -sfL https://ipfs.io/ipfs/bafy2bzacecklgxd2eksmodvhgurqvorkg3wamgqkrunir3al2gchv2cikgmbu | jq .
Validation target: top-level object mapping IDs → objects containing fields ID (or id), Power (or power) and PubKey (or pubKey/pub_key/pubkey).
certs/Cargo.toml (2)
12-16: No action required — workspace enables serde `derive`

The root workspace already enables serde's derive feature, and the certs crate uses the workspace dependency, so no change is needed.
- Cargo.toml (root) — [workspace.dependencies]: `serde = { version = "1", features = ["derive"] }` (see search output line ~17)
- certs/Cargo.toml — `serde = { workspace = true }` (line 12)
- gpbft/src/types.rs — uses `#[derive(Deserialize, Serialize)]` (line 15), demonstrating the derive feature is required and available

No diff required; leaving `serde = { workspace = true }` is fine.
8-9: Dependency hygiene — filecoin-f3-gpbft: 'test-utils' feature verified (no change required)

cargo metadata shows filecoin-f3-gpbft declares a "test-utils" feature and the workspace uses that feature only from dev-dependencies (the repo also documents this trick in the linter). This is the expected pattern; Cargo will unify features across the single resolved crate, so the feature will be active when running tests/benches that pull in the dev-deps.
Relevant locations (from workspace search):
- gpbft/Cargo.toml — package + feature "test-utils"; dev-dep self-reference enabling the feature (gpbft/Cargo.toml:2,27)
- certs/Cargo.toml — normal dependency: filecoin-f3-gpbft = { path = "../gpbft" } (certs/Cargo.toml:7) and dev-dependency enabling feature: filecoin-f3-gpbft = { path = "../gpbft", features = ["test-utils"] } (certs/Cargo.toml:20)
- lightclient/Cargo.toml — filecoin-f3-gpbft = { path = "../gpbft" } (lightclient/Cargo.toml:12)
- rpc/Cargo.toml — filecoin-f3-gpbft = { path = "../gpbft" } (rpc/Cargo.toml:12)
- blssig/Cargo.toml — filecoin-f3-gpbft = { path = "../gpbft" } (blssig/Cargo.toml:16)
- scripts/linters/find_unused_deps.rb — contains comment acknowledging the intentional "test-utils" trick (scripts/linters/find_unused_deps.rb:18-21)
Conclusion: pattern is intentional and safe for tests/benches; no change required unless you want to avoid enabling the feature during workspace-wide test runs.
rpc/Cargo.toml (1)
10-21: No change required — workspace already enables tokio and jsonrpsee features

Confirmed: the workspace root Cargo.toml already declares the needed features, so rpc/Cargo.toml's workspace = true will inherit them.
- Cargo.toml (root): tokio = { version = "1", features = ["rt-multi-thread", "macros"] } (rg output line ~18)
- Cargo.toml (root): jsonrpsee = { version = "0.24", features = ["ws-client", "http-client"] } (rg output line ~20)
Remove/ignore the suggested diff — no edits needed in rpc/Cargo.toml.
blssig/Cargo.toml (1)
10-22: Dependencies wiring looks consistent with workspace strategy.

Path link to gpbft and workspace-managed crypto/dev deps look good and align with the new verifier module. No issues from this manifest alone.
gpbft/Cargo.toml (1)
10-25: Workspace unification and new merkle dependency look correct.

Switching common crates to workspace-managed versions and introducing the merkle local dependency aligns with the new chain key generation approach. Looks good.
blssig/src/lib.rs (2)
4-9: Module documentation is clear and comprehensive.

The documentation effectively describes the BLS signature implementation, the specific curve used (BLS12_381 with G1/G2), and mentions the BDN scheme for preventing rogue public-key attacks. This aligns well with the PR objectives.
11-14: LGTM! Clean module structure and appropriate public exports.

The module organization with separate `bdn` and `verifier` modules provides good separation of concerns, and the public exports are appropriately scoped.

merkle/src/tests.rs (3)
3-10: LGTM! Clean helper function implementation.

The `generate_inputs` function correctly mimics the go-f3 reference implementation's test helper with proper formatting.
12-18: LGTM! Proper test for empty input case.

The test correctly verifies that an empty input yields `ZERO_DIGEST`, which is essential for the Merkle tree implementation.
23-54: Golden test implementation looks good.

The test correctly validates the Merkle tree implementation against 16 predefined test vectors from go-f3, ensuring compatibility. The error messages are informative and will help with debugging if the test fails.
gpbft/src/chain.rs (4)
78-102: LGTM! Well-documented serialization method with clear byte layout.

The `serialize_for_signing` method provides a deterministic byte sequence that matches the go-f3 reference implementation. The documentation clearly specifies the exact byte layout, and the implementation correctly uses big-endian encoding for the epoch and proper CBOR encoding for the tipset key.
275-286: LGTM! Clean Merkle-based key generation.

The updated `key()` method correctly uses the new Merkle tree implementation for chain key generation, properly handling the empty chain case by returning `ZERO_DIGEST`. This aligns with the PR objective of replacing the previous incompatible format.
495-508: Test properly validates Merkle tree integration.

The renamed test correctly verifies that the chain key is 32 bytes (matching a Merkle digest) and that different chains produce different keys, which are the essential properties for the new key generation method.
524-564: Comprehensive golden test for serialization format.

The test thoroughly validates the `serialize_for_signing` method against the go-f3 reference implementation, checking the exact byte layout including epoch encoding, commitments placement, and CID serialization. This ensures compatibility with the reference implementation.

blssig/src/verifier/tests.rs (3)
6-29: Well-structured test helper for BLS signing.

The `BLSSigner` struct provides a clean test utility with appropriate encapsulation of the private key and convenient methods for testing signature generation and verification.
31-45: LGTM! Basic signature verification test.

The test properly validates the happy path of BLS signature verification, ensuring that a signature created by `BLSSigner` can be verified by `BLSVerifier`.
47-67: Good negative test case for signature verification.

The test correctly validates that corrupted signatures fail verification, which is essential for ensuring the security of the BLS implementation.
lightclient/examples/certificates_validation.rs (3)
1-19: LGTM! Clear documentation and appropriate imports.

The example provides clear usage instructions and imports the necessary modules for F3 certificate validation.
25-34: Consider using HTTPS URLs consistently for production endpoints.

Both endpoints use HTTPS which is good for security. The configuration structure is clean and maintainable.
123-146: LGTM! Clean command-line argument parsing with helpful error messages.

The function provides clear error messages and properly validates the network name against available configurations.
merkle/src/lib.rs (1)
1-74: LGTM! Well-structured Merkle tree implementation

The implementation correctly follows the go-f3 reference with proper use of Keccak-256 hashing and leaf/internal node markers. The recursive tree construction is clean and efficient.
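For readers comparing against go-f3, a condensed sketch of the scheme described above (domain-separated leaf/internal hashing over Keccak-256, here via the `sha3` crate). The marker byte values, empty-tree digest, and midpoint split are illustrative assumptions, not the crate's actual constants or split rule:

```rust
use sha3::{Digest, Keccak256};

const LEAF_MARKER: u8 = 0x00; // assumed marker values
const INTERNAL_MARKER: u8 = 0x01;
const ZERO_DIGEST: [u8; 32] = [0u8; 32]; // digest of the empty tree (illustrative)

fn hash_leaf(value: &[u8]) -> [u8; 32] {
    Keccak256::new()
        .chain_update([LEAF_MARKER])
        .chain_update(value)
        .finalize()
        .into()
}

fn hash_internal(left: &[u8; 32], right: &[u8; 32]) -> [u8; 32] {
    Keccak256::new()
        .chain_update([INTERNAL_MARKER])
        .chain_update(left)
        .chain_update(right)
        .finalize()
        .into()
}

/// Recursive construction: single values become leaves, halves combine as internal nodes.
fn tree(values: &[&[u8]]) -> [u8; 32] {
    match values.len() {
        0 => ZERO_DIGEST,
        1 => hash_leaf(values[0]),
        n => {
            let mid = n / 2; // go-f3 uses a power-of-two split; midpoint shown for brevity
            hash_internal(&tree(&values[..mid]), &tree(&values[mid..]))
        }
    }
}
```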
rpc/src/lib.rs (1)
1-40: Clean RPC client implementation

The RPC client provides a simple and focused interface for F3 certificate and power table retrieval. The use of typed RPC methods and proper error propagation is good.
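As a rough illustration of such a call with jsonrpsee's HTTP client (the workspace enables the http-client feature): the RPC method name comes from the sequence diagram, while the function name, parameter shape, and use of a raw JSON value instead of the crate's typed response are assumptions:

```rust
use jsonrpsee::core::client::ClientT;
use jsonrpsee::http_client::HttpClientBuilder;
use jsonrpsee::rpc_params;

/// Fetch a finality certificate for a GPBFT instance from a Filecoin node.
async fn fetch_certificate(endpoint: &str, instance: u64) -> anyhow::Result<serde_json::Value> {
    let client = HttpClientBuilder::default().build(endpoint)?;
    let cert: serde_json::Value = client
        .request("Filecoin.F3GetCertificate", rpc_params![instance])
        .await?;
    Ok(cert)
}
```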
lightclient/src/rpc_to_internal.rs (2)
39-54: Good error handling and validation

The chain conversion properly validates non-empty input and provides clear error context. The use of `context()` for error messages is helpful for debugging.
14-14: Confirmed — no change required

go-f3 encodes []byte as RFC4648 "standard" Base64 (+/ with = padding); Rust's base64::engine::general_purpose::STANDARD matches that, so the decode call is correct.
- File: lightclient/src/rpc_to_internal.rs (around line 14)
- pub_key: PubKey(general_purpose::STANDARD.decode(&val.pub_key)?), // base64 -> bytes -> PubKey
rpc/src/types.rs (1)
80-105: Well-designed stringify module for BigInt serialization

The generic stringify module elegantly handles BigInt serialization/deserialization as strings, which is essential for JSON compatibility with large numbers. The use of trait bounds ensures type safety.
gpbft/src/lib.rs (1)
42-48: Re-exports look coherent

The module reshaping and re-exports provide a cleaner public surface. No functional issues spotted here.
certs/src/lib.rs (2)
428-464: Good: robust signer validation (bounds and zero-effective-power checks)

The additional checks around signer indices and scaled power provide precise error reporting and prevent out-of-bounds access.
376-385: Switch to PowerEntries::serialize_cbor is correct and aligns with CID derivation

This change matches the reference behavior and prevents CBOR encoding mismatches when computing the power table CID.
gpbft/src/payload.rs (1)
45-96: serialize_for_signing aligns with go-f3: domain separation and field ordering

The format and preallocation are correct; tests validate length and byte layout.
gpbft/src/powertable.rs (6)
13-13: LGTM: Well-defined power scaling constant

The `MAX_POWER` constant (65535) provides a reasonable upper bound for scaled power values, matching the F3 protocol requirements.
86-115: LGTM: Robust power scaling implementation

The two-pass approach is well-designed:
- First pass validates all powers are positive and calculates total
- Second pass performs scaling and accumulates scaled total
The error handling for non-positive powers is appropriate and the early validation prevents downstream issues.
159-162: LGTM: Correct strong quorum calculation

The strong quorum check `power >= div_ceil(2 * power_total, 3)` correctly implements the 2/3 majority requirement using ceiling division.
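A small self-contained check of that threshold (the two functions are restated here purely for the example):

```rust
fn div_ceil(a: i64, b: i64) -> i64 {
    let (quo, rem) = (a / b, a % b);
    if rem != 0 { quo + 1 } else { quo }
}

fn is_strong_quorum(power: i64, power_total: i64) -> bool {
    power >= div_ceil(2 * power_total, 3)
}

fn main() {
    // With a scaled total of 100, the 2/3 threshold is ceil(200 / 3) = 67.
    assert_eq!(div_ceil(2 * 100, 3), 67);
    assert!(is_strong_quorum(67, 100));
    assert!(!is_strong_quorum(66, 100));
    println!("strong-quorum threshold checks pass");
}
```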
227-234: LGTM: Comprehensive integration tests

The tests validate end-to-end CBOR serialization against known CID values from manifest files. This ensures compatibility with the reference implementation.
117-144: Verify CBOR serialization matches go-f3 exactly

I couldn't find go-f3's implementation in this repository to compare against, so I can't confirm wire-level compatibility. The code under review appears to insert a leading zero byte for the big-int power and encodes each entry as [id, bytes(power), bytes(pub_key)].
Files to check / actions:
- gpbft/src/powertable.rs — PowerEntries::serialize_cbor (around lines 117–144): uses entry.power.to_bytes_be().1 then bytes.insert(0, 0).
- gpbft/src/powertable.rs tests — test_calibrationnet_initial_powertable_cid and test_filecoin_initial_powertable_cid: run these unit tests to validate the produced CBOR/CID against the manifests.
- Compare the produced CBOR bytes against go-f3's powertable serialization (big.Int.MarshalCBOR) in the go-f3 repo; if you cannot access go-f3, provide reference CBOR test vectors to confirm exact equivalence.
47-61: Verify scaling formula matches go-f3 reference

I inspected gpbft/src/powertable.rs: the implementation uses BigInt arithmetic and integer division:
- const MAX_POWER: i64 = 0xffff (65535) — gpbft/src/powertable.rs (line ~13)
- scale implementation: let scaled = (&self.power * MAX_POWER) / total; then converted with to_i64().unwrap_or(0) — gpbft/src/powertable.rs (impl PowerEntry::scale_power, lines ~47–61)
- scaled values are produced as i64 (PowerEntries::scaled()) and later stored as Vec on PowerTable — verify the range fits u16 (0..65535).
I could not locate the go-f3 reference powertable implementation (powertable.go) to compare division order, rounding/precision behavior, or whether 65535 is treated as a sentinel. Please either (a) provide a link to the go-f3 implementation or confirm that go-f3 uses the same multiply-then-divide integer division semantics and MAX_POWER=65535; or (b) if go-f3 uses different rounding (ceil/round-to-nearest) or special handling for MAX_POWER, update this implementation to match.
```rust
//! BDN (Boneh-Drijvers-Neven) signature aggregation scheme, for preventing rogue public-key attacks.
//!
//! NOTE: currently uses standard BLS aggregation without coefficient weighting, hence returns incorrect values compared to go-f3.
//!
```
Critical: Incomplete BDN implementation - returns incorrect results
The module acknowledges that it's using standard BLS aggregation instead of proper BDN coefficient weighting. This means the signature verification will fail when compared against go-f3's implementation. The PR objectives note this as a known issue with expected signature verification errors being temporarily silenced.
Would you like me to help implement the proper BDN coefficient weighting to match the go-f3 reference implementation?
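If it helps frame that work, here is a rough sketch of what coefficient-weighted aggregation looks like in general. This is not go-f3's or kyber's exact derivation — the hash used, coefficient size, and reduction into the scalar field are all assumptions, and it assumes bls-signatures is built with its bls12_381 (pairing) backend as the reviewed module appears to do — it only shows the shape: each signature is multiplied by a per-signer coefficient derived from the whole key set before summing, instead of a plain sum:

```rust
use bls12_381::{G2Projective, Scalar};
use bls_signatures::{Serialize as _, Signature};
use sha2::{Digest, Sha512};

/// One coefficient per signer, derived from its key plus the full key set (illustrative hash).
fn bdn_coefficients(pub_keys: &[Vec<u8>]) -> Vec<Scalar> {
    pub_keys
        .iter()
        .map(|pk| {
            let mut hasher = Sha512::new();
            hasher.update(pk);
            for other in pub_keys {
                hasher.update(other);
            }
            let mut wide = [0u8; 64];
            wide.copy_from_slice(&hasher.finalize());
            // Reduce the 64-byte digest into a field element.
            Scalar::from_bytes_wide(&wide)
        })
        .collect()
}

/// Weighted aggregation: sum of c_i * sig_i rather than a plain sum.
fn aggregate_weighted(sigs: &[Vec<u8>], coeffs: &[Scalar]) -> Option<G2Projective> {
    let mut acc = G2Projective::identity();
    for (bytes, c) in sigs.iter().zip(coeffs) {
        let sig = Signature::from_bytes(bytes).ok()?;
        let point: G2Projective = sig.into();
        acc += point * c;
    }
    Some(acc)
}
```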
```rust
// Verify the aggregate signature
let res = verifier
    .verify_aggregate(&payload_bytes, &cert.signature, &signers_pk)
    .map_err(|e| CertsError::SignatureVerificationFailed {
        instance: cert.gpbft_instance,
        error: e.to_string(),
    });

// Temporarily silencing verification errors
// The current BDN implementation uses standard BLS aggregation, causing verification to fail.
// This logging allows development to continue.
// TODO: Remove this workaround once BDN is implemented
if let Err(err) = res {
    println!("WARN: {}", err);
}

Ok(())
}
```
Do not ignore signature verification failures in production paths
Swallowing verify_aggregate errors and returning Ok(()) undermines safety. If you must bypass verification temporarily, gate it behind a feature flag and return errors otherwise. Also, prefer structured logging over println!.
Apply this diff to gate the bypass and propagate errors by default:
- // Verify the aggregate signature
- let res = verifier
- .verify_aggregate(&payload_bytes, &cert.signature, &signers_pk)
- .map_err(|e| CertsError::SignatureVerificationFailed {
- instance: cert.gpbft_instance,
- error: e.to_string(),
- });
-
- // Temporarily silencing verification errors
- // The current BDN implementation uses standard BLS aggregation, causing verification to fail.
- // This logging allows development to continue.
- // TODO: Remove this workaround once BDN is implemented
- if let Err(err) = res {
- println!("WARN: {}", err);
- }
-
- Ok(())
+ // Verify the aggregate signature (propagate errors by default).
+ // If you need to bypass during development, enable the "insecure_skip_sig_verify" feature.
+ #[cfg(feature = "insecure_skip_sig_verify")]
+ {
+ if let Err(err) = verifier.verify_aggregate(&payload_bytes, &cert.signature, &signers_pk) {
+ eprintln!("WARN (insecure_skip_sig_verify): {}", err);
+ }
+ Ok(())
+ }
+
+ #[cfg(not(feature = "insecure_skip_sig_verify"))]
+ {
+ verifier
+ .verify_aggregate(&payload_bytes, &cert.signature, &signers_pk)
+ .map_err(|e| CertsError::SignatureVerificationFailed {
+ instance: cert.gpbft_instance,
+ error: e.to_string(),
+ })
+ }
# Conflicts:
#	Cargo.toml
#	certs/src/error.rs
#	certs/src/lib.rs
#	gpbft/Cargo.toml
#	gpbft/src/lib.rs
#	gpbft/src/powertable.rs
#	rust-toolchain.toml
Actionable comments posted: 4
🔭 Outside diff range comments (2)
certs/Cargo.toml (1)
25-25: Action: Fix the mismatch — gate the test_utils module behind the "test-utils" feature or remove the feature and all its references.

Verification: gpbft/Cargo.toml defines a feature `test-utils` and other crates enable it, but gpbft/src/lib.rs currently exposes `pub mod test_utils;` unconditionally, so the feature flag is ineffective/confusing.

Files to address:
- gpbft/src/lib.rs — currently `pub mod test_utils;`; suggestion: gate it with `#[cfg(feature = "test-utils")] pub mod test_utils;`
- gpbft/src/chain.rs — references `crate::test_utils::{create_test_tipset, powertable_cid};` — these uses must be gated or moved if you gate the module.
- gpbft/Cargo.toml — defines `[features] test-utils = []` and includes self/dev references enabling the feature; keep or remove based on chosen approach.
- certs/Cargo.toml & certs/src/lib.rs — dev-dep `filecoin-f3-gpbft = { path = "../gpbft", features = ["test-utils"] }` and `filecoin_f3_gpbft::test_utils::powertable_cid` — update consumers if you remove or gate the feature.
- scripts/linters/find_unused_deps.rb — contains a note about enabling test-utils; update if behavior changes.

Recommended choices (pick one):
- Gate the module: add #[cfg(feature = "test-utils")] to the module and gate internal uses (or conditionally compile them), keep the feature in Cargo.toml, and ensure dependent crates enable the feature where needed.
- Remove the feature: delete the test-utils feature and all references (Cargo.toml entries and dependent feature enables) if the module should remain public always.

certs/src/lib.rs (1)
688-724: Tests will panic due to todo!(); implement no-op MockVerifier

The current MockVerifier methods are todo!(), which will panic if called. Implement no-op stubs to avoid panics and make tests deterministic.
```diff
 impl Verifier for MockVerifier {
     type Error = MockVerifierError;

     fn verify(
         &self,
         pub_key: &PubKey,
         msg: &[u8],
         sig: &[u8],
     ) -> std::result::Result<(), Self::Error> {
-        todo!()
+        let _ = (pub_key, msg, sig);
+        Ok(())
     }

     fn aggregate(
         &self,
         pub_keys: &[PubKey],
         sigs: &[Vec<u8>],
     ) -> std::result::Result<Vec<u8>, Self::Error> {
-        todo!()
+        let _ = (pub_keys, sigs);
+        Ok(Vec::new())
     }

     fn verify_aggregate(
         &self,
         payload: &[u8],
         agg_sig: &[u8],
         signers: &[PubKey],
     ) -> std::result::Result<(), Self::Error> {
-        todo!()
+        let _ = (payload, agg_sig, signers);
+        Ok(())
     }
 }
```
♻️ Duplicate comments (6)
certs/src/lib.rs (1)
302-319: Do not ignore signature verification failures; gate bypass behind a feature

Silencing signature verification errors undermines safety and can mask real failures. If you need a temporary bypass for development while BDN is pending, gate it behind a feature and propagate errors by default. Also avoid println! for logs.
Apply this diff:
```diff
-    // Verify the aggregate signature
-    let res = verifier
-        .verify_aggregate(&payload_bytes, &cert.signature, &signers_pk)
-        .map_err(|e| CertsError::SignatureVerificationFailed {
-            instance: cert.gpbft_instance,
-            error: e.to_string(),
-        });
-
-    // Temporarily silencing verification errors
-    // The current BDN implementation uses standard BLS aggregation, causing verification to fail.
-    // This logging allows development to continue.
-    // TODO: Remove this workaround once BDN is implemented
-    if let Err(err) = res {
-        println!("WARN: {}", err);
-    }
-
-    Ok(())
+    // Verify the aggregate signature (propagate errors by default).
+    // If you need to bypass during development, enable the "insecure_skip_sig_verify" feature.
+    #[cfg(feature = "insecure_skip_sig_verify")]
+    {
+        if let Err(err) = verifier.verify_aggregate(&payload_bytes, &cert.signature, &signers_pk) {
+            eprintln!("WARN (insecure_skip_sig_verify): {}", err);
+        }
+        Ok(())
+    }
+
+    #[cfg(not(feature = "insecure_skip_sig_verify"))]
+    {
+        verifier
+            .verify_aggregate(&payload_bytes, &cert.signature, &signers_pk)
+            .map_err(|e| CertsError::SignatureVerificationFailed {
+                instance: cert.gpbft_instance,
+                error: e.to_string(),
+            })
+    }
```
116-143: CBOR: leading-zero semantics and panic on serialization
- The added leading zero in
- The added leading zero in `power_bytes` contradicts canonical big-integer encodings, but you're encoding as a byte string, not a CBOR bignum. Given your CID calibration tests, this likely mirrors go-f3's reference bytes. Keep it if it's necessary for cross-impl compatibility, but please document the rationale with a direct reference to the go-f3 implementation.
- `unwrap()` on `to_vec` can panic. Prefer returning a `Result` to callers instead of panicking inside a library method.

```diff
-    pub fn serialize_cbor(&self) -> Vec<u8> {
+    pub fn serialize_cbor(&self) -> anyhow::Result<Vec<u8>> {
         use serde_cbor::Value;
         let cbor_entries: Vec<Value> = self
             .0
             .iter()
             .map(|entry| {
                 // Convert StoragePower to bytes like Go's big.Int.MarshalCBOR
                 // Go adds a leading zero byte to the big-endian representation
                 let power_bytes = {
                     let mut bytes = entry.power.to_bytes_be().1;
                     bytes.insert(0, 0);
                     bytes
                 };
                 Value::Array(vec![
                     Value::Integer(entry.id as i128),
                     Value::Bytes(power_bytes),
                     Value::Bytes(entry.pub_key.0.clone()),
                 ])
             })
             .collect();
         let cbor_array = Value::Array(cbor_entries);
-        serde_cbor::to_vec(&cbor_array).unwrap()
+        serde_cbor::to_vec(&cbor_array).map_err(|e| anyhow!("CBOR serialization failed: {e}"))
     }
```

Note: If you adopt this signature change, update call sites and tests to handle `Result`. If you choose to keep the current signature, at least replace `unwrap()` with an `expect("...")` carrying context.
145-156: Bounds-check signer indices instead of silently skipping

Silently skipping OOB indices can hide bugs. Return an error when an index is invalid; optionally also use checked addition for safety.
```diff
 pub fn signer_scaled_total(scaled_powers: &[i64], signer_indices: &[u64]) -> anyhow::Result<i64> {
     let mut signer_total = 0;
     for &index in signer_indices {
         let idx = index as usize;
-        if idx < scaled_powers.len() {
-            signer_total += scaled_powers[idx];
-        }
+        if idx >= scaled_powers.len() {
+            return Err(anyhow!(
+                "signer index {} out of bounds (max: {})",
+                idx,
+                scaled_powers.len().saturating_sub(1)
+            ));
+        }
+        signer_total = signer_total
+            .checked_add(scaled_powers[idx])
+            .ok_or_else(|| anyhow!("signer total overflow"))?;
     }
     Ok(signer_total)
 }
```
1-4: Known issue: Incomplete BDN implementationThe implementation acknowledges using standard BLS aggregation instead of proper BDN coefficient weighting, which will cause signature verification failures when compared against go-f3's implementation.
blssig/src/verifier/mod.rs (2)
89-93: Handle RwLock poisoning gracefully and fix potential Copy requirement issueUsing
unwrap()on RwLock operations can cause panics if the lock is poisoned. Additionally, returning*cached_keyrequiresPublicKeyto implementCopy, which may not be the case.Apply this diff to handle lock errors gracefully:
- let cache = self.point_cache.read().unwrap(); - if let Some(cached_key) = cache.get(pub_key_bytes) { - return Ok(*cached_key); - } + let cache = self.point_cache.read() + .map_err(|_| BLSError::CachePoisoned)?; + if let Some(cached_key) = cache.get(pub_key_bytes) { + return Ok(cached_key.clone()); + }Note: This requires adding a new
CachePoisonedvariant toBLSErrorand ensuringPublicKeyimplementsClone.
101-106: Handle write lock poisoning for cache updatesSimilar to the read lock, the write lock should handle poisoning gracefully.
Apply this diff:
```diff
-        let mut cache = self.point_cache.write().unwrap();
-        if cache.len() >= MAX_POINT_CACHE_SIZE {
-            cache.clear();
-        }
-        cache.insert(pub_key_bytes.to_vec(), pub_key);
+        if let Ok(mut cache) = self.point_cache.write() {
+            if cache.len() >= MAX_POINT_CACHE_SIZE {
+                cache.clear();
+            }
+            cache.insert(pub_key_bytes.to_vec(), pub_key);
+        }
+        // Silently ignore cache update failures as they don't affect correctness
```
🧹 Nitpick comments (12)
gpbft/src/lib.rs (1)
29-34: Consider gating test_utils to reduce public API surface

Unless intentional, shipping test utilities as a public, unconditional module can invite accidental use by downstream crates. Suggest gating behind a feature (and/or tests).
Apply this diff:
```diff
-pub mod test_utils;
+#[cfg(any(test, feature = "test-utils"))]
+pub mod test_utils;
```

gpbft/src/payload.rs (1)
86-90: Prefer H256::as_bytes() for consistency

Minor: you compute commitment length via as_bytes() but write via the .0 field. Use as_bytes() in both places for consistency and readability.
```diff
-    buf.extend_from_slice(&self.supplemental_data.commitments.0);
+    buf.extend_from_slice(self.supplemental_data.commitments.as_bytes());
```

certs/src/lib.rs (2)
69-69: Avoid Default on FinalityCertificate to prevent invalid state construction

Deriving Default creates an invalid certificate (empty ECChain, empty signers) that can be accidentally constructed in production code. Prefer explicit construction in tests via helpers.
```diff
-#[derive(Default, Debug, Clone)]
+#[derive(Debug, Clone)]
 pub struct FinalityCertificate {
```

If you still need a test-only default, consider a #[cfg(test)] helper like `fn empty_cert_for_tests() -> FinalityCertificate`.
344-353: Remove #[allow(unused)] if no longer needed

If validate_finality_certificates is used by examples/lightclient paths, the allow(unused) can be dropped to keep linting strict.
gpbft/src/powertable.rs (5)
12-12: Prefer `u16::MAX` for clarity and type intent

Using `u16::MAX` (cast as needed) communicates intent and avoids magic numbers.

```diff
-const MAX_POWER: i64 = 0xffff; // = 65535
+const MAX_POWER: i64 = u16::MAX as i64; // 65535
```
46-59: Avoid silent fallback to 0; guard against zero total
- Returning `0` on `to_i64()` failure can mask bugs. Given your math ensures `scaled <= MAX_POWER`, prefer returning an error if conversion fails.
- Also guard `total <= 0` to avoid a potential division-by-zero if this method is called outside `scaled()`.

```diff
 pub fn scale_power(&self, total: &StoragePower) -> anyhow::Result<i64> {
-    if total < &self.power {
+    if total <= &NumZero::zero() {
+        return Err(anyhow!("total power must be > 0"));
+    }
+    if total < &self.power {
         return Err(anyhow!(
             "total power {} is less than the power of a single participant {}",
             total,
             self.power
         ));
     }
-    let scaled = (&self.power * MAX_POWER) / total;
-    Ok(scaled.to_i64().unwrap_or(0))
+    let scaled = (&self.power * MAX_POWER) / total;
+    Ok(scaled
+        .to_i64()
+        .ok_or_else(|| anyhow!("scaled power doesn't fit in i64"))?)
 }
```
85-114: Optional: consider preserving total via apportioning the rounding remainder

The per-entry floor division will make `scaled_total <= MAX_POWER`. If downstream relies on `scaled_total` being close to `MAX_POWER`, consider distributing the remainder to the largest fractional remainders (Hamilton/LRM method) to keep totals tighter while preserving proportionality. Not critical if your quorum checks only rely on relative sums.
158-171: Avoid potential overflow in `2 * power_total` and improve robustness

Even if current callers use small scaled totals, making this overflow-safe is cheap. Use saturating multiply before division. Also, the `b == 0` branch is not applicable here; consider asserting non-zero `b`.

```diff
 pub fn is_strong_quorum(power: i64, power_total: i64) -> bool {
-    power >= div_ceil(2 * power_total, 3)
+    let two_total = power_total.saturating_mul(2);
+    power >= div_ceil(two_total, 3)
 }

 /// Integer division with ceiling (rounds up)
 fn div_ceil(a: i64, b: i64) -> i64 {
-    if b == 0 {
-        return 0;
-    }
+    debug_assert!(b != 0, "div_ceil divisor must be non-zero");
     let quo = a / b;
     let rem = a % b;
     if rem != 0 { quo + 1 } else { quo }
 }
```
240-271: Parsing power as u64 may overflow on real datasets

Real network power values can exceed `u64`. Consider parsing as big integers to avoid truncation/overflow, then converting to `StoragePower`.

```diff
-        let power_str = entry_json["Power"].as_str().expect("invalid Power");
-        let power = power_str.parse::<u64>().expect("failed to parse Power");
+        let power_str = entry_json["Power"].as_str().expect("invalid Power");
+        let power = num_bigint::BigUint::parse_bytes(power_str.as_bytes(), 10)
+            .expect("failed to parse Power");
         let pubkey_b64 = entry_json["PubKey"].as_str().expect("invalid PubKey");
         let pubkey_bytes = base64::prelude::BASE64_STANDARD
             .decode(pubkey_b64)
             .expect("failed to decode base64 public key");
         entries.push(PowerEntry {
             id,
-            power: StoragePower::from(power),
+            power: StoragePower::from(power),
             pub_key: PubKey(pubkey_bytes),
         });
```

If `StoragePower` doesn't implement `From<BigUint>`, add a constructor or helper in `StoragePower` for decimal strings or `BigUint`. Also add `use num_bigint::BigUint;` at the top of the tests module (or locally in the function).
39-41: Improve error context for signature deserialization failures

The current error mapping loses the original error details that could be helpful for debugging.
Consider enhancing the error to include the original error message:
```diff
-    let signature =
-        Signature::from_bytes(sig_bytes).map_err(|_| BLSError::SignatureDeserialization)?;
+    let signature = Signature::from_bytes(sig_bytes)
+        .map_err(|e| BLSError::SignatureDeserialization(format!("Failed to deserialize signature: {}", e)))?;
```

Note: This would require updating the `BLSError::SignatureDeserialization` variant to accept a String parameter.
102-104: Consider using an LRU cache instead of clearing the entire cache
Clearing the entire cache when it reaches the size limit is inefficient and can cause performance degradation, since all cached keys then need to be re-deserialized. Consider an LRU (Least Recently Used) cache implementation that evicts the least recently accessed entries instead of clearing everything; this maintains better cache hit rates and more predictable performance. The `lru` crate provides a good implementation for this purpose (sketch below).
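A minimal sketch of what that could look like with the `lru` crate behind a `parking_lot::Mutex`; the wrapper type, method names, and generic value type here are illustrative, not the verifier's actual API:

```rust
use std::num::NonZeroUsize;

use lru::LruCache;
use parking_lot::Mutex;

/// Hypothetical LRU-backed cache for deserialized points: evicts the least
/// recently used entry instead of dropping the whole map at a size cap.
struct PointCache<V> {
    inner: Mutex<LruCache<Vec<u8>, V>>,
}

impl<V: Clone> PointCache<V> {
    fn new(capacity: usize) -> Self {
        let cap = NonZeroUsize::new(capacity).expect("capacity must be non-zero");
        Self { inner: Mutex::new(LruCache::new(cap)) }
    }

    /// Returns the cached value for `bytes`, deserializing and inserting on a miss.
    fn get_or_insert_with<E>(
        &self,
        bytes: &[u8],
        deserialize: impl FnOnce(&[u8]) -> Result<V, E>,
    ) -> Result<V, E> {
        let key = bytes.to_vec();
        let mut cache = self.inner.lock();
        if let Some(v) = cache.get(&key) {
            return Ok(v.clone());
        }
        let v = deserialize(bytes)?;
        cache.put(key, v.clone());
        Ok(v)
    }
}
```

An LRU keeps the hottest public keys resident instead of periodically paying a full re-deserialization cost after a wholesale clear.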
62-67: Consider defining constants for BLS key and signature sizes
Magic numbers for key (48) and signature (96) sizes are repeated throughout the code. Defining constants would improve maintainability. Add these constants at the module level:

+/// BLS public key size in bytes (G1 point compressed)
+const BLS_PUBLIC_KEY_SIZE: usize = 48;
+
+/// BLS signature size in bytes (G2 point compressed)
+const BLS_SIGNATURE_SIZE: usize = 96;
+
 impl BLSVerifier {

Then update the validation checks:

-    if pub_key.0.len() != 48 {
+    if pub_key.0.len() != BLS_PUBLIC_KEY_SIZE {
        return Err(BLSError::InvalidPublicKeyLength(pub_key.0.len()));
    }
-    if sig.len() != 96 {
+    if sig.len() != BLS_SIGNATURE_SIZE {
        return Err(BLSError::InvalidSignatureLength(sig.len()));
    }
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (16)
- Cargo.toml (2 hunks)
- blssig/Cargo.toml (1 hunks)
- blssig/src/bdn/mod.rs (1 hunks)
- blssig/src/verifier/mod.rs (1 hunks)
- blssig/src/verifier/tests.rs (1 hunks)
- certs/Cargo.toml (1 hunks)
- certs/src/error.rs (1 hunks)
- certs/src/lib.rs (17 hunks)
- gpbft/Cargo.toml (1 hunks)
- gpbft/src/chain.rs (5 hunks)
- gpbft/src/lib.rs (1 hunks)
- gpbft/src/payload.rs (1 hunks)
- gpbft/src/powertable.rs (5 hunks)
- gpbft/src/test_utils.rs (1 hunks)
- lightclient/examples/certificates_validation.rs (1 hunks)
- lightclient/src/rpc_to_internal.rs (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (8)
- certs/src/error.rs
- gpbft/src/chain.rs
- blssig/Cargo.toml
- blssig/src/verifier/tests.rs
- lightclient/src/rpc_to_internal.rs
- gpbft/Cargo.toml
- lightclient/examples/certificates_validation.rs
- Cargo.toml
🧰 Additional context used
🧬 Code Graph Analysis (6)
blssig/src/verifier/mod.rs (4)
blssig/src/verifier/tests.rs (1)
new(13-19)blssig/src/bdn/mod.rs (1)
new(16-24)lightclient/src/lib.rs (1)
new(26-32)rpc/src/lib.rs (1)
new(14-17)
blssig/src/bdn/mod.rs (2)
blssig/src/verifier/tests.rs (2)
new(13-19)public_key(21-23)blssig/src/verifier/mod.rs (1)
new(53-57)
gpbft/src/powertable.rs (2)
gpbft/src/chain.rs (2)
cid_from_bytes(286-289)new(147-153)gpbft/src/types.rs (1)
new(20-22)
gpbft/src/payload.rs (1)
gpbft/src/chain.rs (4)
new(147-153)serialize_for_signing(84-102)cid_from_bytes(286-289)new_unvalidated(156-158)
certs/src/lib.rs (3)
gpbft/src/chain.rs (2)
cid_from_bytes(286-289)new_unvalidated(156-158)gpbft/src/powertable.rs (2)
is_strong_quorum(159-161)signer_scaled_total(146-156)lightclient/src/lib.rs (1)
power_table(36-39)
gpbft/src/lib.rs (2)
gpbft/src/chain.rs (1)
cid_from_bytes(286-289)gpbft/src/powertable.rs (1)
is_strong_quorum(159-161)
🔇 Additional comments (8)
certs/Cargo.toml (1)
13-21: LGTM: workspace deps + blssig wiring look correct
Adding filecoin-f3-blssig and moving serde/num/hex/ahash to workspace is consistent with the rest of the PR and enables the new BLS + CBOR paths.
gpbft/src/test_utils.rs (1)
26-30: Good move: use PowerEntries::serialize_cbor() for CID derivation
Switching to the crate's CBOR serialization path removes the unwrap and ensures CID derivation matches go-f3.
gpbft/src/lib.rs (1)
42-48: Re-exports are cohesive and simplify downstream imports
Consolidating Tipset/cid_from_bytes and core types (Payload, Justification, power table helpers) improves ergonomics across the workspace.
gpbft/src/payload.rs (2)
45-96: serialize_for_signing matches the go-f3 layout; tests exercise key invariants
The domain separation, field order, and fixed sizes (phase:1, round:8, instance:8, commitments:32, chain key:32, power table CID) align with the reference. Nice capacity preallocation.
198-240: Test vector parity with go-f3 looks correct
The assertions on prefix, phase byte, BE encoding of round/instance, commitments, chain key for empty ECChain, and trailing CID match expectations.
certs/src/lib.rs (3)
37-38: Public Result alias is appropriate
This aligns the crate's error surface and simplifies downstream usage.
269-283: Quorum check before signature verification: good validation order
Computing scaled power and enforcing strong quorum prior to signature verification reduces unnecessary crypto work and surfaces clear errors early.
399-420: ECChain extension path—double-check semantics of extend()
Ensure ECChain::extend returns an Option/Result matching this match arm (Some/None), and that it appends only the suffix as intended. A focused test with two certificates where the second has a non-empty suffix would be helpful.
Would you like me to add a unit test that constructs two certs with a shared base and verifies the final chain length and content?
/// Aggregates signatures using standard BLS aggregation
/// TODO: Implement proper BDN coefficient weighting
pub fn aggregate_sigs(&self, sigs: &[Vec<u8>]) -> Result<Vec<u8>, BLSError> {
    if sigs.len() != self.pub_keys.len() {
        return Err(BLSError::LengthMismatch {
            pub_keys: self.pub_keys.len(),
            sigs: sigs.len(),
        });
    }

    let mut aggregated_point = G2Projective::identity();

    for sig_bytes in sigs.iter() {
        // Deserialize signature to G2 point
        let signature =
            Signature::from_bytes(sig_bytes).map_err(|_| BLSError::SignatureDeserialization)?;
        let sig_point: G2Projective = signature.into();

        // Standard BLS aggregation
        aggregated_point += sig_point;
    }

    // Convert back to signature
    let aggregated_sig: Signature = aggregated_point.into();
    Ok(aggregated_sig.as_bytes().to_vec())
}
🛠️ Refactor suggestion
Consider adding coefficient weighting implementation for proper BDN aggregation
The current implementation performs standard BLS aggregation without the BDN coefficient weighting required for rogue public-key attack prevention. This is a critical security feature that should be prioritized.
The proper BDN implementation requires computing coefficient weights based on the hash of each public key relative to all signers. Would you like me to help implement the proper BDN coefficient weighting to match the go-f3 reference implementation? I can open an issue to track this critical security enhancement.
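For illustration only, a rough sketch of hash-derived per-signer coefficients and weighted point aggregation; the hash choice, encoding, and function names here are assumptions and would have to match go-f3's BDN derivation exactly for interoperability:

```rust
use bls12_381::{G1Projective, Scalar};
use sha3::{Digest, Keccak256};

/// Illustrative BDN-style coefficient for signer `index`, derived from the
/// full concatenated signer set. NOTE: stand-in derivation, not go-f3's.
fn bdn_coefficient(index: usize, pub_keys_concat: &[u8]) -> Scalar {
    let mut wide = [0u8; 64];
    for (i, chunk) in wide.chunks_mut(32).enumerate() {
        let mut h = Keccak256::new();
        h.update([i as u8]);
        h.update((index as u64).to_be_bytes());
        h.update(pub_keys_concat);
        chunk.copy_from_slice(&h.finalize());
    }
    Scalar::from_bytes_wide(&wide)
}

/// Weighted aggregation of public-key points: sum_i (t_i * pk_i).
fn aggregate_weighted(points: &[G1Projective], pub_keys_concat: &[u8]) -> G1Projective {
    points
        .iter()
        .enumerate()
        .map(|(i, p)| *p * bdn_coefficient(i, pub_keys_concat))
        .fold(G1Projective::identity(), |acc, p| acc + p)
}
```

The same coefficients must be applied to both the public keys and the individual signatures so that pairing verification of the weighted aggregate still holds.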
fn verify_aggregate(
    &self,
    payload: &[u8],
    agg_sig: &[u8],
    signers: &[PubKey],
) -> Result<(), Self::Error> {
    if signers.is_empty() {
        return Err(BLSError::EmptyPublicKeys);
    }

    for pub_key in signers {
        if pub_key.0.len() != 48 {
            return Err(BLSError::InvalidPublicKeyLength(pub_key.0.len()));
        }
    }

    let bdn = BDNAggregation::new(signers)?;
    let agg_pub_key = bdn.aggregate_pub_keys()?;
    let agg_pub_key_bytes = PubKey(agg_pub_key.as_bytes().to_vec());
    self.verify_single(&agg_pub_key_bytes, payload, agg_sig)
}
💡 Verification agent
🧩 Analysis chain
Verify the verify_aggregate implementation matches go-f3 expectations
The implementation creates a BDNAggregation instance and uses it to aggregate public keys, then verifies the aggregated signature. Given that BDN aggregation is incomplete (missing coefficient weighting), this verification will likely fail against go-f3 generated signatures.
🏁 Script executed:
#!/bin/bash
# Check if there are any tests or examples that demonstrate compatibility with go-f3
echo "Searching for tests that verify compatibility with go-f3 signatures..."
# Look for test files that might contain go-f3 compatibility tests
fd -e rs test | xargs rg -l "go.f3|go-f3" -i
# Check for any test vectors or known signatures from go-f3
rg -n "test.*vector|known.*signature|reference.*signature" --type rust -C 3
# Look for any TODO or FIXME comments about BDN or go-f3 compatibility
rg -n "TODO|FIXME|HACK|XXX" --type rust -C 2 | rg -B2 -A2 "BDN|go.f3|coefficient"Length of output: 2748
Fix BDN aggregation weighting — verify_aggregate is incompatible with go-f3 signatures
verify_aggregate calls BDNAggregation::aggregate_pub_keys() and then verifies the aggregated key/signature, but the repo shows BDNAggregation currently uses plain BLS aggregation (no coefficient weighting). That will cause verification to fail against go-f3-generated aggregate signatures.
Files to fix / inspect:
- blssig/src/verifier/mod.rs (fn verify_aggregate — lines ~144–164): uses BDNAggregation::aggregate_pub_keys() and verify_single().
- blssig/src/bdn/mod.rs: contains TODOs — "TODO: Implement proper BDN coefficient weighting" for aggregate_pub_keys and aggregate_sigs.
- certs/src/lib.rs (lines ~311–314): contains a workaround/warning for the current BDN behaviour.
- Tests referencing go-f3 vectors: merkle/src/tests.rs, gpbft/src/payload.rs, gpbft/src/chain.rs — these will break until BDN weighting matches go-f3.
Recommended fix:
- Implement proper BDN coefficient weighting in blssig/src/bdn/mod.rs for both public-key and signature aggregation.
- Ensure verify_aggregate uses the weighted aggregation and add/enable compatibility tests using the existing go-f3 test vectors.
Actionable comments posted: 0
🔭 Outside diff range comments (1)
certs/src/lib.rs (1)
688-724: Test double panics: MockVerifier methods are todo!() but are called
`verify_signature` invokes `verify_aggregate`; `todo!()` will panic and fail tests. Implement no-op returns for the mock to keep tests meaningful until real verification is wired.

impl Verifier for MockVerifier {
    type Error = MockVerifierError;
@@
-    ) -> std::result::Result<(), Self::Error> {
-        todo!()
-    }
+    ) -> std::result::Result<(), Self::Error> { Ok(()) }
@@
-    ) -> std::result::Result<Vec<u8>, Self::Error> {
-        todo!()
-    }
+    ) -> std::result::Result<Vec<u8>, Self::Error> { Ok(Vec::new()) }
@@
-    ) -> std::result::Result<(), Self::Error> {
-        todo!()
-    }
+    ) -> std::result::Result<(), Self::Error> { Ok(()) }
}
♻️ Duplicate comments (3)
gpbft/src/powertable.rs (2)
227-243: Make test fixture paths robust to workspace/test CWD
Avoid `../` assumptions; derive paths from CARGO_MANIFEST_DIR so `cargo test` in workspaces is stable.

-    let powertable = load_powertable("../f3initialpowertable_calibrationnet.json");
+    let powertable_path = std::path::Path::new(env!("CARGO_MANIFEST_DIR"))
+        .join("f3initialpowertable_calibrationnet.json");
+    let powertable = load_powertable(powertable_path.to_str().unwrap());
@@
-    let expected_cid = load_cid_from_manifest("../f3manifest_calibrationnet.json");
+    let manifest_path = std::path::Path::new(env!("CARGO_MANIFEST_DIR"))
+        .join("f3manifest_calibrationnet.json");
+    let expected_cid = load_cid_from_manifest(manifest_path.to_str().unwrap());
236-243: Repeat: path-resilient fixture loading for Filecoin test
Same fix as above for the Filecoin fixtures.

-    let powertable = load_powertable("../f3initialpowertable_filecoin.json");
+    let powertable_path = std::path::Path::new(env!("CARGO_MANIFEST_DIR"))
+        .join("f3initialpowertable_filecoin.json");
+    let powertable = load_powertable(powertable_path.to_str().unwrap());
@@
-    let expected_cid = load_cid_from_manifest("../f3manifest_filecoin.json");
+    let manifest_path = std::path::Path::new(env!("CARGO_MANIFEST_DIR"))
+        .join("f3manifest_filecoin.json");
+    let expected_cid = load_cid_from_manifest(manifest_path.to_str().unwrap());

certs/src/lib.rs (1)
302-317: Do not swallow signature verification failures by default; gate with a feature flag
Ignoring `verify_aggregate` errors undermines safety. If you must bypass pending BDN work, hide it behind an explicit feature and propagate errors otherwise.

-    let res = verifier
-        .verify_aggregate(&payload_bytes, &cert.signature, &signers_pk)
-        .map_err(|e| CertsError::SignatureVerificationFailed {
-            instance: cert.gpbft_instance,
-            error: e.to_string(),
-        });
-
-    // Temporarily silencing verification errors
-    // The current BDN implementation uses standard BLS aggregation, causing verification to fail.
-    // This logging allows development to continue.
-    // TODO: Remove this workaround once BDN is implemented
-    if let Err(err) = res {
-        println!("WARN: {}", err);
-    }
-
-    Ok(())
+    // Verify the aggregate signature (propagate by default).
+    // Enable feature "insecure_skip_sig_verify" to bypass during development.
+    #[cfg(feature = "insecure_skip_sig_verify")]
+    {
+        if let Err(err) = verifier.verify_aggregate(&payload_bytes, &cert.signature, &signers_pk) {
+            eprintln!("WARN (insecure_skip_sig_verify): {}", err);
+        }
+        Ok(())
+    }
+    #[cfg(not(feature = "insecure_skip_sig_verify"))]
+    {
+        verifier
+            .verify_aggregate(&payload_bytes, &cert.signature, &signers_pk)
+            .map_err(|e| CertsError::SignatureVerificationFailed {
+                instance: cert.gpbft_instance,
+                error: e.to_string(),
+            })
+    }
🧹 Nitpick comments (5)
gpbft/src/api.rs (1)
7-11: Encode object-safety intent: add 'static bound to Error
std::error::Erroralready implies Display, so that's covered. Adding'staticto the error type improves ergonomics and avoids lifetime issues when boxing or crossing threads.-pub trait Verifier: Send + Sync { +pub trait Verifier: Send + Sync { @@ - type Error: std::error::Error + Send + Sync; + type Error: std::error::Error + Send + Sync + 'static;gpbft/src/powertable.rs (2)
57-59: Avoid silently swallowing impossible conversion cases
to_i64().unwrap_or(0)hides impossible/logic errors. Given the scale is bounded by MAX_POWER (65535), conversion should never fail. Preferexpectto catch unexpected regressions.- Ok(scaled.to_i64().unwrap_or(0)) + Ok(scaled.to_i64().expect("scaled power should fit in i64"))
163-176: Guard against overflow in quorum calculation and define b=0 semantics
2 * power_totalcan theoretically overflow i64. Also, returning 0 ondiv_ceil(_, 0)can hide bugs. Use checked/saturating arithmetic and make zero denom explicit.pub fn is_strong_quorum(power: i64, power_total: i64) -> bool { - power >= div_ceil(2 * power_total, 3) + let two_total = power_total.saturating_mul(2); + power >= div_ceil(two_total, 3) } /// Integer division with ceiling (rounds up) fn div_ceil(a: i64, b: i64) -> i64 { - if b == 0 { - return 0; - } + // Division by zero shouldn't occur; treat as "not a quorum" + if b == 0 { return i64::MAX; } let quo = a / b; let rem = a % b; if rem != 0 { quo + 1 } else { quo } }gpbft/src/chain.rs (2)
84-102: Minor: replace unwraps with expect for clearer failure contextSerialization failures here are unexpected; using
expectwill yield clearer diagnostics if they ever occur.- let cbor_bytes = serde_cbor::to_vec(&cbor_value).unwrap(); + let cbor_bytes = serde_cbor::to_vec(&cbor_value) + .expect("CBOR-encode tipset key for signing"); @@ - buf.extend_from_slice(&tipset_cid.to_bytes()); - buf.extend_from_slice(&self.power_table.to_bytes()); + buf.extend_from_slice(&tipset_cid.to_bytes()); + buf.extend_from_slice(&self.power_table.to_bytes());
270-282: Propagate Merkle errors instead of unwrapping (or use expect with reason)
filecoin_f3_merkle::treereturns a Result; unwrap will panic on unexpected errors. Either propagate or useexpectwith context. If you keep it infallible by contract, add a comment.- let merkle_root = filecoin_f3_merkle::tree(&values).unwrap(); + let merkle_root = filecoin_f3_merkle::tree(&values) + .expect("merkle tree construction should not fail for finite list of tipsets");
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (11)
blssig/src/verifier/mod.rs(1 hunks)certs/src/error.rs(1 hunks)certs/src/lib.rs(17 hunks)gpbft/Cargo.toml(1 hunks)gpbft/src/api.rs(1 hunks)gpbft/src/chain.rs(5 hunks)gpbft/src/powertable.rs(5 hunks)lightclient/src/lib.rs(1 hunks)merkle/Cargo.toml(1 hunks)merkle/src/lib.rs(1 hunks)merkle/src/tests.rs(1 hunks)
🚧 Files skipped from review as they are similar to previous changes (6)
- merkle/Cargo.toml
- certs/src/error.rs
- blssig/src/verifier/mod.rs
- lightclient/src/lib.rs
- merkle/src/tests.rs
- gpbft/Cargo.toml
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2025-08-15T10:38:31.498Z
Learnt from: moshababo
PR: ChainSafe/rust-f3#27
File: gpbft/src/powertable.rs:124-130
Timestamp: 2025-08-15T10:38:31.498Z
Learning: In the F3 GPBFT implementation's PowerEntries CBOR serialization, a leading zero byte is intentionally added to the big-endian representation of StoragePower values (via bytes.insert(0, 0)) for compatibility with the Go F3 reference implementation. This is validated by tests like test_filecoin_initial_powertable_cid which compare generated CIDs against expected values from manifest files. Removing this leading zero would break cross-implementation compatibility.
Applied to files:
gpbft/src/powertable.rs
📚 Learning: 2025-08-15T10:20:43.033Z
Learnt from: moshababo
PR: ChainSafe/rust-f3#27
File: certs/src/error.rs:0-0
Timestamp: 2025-08-15T10:20:43.033Z
Learning: In the F3 GPBFT implementation, power values are scaled to a bounded range (0..MAX_POWER where MAX_POWER = 65535) using the power table scaling system. Scaled power values are represented as i64 and used in quorum calculations and error reporting. The InsufficientPower error variant correctly uses i64 for signer_power and total_power since these represent scaled values, not the original StoragePower BigInt values.
Applied to files:
gpbft/src/powertable.rscerts/src/lib.rs
🧬 Code Graph Analysis (3)
gpbft/src/powertable.rs (1)
gpbft/src/chain.rs (2)
cid_from_bytes(286-289)new(147-153)
certs/src/lib.rs (2)
gpbft/src/chain.rs (2)
cid_from_bytes(286-289)new_unvalidated(156-158)gpbft/src/powertable.rs (2)
is_strong_quorum(164-166)signer_scaled_total(146-161)
gpbft/src/chain.rs (3)
gpbft/src/payload.rs (1)
new(29-43)merkle/src/lib.rs (1)
tree(23-30)gpbft/src/test_utils.rs (1)
create_test_tipset(8-16)
🔇 Additional comments (6)
gpbft/src/powertable.rs (1)
116-143: Keep the leading-zero byte — required for cross-impl compatibility
The inserted 0x00 prefix for big-endian power bytes matches the Go reference and is validated by CID tests. Good to keep.
gpbft/src/chain.rs (1)
520-560: Great test vector coverage for signing layout
The test mirrors go-f3 expectations and asserts exact byte layout and lengths. This is valuable for cross-compatibility.
merkle/src/lib.rs (2)
32-40: Portable depth calculation: good fix
Using `usize::BITS` removes the 64-bit assumption. Looks solid.
42-75: Hasher reuse across recursion is correct here
Use of `finalize_reset` before each return ensures the shared hasher's state is clean between subtree computations. Logic matches the intended leaf/internal markers and ZERO_DIGEST padding for empty branches.
certs/src/lib.rs (2)
381-383: Signature verification invoked before power-table CID check — correct ordering
Verifying quorum and signature prior to trusting power table deltas is the right sequence. Good.
269-272: CertsError::SerializationError exists — no action required
Verified: certs/src/error.rs defines `SerializationError(Cow<'static, str>)` (line 37), and the mapping in certs/src/lib.rs (lines 269–271) is valid.
- certs/src/error.rs:37 — `SerializationError(Cow<'static, str>)`
- certs/src/lib.rs:269–271 — `.scaled().map_err(|e| CertsError::SerializationError(e.to_string().into()))?;`
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
lightclient/Cargo.toml (1)
9-23: Ensure Tokio enablesnetandtimefor JSON-RPC over HTTP/WSOur workspace’s root
Cargo.tomlcurrently only enablesrt-multi-threadandmacroson Tokio, but thejsonrpsee = { version = "0.24", features = ["ws-client", "http-client"] }clients rely ontokio::netfor networking andtokio::timefor timeout handling. Without these features, RPC calls will fail to compile.• File to update:
Cargo.toml(workspace root)
• Section:[workspace.dependencies]
• Change required:[workspace.dependencies] - tokio = { version = "1", features = ["rt-multi-thread", "macros"] } + tokio = { version = "1", features = ["rt-multi-thread", "macros", "net", "time"] }After applying this, please verify that CI builds succeed for the lightclient and RPC crates.
♻️ Duplicate comments (2)
blssig/src/verifier/mod.rs (1)
144-164: BDN weighting likely incomplete — verify_aggregate may not match go-f3 aggregates yet.verify_aggregate builds an aggregated public key via BDNAggregation::aggregate_pub_keys() and then verifies against agg_sig. If BDNAggregation still uses plain aggregation without BDN coefficients, verification against go-f3-generated signatures will fail. This aligns with the PR note that BDN is pending and errors are currently tolerated upstream.
Action:
- Either wire proper BDN coefficient weighting into BDNAggregation (pub keys and sigs), or
- Keep verify_aggregate gated/feature-flagged with a clear error explaining the gap to avoid silent confusion.
I can help port the weighting step from go-f3 once you confirm the desired variant (basic, message augmentation, or proof-of-possession).
certs/src/lib.rs (1)
302-319: Do not ignore signature verification failures (gate the bypass behind a feature flag)Swallowing
verify_aggregatefailures and returningOk(())undermines security of the light client. If you must temporarily bypass due to missing BDN, gate it behind a clearly named feature and propagate errors by default. Also replaceprintln!with structured logging.- // Verify the aggregate signature - let res = verifier - .verify_aggregate(&payload_bytes, &cert.signature, &signers_pk) - .map_err(|e| CertsError::SignatureVerificationFailed { - instance: cert.gpbft_instance, - error: e.to_string(), - }); - - // Temporarily silencing verification errors - // The current BDN implementation uses standard BLS aggregation, causing verification to fail. - // This logging allows development to continue. - // TODO: Remove this workaround once BDN is implemented - if let Err(err) = res { - println!("WARN: {}", err); - } - - Ok(()) + // Verify the aggregate signature (propagate errors by default). + #[cfg(feature = "insecure_skip_sig_verify")] + { + if let Err(err) = verifier.verify_aggregate(&payload_bytes, &cert.signature, &signers_pk) { + tracing::warn!(target:"certs", %err, instance=%cert.gpbft_instance, "aggregate signature verification failed (insecure_skip_sig_verify enabled)"); + } + Ok(()) + } + #[cfg(not(feature = "insecure_skip_sig_verify"))] + { + verifier + .verify_aggregate(&payload_bytes, &cert.signature, &signers_pk) + .map_err(|e| CertsError::SignatureVerificationFailed { + instance: cert.gpbft_instance, + error: e.to_string(), + }) + }Follow-up:
- Add
tracingto the workspace (see Cargo.toml review).- Define the
insecure_skip_sig_verifyfeature in certs/Cargo.toml and document it as unsafe for production.
🧹 Nitpick comments (12)
blssig/src/verifier/mod.rs (5)
49-51: Make cache size configurable and consider LRU instead of full clear.A hardcoded cap with clear() can cause cache thrash under load profiles slightly above 10k unique keys. Prefer an LRU (e.g., lru or moka) or at least a configurable cap (env/cargo feature).
-const MAX_POINT_CACHE_SIZE: usize = 10_000; +// Consider making this configurable (env var or feature) and/or switching to an LRU cache. +const MAX_POINT_CACHE_SIZE: usize = 10_000;
119-142: Empty-input error variant is slightly misleading; consider a dedicated error for empty signatures.aggregate() returns EmptyPublicKeys when either list is empty. That’s accurate for pub_keys but ambiguous when only sigs is empty. Consider adding EmptySignatures (or EmptyInputs) for clarity, or include which side was empty in the error.
- if pub_keys.is_empty() || sigs.is_empty() { - return Err(BLSError::EmptyPublicKeys); - } + if pub_keys.is_empty() { + return Err(BLSError::EmptyPublicKeys); + } + if sigs.is_empty() { + // Option A: introduce a new variant + // return Err(BLSError::EmptySignatures); + // Option B: reuse LengthMismatch with explicit zero + return Err(BLSError::LengthMismatch { pub_keys: pub_keys.len(), sigs: 0 }); + }
39-41: Optional: consider DoS-resistant hashmap for untrusted inputs.ahash is fast but not collision-resistant. If pub_key bytes can be adversarial, consider std::collections::HashMap (SipHash) or aahash with randomized seeds per map (already default) plus small MAX_POINT_CACHE_SIZE mitigates risk. Low priority, just flagging.
59-83: Minor: avoid small heap allocations in single-verify path.Not critical, but you can avoid allocating two Vecs per call by using array/slice literals if PublicKey is Copy. If not Copy, current approach is fine.
- let messages = vec![msg]; - let public_keys = vec![pub_key]; - match verify_messages(&signature, &messages, &public_keys) { + let messages = [msg]; + let public_keys = [pub_key]; + match verify_messages(&signature, &messages, &public_keys) { true => Ok(()), false => Err(BLSError::SignatureVerificationFailed), }If PublicKey isn’t Copy, keep the Vec path to avoid extra cloning.
10-12: Please ensure aggregated verification has targeted tests.You already include a tests module; make sure there are vectors covering:
- Empty signers,
- Length mismatches,
- Valid aggregate with identical message,
- Negative cases (perturbed agg_sig, wrong payload).
I can draft tests if you want.
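A minimal sketch of the first two cases listed above, assuming `BLSVerifier::new()` takes no arguments and that the `Verifier` trait is in scope for `verify_aggregate` (import paths are assumptions):

```rust
use filecoin_f3_blssig::{BLSError, BLSVerifier};
use filecoin_f3_gpbft::{PubKey, Verifier};

#[test]
fn verify_aggregate_rejects_empty_and_malformed_signers() {
    let verifier = BLSVerifier::new();
    let payload = b"test payload";
    let dummy_sig = vec![0u8; 96];

    // An empty signer set must be rejected outright.
    assert!(matches!(
        verifier.verify_aggregate(payload, &dummy_sig, &[]),
        Err(BLSError::EmptyPublicKeys)
    ));

    // A public key with the wrong length must be rejected before any crypto work.
    let bad_key = PubKey(vec![0u8; 10]);
    assert!(matches!(
        verifier.verify_aggregate(payload, &dummy_sig, &[bad_key]),
        Err(BLSError::InvalidPublicKeyLength(10))
    ));
}
```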
Cargo.toml (1)
12-38: Add a unified logging facade to the workspaceSeveral crates print warnings directly (see certs) and others will likely need structured logs. Add
tracing(and optionallytracing-subscriberunder dev/bench) to[workspace.dependencies]so all members use consistent logging, then replaceprintln!withtracing::{warn,info,debug}.[workspace.dependencies] +tracing = "0.1" +# In binaries/examples/tests you can set up a subscriber: +# tracing-subscriber = { version = "0.3", features = ["env-filter"] }merkle/src/lib.rs (2)
64-69: Minor: prefer explicit usize literal to avoid type inference surprises
While type inference will coerce `1 << (depth - 1)` to `usize` via `min`, making the literal `1usize` avoids accidental i32 shifts if this expression is refactored.

-    let split = std::cmp::min(1 << (depth - 1), values.len());
+    let split = std::cmp::min(1usize << (depth - 1), values.len());
22-30: Optional: accept&[&[u8]]to avoid intermediate Vec allocations
tree(values: &[Vec<u8>])forces callers to allocate Vecs for leaves. If you want broader usability and fewer allocations, consider a more flexible signature like&[&[u8]]. This would be a breaking change, so only consider if the API is still fluid.-pub fn tree(values: &[Vec<u8>]) -> anyhow::Result<MerkleDigest> { +pub fn tree(values: &[&[u8]]) -> anyhow::Result<MerkleDigest> { … - build_tree(depth, values, &mut Keccak256::new()) + build_tree(depth, values, &mut Keccak256::new())And propagate
&[&[u8]]throughbuild_tree. If you’d like, I can prep a PR to convert current call sites.certs/src/lib.rs (2)
388-397: CID derivation path change is sound; avoid lossy conversions in errorsUsing
serialize_cbor()aligns with the new PowerEntries CBOR path. In the error, consider avoiding.into()on CIDs (unless it’s a cheap/clear conversion), and include both expected/actual as strings to aid debugging.- return Err(CertsError::IncorrectPowerDiff { - instance: cert.gpbft_instance, - expected: power_table_cid.into(), - actual: cert.supplemental_data.power_table.into(), - }); + return Err(CertsError::IncorrectPowerDiff { + instance: cert.gpbft_instance, + expected: power_table_cid, + actual: cert.supplemental_data.power_table, + });(Adjust the error type if needed; keeping Cid typed can be preferable.)
693-729: MockVerifier is sufficient; consider a failing path test for quorum/signatureYou’ve covered the happy path. Add a unit test that exercises:
- insufficient quorum (expect InsufficientPower), and
- a verifier that returns Err (expect SignatureVerificationFailed when the insecure feature is OFF).
I can add these tests with a feature-guard to assert the two behaviors consistently.
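The quorum half of that failing path ultimately reduces to the strong-quorum predicate; a tiny boundary test of it, assuming `is_strong_quorum(power, power_total)` is reachable via the gpbft re-exports mentioned in this review:

```rust
use filecoin_f3_gpbft::is_strong_quorum;

#[test]
fn strong_quorum_boundaries() {
    // With a scaled total of 100, the strong-quorum threshold is ceil(200 / 3) = 67.
    assert!(!is_strong_quorum(66, 100)); // just below threshold: no quorum
    assert!(is_strong_quorum(67, 100)); // at threshold: quorum
    assert!(is_strong_quorum(100, 100)); // full power table always forms a quorum
}
```

A companion test for the signature path would plug in a verifier whose `verify_aggregate` returns an error and assert `SignatureVerificationFailed` when the insecure feature is off.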
lightclient/Cargo.toml (1)
9-23: Add logging and document feature expectations to align with certsSince the certs crate will likely use
tracing, add it here for consumers/examples. Also, if lightclient binaries/examples can toggle signature bypass for development, document theinsecure_skip_sig_verifyfeature at the top-level README and ensure examples pass--features certs/insecure_skip_sig_verifyonly in dev scenarios.[dependencies] filecoin-f3-blssig = { path = "../blssig" } filecoin-f3-certs = { path = "../certs" } filecoin-f3-gpbft = { path = "../gpbft" } filecoin-f3-rpc = { path = "../rpc" } anyhow.workspace = true base64 = { workspace = true } keccak-hash.workspace = true num.workspace = true num-bigint = { workspace = true } serde.workspace = true serde_json.workspace = true tokio.workspace = true +tracing.workspace = trueMakefile (1)
37-40: Align clippy and check targets and consider adding PHONY
Minor: add `.PHONY` for targets and keep clippy/check target matrices consistent (same crate set/build triplets) to avoid drift.

+.PHONY: check test doc bench lint-all deny deny-advisories lint-fmt fmt clippy license spellcheck install-lint-tools install-lint-tools-ci
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (14)
- Cargo.toml (2 hunks)
- Makefile (2 hunks)
- blssig/Cargo.toml (1 hunks)
- blssig/src/verifier/mod.rs (1 hunks)
- certs/Cargo.toml (1 hunks)
- certs/src/lib.rs (20 hunks)
- gpbft/Cargo.toml (1 hunks)
- lightclient/Cargo.toml (1 hunks)
- lightclient/examples/certificates_validation.rs (1 hunks)
- lightclient/src/rpc_to_internal.rs (1 hunks)
- merkle/Cargo.toml (1 hunks)
- merkle/src/lib.rs (1 hunks)
- merkle/src/tests.rs (1 hunks)
- rpc/Cargo.toml (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (8)
- merkle/Cargo.toml
- blssig/Cargo.toml
- lightclient/examples/certificates_validation.rs
- merkle/src/tests.rs
- certs/Cargo.toml
- rpc/Cargo.toml
- lightclient/src/rpc_to_internal.rs
- gpbft/Cargo.toml
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-08-15T10:20:43.061Z
Learnt from: moshababo
PR: ChainSafe/rust-f3#27
File: certs/src/error.rs:0-0
Timestamp: 2025-08-15T10:20:43.061Z
Learning: In the F3 GPBFT implementation, power values are scaled to a bounded range (0..MAX_POWER where MAX_POWER = 65535) using the power table scaling system. Scaled power values are represented as i64 and used in quorum calculations and error reporting. The InsufficientPower error variant correctly uses i64 for signer_power and total_power since these represent scaled values, not the original StoragePower BigInt values.
Applied to files:
certs/src/lib.rs
🧬 Code graph analysis (2)
blssig/src/verifier/mod.rs (4)
certs/src/lib.rs (4)
new(97-124)verify(704-711)aggregate(713-719)verify_aggregate(721-728)lightclient/src/lib.rs (1)
new(26-32)blssig/src/bdn/mod.rs (1)
new(16-24)gpbft/src/api.rs (3)
verify(22-22)aggregate(32-32)verify_aggregate(45-50)
certs/src/lib.rs (3)
gpbft/src/chain.rs (3)
cid_from_bytes(286-289)new_unvalidated(156-158)new(147-153)gpbft/src/powertable.rs (2)
is_strong_quorum(164-166)signer_scaled_total(146-161)gpbft/src/payload.rs (1)
new(29-43)
🔇 Additional comments (9)
blssig/src/verifier/mod.rs (3)
13-29: Error taxonomy is clear and actionable.Good coverage of common failure modes with precise messages; struct variant on LengthMismatch is particularly helpful for debugging.
31-41: Overall verifier design looks solid; caching via parking_lot RwLock is a good choice.Docs align with intended scheme (BLS12-381, G1 PK/G2 SIG, BDN, cache). Using parking_lot avoids poisoning and has lower overhead than std.
85-109: Verify PublicKey’s Copy/Clone semantics before applying refactorI wasn’t able to locate the definition of
bls_signatures::PublicKeyin the repo to confirm whether it implementsCopyorClone. Before proceeding with the suggested refactor, please verify:
- Does
PublicKeyderive or implementCopy?- If not, does it implement
Clone, and is cloning sufficiently cheap for your performance needs?If
PublicKeyis neitherCopynorClone, you’ll need to storeArc<PublicKey>(or another shared pointer) in the cache and return a clonedArc. Otherwise, you can apply the entry-based approach inget_or_cache_public_keyto avoid duplicate deserialization and unnecessary lock contention.File needing attention:
- blssig/src/verifier/mod.rs lines 85–109 (
get_or_cache_public_key)Cargo.toml (1)
9-10: Confirmed MSRV and Edition PinningAll configurations are consistent:
- Cargo.toml specifies
edition = "2024"andrust-version = "1.85.0".- rust-toolchain.toml pins the channel to
"1.85.0", including common components and targets.No further changes required.
merkle/src/lib.rs (1)
22-30: API shape is clear and matches go-f3; returning Result is goodThe public API and ZERO_DIGEST behavior for empty inputs look sound. Reusing a single Keccak256 with finalize_reset is correct and efficient.
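For reference, a minimal usage sketch against the signature discussed in this review (`tree(&[Vec<u8>]) -> anyhow::Result<[u8; 32]>`); `hex` is used only for display and is an assumption about available workspace deps:

```rust
use filecoin_f3_merkle::tree;

fn main() -> anyhow::Result<()> {
    // Leaves are arbitrary byte strings; an empty list maps to ZERO_DIGEST.
    let leaves: Vec<Vec<u8>> = vec![
        b"tipset-0".to_vec(),
        b"tipset-1".to_vec(),
        b"tipset-2".to_vec(),
    ];
    let root = tree(&leaves)?;
    assert_eq!(root.len(), 32); // 32-byte Keccak-256 merkle root
    println!("root = {}", hex::encode(root));
    Ok(())
}
```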
certs/src/lib.rs (3)
262-283: Quorum check and scaling flow look correctScaling first, validating signer indices/power, and then
is_strong_quorummatches the GPBFT power flow. Good guardrails with detailed errors.
381-383: Placement of signature verification is correctVerifying signatures before applying the power-table diffs prevents state from advancing on unauthenticated input. Good ordering.
433-468: Good validations in signer power aggregationBounds checking and zero-effective-power detection on a per-signer basis provide clear diagnostics and safer behavior than relying on external helpers.
Makefile (1)
4-5: WASM check scope is explicit—verify intent to exclude rpc/lightclientYou’ve scoped wasm checks to certs/blssig/merkle/gpbft. If
lightclientandrpcare not intended for wasm, this is fine. If you plan a wasm target (browser light client), consider adding appropriate-pentries and rpc features compatible with wasm.
let signers_pk: Vec<PubKey> = signers
    .iter()
    .map(|&index| power_table[index as usize].pub_key.clone())
    .collect();
🛠️ Refactor suggestion
Avoid panic: use checked indexing when gathering signer pubkeys
Even though signer_scaled_total validates indices, keeping power_table[index as usize] here couples safety to that call. Use checked indexing and return a structured error to prevent latent panics if this code path is reused or refactored.
- let signers_pk: Vec<PubKey> = signers
- .iter()
- .map(|&index| power_table[index as usize].pub_key.clone())
- .collect();
+ let mut signers_pk = Vec::with_capacity(signers.len());
+ for &idx in &signers {
+ let idx = idx as usize;
+ let Some(entry) = power_table.get(idx) else {
+ return Err(CertsError::SignerIndexOutOfBounds {
+ instance: cert.gpbft_instance,
+ signer_index: idx,
+ power_table_size: power_table.len(),
+ });
+ };
+ signers_pk.push(entry.pub_key.clone());
+    }
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
let signers_pk: Vec<PubKey> = signers
    .iter()
    .map(|&index| power_table[index as usize].pub_key.clone())
    .collect();

// Gather signer public keys with bounds-checked indexing
let mut signers_pk = Vec::with_capacity(signers.len());
for &idx in &signers {
    let idx = idx as usize;
    let Some(entry) = power_table.get(idx) else {
        return Err(CertsError::SignerIndexOutOfBounds {
            instance: cert.gpbft_instance,
            signer_index: idx,
            power_table_size: power_table.len(),
        });
    };
    signers_pk.push(entry.pub_key.clone());
}
🤖 Prompt for AI Agents
In certs/src/lib.rs around lines 297 to 301, the current code directly indexes
power_table with index as usize which can panic if indices are ever invalid;
change the mapping to use checked indexing (e.g., power_table.get(index as
usize)) and convert the map into a fallible operation that returns a
Result<Vec<PubKey>, Error>, returning a structured error (with context about the
bad signer index) if any get() returns None; collect the pubkeys only on success
and propagate the error upward instead of allowing a panic.
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
gpbft/Cargo.toml (1)
31-33: Converge getrandom to v0.3 across the workspaceOur Cargo.lock currently contains both
getrandom 0.2.16andgetrandom 0.3.3, which means some dependencies still pull the older 0.2 series. This will bloat the binary and risk feature mismatches (the wasm target only haswasm_jssupport on v0.3).Please eliminate the v0.2.x entry by choosing one of the following approaches:
In the workspace root
Cargo.toml, add a crates.io patch override to force everyone onto v0.3.3:[patch.crates-io] getrandom = { version = "0.3.3", default-features = false, features = ["wasm_js"] }Identify and upgrade any crates that transitively depend on
getrandom = "0.2"(e.g. olderrand_coreversions) to newer releases that depend ongetrandom = "0.3".After applying, run
cargo update -p getrandomand verify viacargo treethat onlygetrandom v0.3.xremains.
♻️ Duplicate comments (2)
gpbft/src/justification.rs (1)
7-15: Strengthen BLS signature typing and add lightweight derives.Using Vec for an aggregate signature hides length/format invariants and makes accidental misuse easier (incorrect length, mutated buffer). Prefer a strongly-typed wrapper with compile-time size. Since this struct is newly added in this PR, you can still refine the public surface now with low churn. Note: a prior review already raised this; adding concrete migration steps and diffs here.
Apply this diff to the struct (adds derives and swaps Vec for a new BlsSignature newtype):
-/// Represents a justification for a decision in the GPBFT consensus protocol -pub struct Justification { +/// Represents a justification for a decision in the GPBFT consensus protocol +#[derive(Clone, Debug, PartialEq, Eq)] +#[cfg_attr(feature = "serde", derive(serde::Serialize, serde::Deserialize))] +pub struct Justification { /// The payload that is signed by the signature pub vote: Payload, /// Indexes in the base power table of the signers (`bitset`) pub signers: BitField, - /// BLS aggregate signature of signers - pub signature: Vec<u8>, + /// BLS aggregate signature of signers (compressed G2, 96 bytes) + pub signature: BlsSignature, }Add the newtype and conversions (place in a shared types/crypto module, or above this struct temporarily — adjust the path accordingly):
// Consider crate::types or crate::crypto; shown inline for brevity. #[derive(Clone, Copy, Debug, PartialEq, Eq, Hash)] pub struct BlsSignature(pub [u8; 96]); impl BlsSignature { pub const LEN: usize = 96; } impl TryFrom<&[u8]> for BlsSignature { type Error = &'static str; fn try_from(bytes: &[u8]) -> Result<Self, Self::Error> { if bytes.len() != Self::LEN { return Err("invalid BLS signature length"); } let mut arr = [0u8; 96]; arr.copy_from_slice(bytes); Ok(BlsSignature(arr)) } } impl TryFrom<Vec<u8>> for BlsSignature { type Error = &'static str; fn try_from(bytes: Vec<u8>) -> Result<Self, Self::Error> { BlsSignature::try_from(bytes.as_slice()) } } impl From<BlsSignature> for Vec<u8> { fn from(sig: BlsSignature) -> Self { sig.0.to_vec() } } impl AsRef<[u8]> for BlsSignature { fn as_ref(&self) -> &[u8] { &self.0 } }Optional serde hint (if you serialize Justification): consider serializing BlsSignature as bytes (or base64 for JSON) via a custom serde module, or keep serialization at RPC boundaries to avoid leaking internal representation.
Verification script to assess impact and conversion sites (adjust patterns as needed):
#!/bin/bash set -euo pipefail echo "=== Find Justification.signature field references ===" rg -nP '\bJustification\b' -C3 || true rg -nP '\.signature\b' -C2 || true echo echo "=== Find APIs expecting Vec<u8> for signatures ===" rg -nP --type=rust -C2 'signature\s*:\s*Vec<u8>|&\s*\[\s*Vec<u8>\s*\]' || true rg -nP --type=rust -C2 'aggregate\w*\([^\)]*Vec<u8>' || true echo echo "=== Find serde usage around Justification (derive or manual impls) ===" rg -nP --type=rust -C2 'derive\([^\)]*(Serialize|Deserialize)' || true echo echo "=== Suggested mechanical changes ===" echo "• Where APIs still use Vec<u8>, insert TryFrom<Vec<u8>> to BlsSignature at call sites." echo "• Where aggregators return Vec<u8>, convert via BlsSignature::try_from(ret)?"If swapping the field type is too invasive right now, minimally gate misuse by adding a constructor that validates length and prefer using it everywhere; but keeping fields pub makes it easy to bypass — see follow-up comment.
lightclient/src/rpc_to_internal.rs (1)
91-107: Integer overflow and potential huge allocation when expanding RLE signers to indices.
- The additions
bit_position + offsetandbit_position += run_lengthcan overflowu64.- Extremely large set runs will attempt to materialize each bit index into
indices, potentially causing O(n) memory blowups.At minimum, make the arithmetic checked and fail early on overflow. Optionally, pre-compute the total number of set bits to reserve capacity.
fn convert_bitfield(signers: Vec<u64>) -> anyhow::Result<BitField> { // RPC receives RLE run lengths as Vec<u64>, but fvm_ipld_bitfield expects // individual bit indices, so converting runs to indices is needed - let mut indices = Vec::new(); - let mut bit_position = 0; + let mut indices: Vec<u64> = Vec::new(); + let mut bit_position: u64 = 0; + + // Optional: reserve capacity for the number of set bits (odd runs) + // This avoids repeated reallocations and catches usize overflow early. + let set_total = signers + .iter() + .enumerate() + .filter(|(i, _)| i % 2 == 1) + .try_fold(0u64, |acc, (_, &len)| acc.checked_add(len)) + .ok_or_else(|| anyhow::anyhow!("bitfield: set run length overflow"))?; + indices + .try_reserve(set_total.try_into().map_err(|_| anyhow::anyhow!("bitfield: set bits exceed addressable memory"))?) + .context("bitfield: failed to reserve indices capacity")?; for (i, &run_length) in signers.iter().enumerate() { if i % 2 == 1 { // Odd indices are set runs - for offset in 0..run_length { - indices.push(bit_position + offset); - } + for offset in 0..run_length { + let idx = bit_position + .checked_add(offset) + .ok_or_else(|| anyhow::anyhow!("bit index overflow while converting RLE to indices"))?; + indices.push(idx); + } } - bit_position += run_length; + bit_position = bit_position + .checked_add(run_length) + .ok_or_else(|| anyhow::anyhow!("bit position overflow while advancing runs"))?; } BitField::try_from_bits(indices).context("failed to create bitfield") }If the underlying bitfield crate supports constructing from ranges, consider building from runs directly instead of expanding to all indices to reduce memory/time overhead.
🧹 Nitpick comments (13)
gpbft/src/justification.rs (2)
11-12: Doc wording nit: Indices vs. Indexes.Use “indices” for index plural in technical docs.
- /// Indexes in the base power table of the signers (`bitset`) + /// Indices in the base power table of the signers (`bitset`)
7-15: Consider constraining construction to enforce invariants.Public fields allow constructing invalid instances (e.g., empty signers, malformed signature length if you keep Vec). Making fields pub(crate) and providing a validated constructor reduces footguns without much boilerplate. Keep as optional if churn is undesirable in this PR.
Example pattern:
impl Justification { pub fn new(vote: Payload, signers: BitField, signature: BlsSignature) -> Result<Self, &'static str> { // Example sanity checks; expand as needed. if signers.is_empty() { return Err("signers bitset must not be empty"); } Ok(Self { vote, signers, signature }) } }If you decide to keep Vec for now, use
signature: Vec<u8>in the constructor and validatelen()==96, then preferJustification::new(...)at all creation sites.lightclient/src/rpc_to_internal.rs (7)
21-40: Validate signature length and clarify treatment ofcommitmentsin supplemental data.
- The signature is base64-decoded but its length isn’t validated. If you expect BLS12-381 compressed signatures (commonly 96 bytes), add a length check to fail early on malformed inputs.
SupplementalData.commitmentsis set toDefault::default(). If RPC provides commitments and they matter for verification, consider populating them or add a comment documenting why it’s intentionally empty.pub fn convert_certificate( val: rpc::FinalityCertificate, ) -> anyhow::Result<certs::FinalityCertificate> { - Ok(certs::FinalityCertificate { + // Decode and validate aggregate signature + let signature = general_purpose::STANDARD + .decode(&val.signature) + .context("FinalityCertificate.signature: invalid base64")?; + // Adjust 96 if the scheme differs + const BLS_SIG_LEN: usize = 96; + if signature.len() != BLS_SIG_LEN { + anyhow::bail!( + "FinalityCertificate.signature must be {} bytes, got {}", + BLS_SIG_LEN, + signature.len() + ); + } + + Ok(certs::FinalityCertificate { gpbft_instance: val.instance, ec_chain: convert_chain(val.ec_chain)?, supplemental_data: SupplementalData { - commitments: Default::default(), + commitments: Default::default(), // TODO: confirm spec: should this be populated from RPC? power_table: Cid::try_from(val.supplemental_data.power_table.cid.as_str())?, }, power_table_delta: val .power_table_delta .into_iter() .map(convert_power_table_delta) .collect::<anyhow::Result<Vec<_>, _>>()?, signers: convert_bitfield(val.signers)?, - signature: general_purpose::STANDARD.decode(&val.signature)?, // base64 -> bytes + signature, }) }If commitments are intentionally omitted, consider adding a brief comment referencing the spec/issue for future readers.
42-57: Minor: simpler, bounds-safe split of base and suffix.Use
split_firstto avoid indexing and an extra temporary.fn convert_chain(val: Vec<rpc::ECTipSet>) -> anyhow::Result<ECChain> { if val.is_empty() { return Err(anyhow::anyhow!("empty ec_chain")); } - let tipsets = val + let tipsets = val .into_iter() .map(convert_tipset) .collect::<anyhow::Result<Vec<_>, _>>(); let tipsets = tipsets?; - let base_native = tipsets[0].clone(); - let suffix = tipsets[1..].to_vec(); + let (base_native, suffix) = tipsets + .split_first() + .map(|(base, rest)| (base.clone(), rest.to_vec())) + .ok_or_else(|| anyhow::anyhow!("empty ec_chain"))?; ECChain::new(base_native, suffix).context("failed to create ECChain") }
12-19: Add decoding context and alignPubKeyconstruction style.Add error context to base64 decoding for easier debugging, and consider using a single construction style for
PubKey(tuple struct vs.new) across the file for consistency.pub fn convert_power_entry(val: rpc::PowerEntry) -> anyhow::Result<filecoin_f3_gpbft::PowerEntry> { Ok(filecoin_f3_gpbft::PowerEntry { id: val.id, power: val.power, - pub_key: PubKey(general_purpose::STANDARD.decode(&val.pub_key)?), // base64 -> bytes -> PubKey + pub_key: PubKey( + general_purpose::STANDARD + .decode(&val.pub_key) + .context("PowerEntry.pub_key: invalid base64")?, + ), // base64 -> bytes -> PubKey }) }If
PubKey::new(bytes)exists and validates length/format, prefer it consistently here and inconvert_power_table_delta.
68-76: Nit: small consistency and micro-alloc improvements for TipsetKey.
- Keep the CID parsing style consistent (use
&streverywhere or ownedStringeverywhere).- Optionally pre-allocate
keywhen the number of refs is known.fn convert_tipset_key(cid_refs: &[rpc::CidRef]) -> anyhow::Result<chain::TipsetKey> { - let mut key = Vec::new(); + let mut key = Vec::with_capacity(cid_refs.len() * 40); // heuristic; CIDs are variable-length for cid_ref in cid_refs { - let cid = Cid::try_from(cid_ref.cid.as_str())?; + let cid = Cid::try_from(cid_ref.cid.as_str())?; key.extend(cid.to_bytes()); } Ok(key) }
78-89: UnifyPubKeycreation and add decode context forsigning_key.Use the same construction style for
PubKeyas inconvert_power_entryand add error context for easier diagnostics.fn convert_power_table_delta(val: rpc::PowerTableDelta) -> anyhow::Result<certs::PowerTableDelta> { Ok(certs::PowerTableDelta { participant_id: val.participant_id, power_delta: filecoin_f3_gpbft::StoragePower::from(val.power_delta), signing_key: match val.signing_key { Some(base64_str) => { - PubKey::new(general_purpose::STANDARD.decode(&base64_str)?) // base64 -> bytes -> PubKey + PubKey( + general_purpose::STANDARD + .decode(&base64_str) + .context("PowerTableDelta.signing_key: invalid base64")?, + ) // base64 -> bytes -> PubKey } None => PubKey::default(), }, }) }
4-11: Minor: centralize base64 engine, optional constants.To reduce repetition and improve readability, consider aliasing the base64 engine and defining constants for expected lengths (if applicable).
use anyhow::Context; -use base64::{Engine as _, engine::general_purpose}; +use base64::{engine::general_purpose, Engine as _}; // ... use keccak_hash::H256; + +// Optional helpers: +// const BLS_SIG_LEN: usize = 96; // if using compressed G2 signatures +// const H256_LEN: usize = 32; +// use general_purpose::STANDARD as B64;
59-66: Type consistency:certs::Tipsetvsgpbft::Tipset.
convert_tipsetcurrently returnscerts::Tipset, butECChain::newexpects GPBFT tipsets. Ifcerts::Tipsetis not a type alias to the same struct, this will not compile or might cause subtle mismatches. Prefer returning the concrete GPBFT type explicitly.If needed, adjust the return type and imports:
-fn convert_tipset(val: rpc::ECTipSet) -> anyhow::Result<certs::Tipset> { +fn convert_tipset(val: rpc::ECTipSet) -> anyhow::Result<filecoin_f3_gpbft::Tipset> { // ... - Ok(certs::Tipset { /* ... */ }) + Ok(filecoin_f3_gpbft::Tipset { /* ... */ }) }Please confirm whether
certs::Tipsetis a re-export/alias ofgpbft::Tipset. If not, the above change is required. If yes, consider importing the aliased type directly to avoid cross-crate confusion.Also applies to: 78-89, 21-40
merkle/src/lib.rs (4)
21-24: Minor: prefer fixed-size arrays for markersUsing fixed-size arrays improves type clarity and avoids implicitly constructing slices each call site.
Apply this small change:
-const INTERNAL_MARKER: &[u8] = &[0x00]; -const LEAF_MARKER: &[u8] = &[0x01]; +const INTERNAL_MARKER: [u8; 1] = [0x00]; +const LEAF_MARKER: [u8; 1] = [0x01];Usage remains the same (
hasher.update(&INTERNAL_MARKER)/&LEAF_MARKER).
25-33: Generalize input to avoid forcing Vec ownership
pub fn tree(values: &[Vec<u8>])forces callers to own each leaf as aVec<u8>. Accepting anyAsRef<[u8]>slice improves ergonomics and avoids needless allocations/copies for callers that already have&[u8].Here’s a minimal, fully-backwards-compatible refactor (keeping
anyhow::Resultas-is) that generalizes the API:-use anyhow::anyhow; +use anyhow::anyhow; /// Computes a merkle tree root from a list of byte values -pub fn tree(values: &[Vec<u8>]) -> anyhow::Result<MerkleDigest> { +pub fn tree<T: AsRef<[u8]>>(values: &[T]) -> anyhow::Result<MerkleDigest> { if values.is_empty() { return Ok(ZERO_DIGEST); } let depth = calculate_depth(values.len()); - build_tree(depth, values, &mut Keccak256::new()) + build_tree(depth, values, &mut Keccak256::new()) } /// Recursive function to build the merkle tree -fn build_tree( - depth: usize, - values: &[Vec<u8>], - hasher: &mut Keccak256, -) -> anyhow::Result<MerkleDigest> { +fn build_tree<T: AsRef<[u8]>>( + depth: usize, + values: &[T], + hasher: &mut Keccak256, +) -> anyhow::Result<MerkleDigest> { if values.is_empty() { return Ok(ZERO_DIGEST); } if depth == 0 { if values.len() != 1 { return Err(anyhow!("expected one value at the leaf")); } // Leaf node: hash(0x01 || value) hasher.update(LEAF_MARKER); - hasher.update(&values[0]); + hasher.update(values[0].as_ref()); let result = hasher.finalize_reset(); let mut digest = [0u8; 32]; digest.copy_from_slice(&result); Ok(digest) } else { // Split point: min(1<<(depth-1), len(values)) let split = std::cmp::min(1 << (depth - 1), values.len()); let left_hash = build_tree(depth - 1, &values[..split], hasher)?; let right_hash = build_tree(depth - 1, &values[split..], hasher)?; // Internal node: hash(0x00 || left || right) hasher.update(INTERNAL_MARKER); - hasher.update(left_hash); - hasher.update(right_hash); + hasher.update(left_hash); + hasher.update(right_hash); let result = hasher.finalize_reset(); let mut digest = [0u8; 32]; digest.copy_from_slice(&result); Ok(digest) } }If you prefer not to change the existing signature, consider adding a convenience overload
tree_from_slices(values: &[&[u8]])that internally delegates.Also applies to: 46-53, 55-66, 67-81
73-77: Nit: pass node digests by referencePassing
[u8; 32]by value copies 32 bytes; passing by reference is slightly clearer and avoids relying onAsRefimpl for arrays. Not critical, but easy to tidy up.- hasher.update(INTERNAL_MARKER); - hasher.update(left_hash); - hasher.update(right_hash); + hasher.update(INTERNAL_MARKER); + hasher.update(&left_hash); + hasher.update(&right_hash);
12-14: Library API returns anyhow::Result — consider a crate-specific error typeFor library crates, a typed error (e.g.,
MerkleErrorwiththiserror) provides better ergonomics and semver stability thananyhow::Result. Not a blocker given the small error surface.Example (outline only):
#[derive(thiserror::Error, Debug)] pub enum MerkleError { #[error("expected one value at the leaf, found {found}")] UnexpectedLeafCount { found: usize }, } pub type Result<T> = core::result::Result<T, MerkleError>;Then return
Result<MerkleDigest>and map the single error site toMerkleError::UnexpectedLeafCount { found: values.len() }.Happy to provide a concrete patch if you prefer this direction.
Also applies to: 25-33, 46-53, 55-58
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (14)
- blssig/Cargo.toml (1 hunks)
- blssig/src/bdn/mod.rs (1 hunks)
- blssig/src/verifier/mod.rs (1 hunks)
- blssig/src/verifier/tests.rs (1 hunks)
- gpbft/Cargo.toml (1 hunks)
- gpbft/src/justification.rs (1 hunks)
- gpbft/src/payload.rs (1 hunks)
- lightclient/examples/certificates_validation.rs (1 hunks)
- lightclient/src/lib.rs (1 hunks)
- lightclient/src/rpc_to_internal.rs (1 hunks)
- merkle/src/lib.rs (1 hunks)
- merkle/src/tests.rs (1 hunks)
- rpc/src/lib.rs (1 hunks)
- rpc/src/types.rs (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (10)
- merkle/src/tests.rs
- blssig/Cargo.toml
- rpc/src/types.rs
- blssig/src/verifier/tests.rs
- lightclient/examples/certificates_validation.rs
- blssig/src/bdn/mod.rs
- blssig/src/verifier/mod.rs
- rpc/src/lib.rs
- lightclient/src/lib.rs
- gpbft/src/payload.rs
🧰 Additional context used
🧬 Code graph analysis (1)
lightclient/src/rpc_to_internal.rs (2)
- gpbft/src/chain.rs (3): suffix (200-202), new (147-153), key (271-282)
- certs/src/lib.rs (1): new (97-124)
🔇 Additional comments (5)
gpbft/src/justification.rs (2)
1-2: License header and SPDX look good. Header and identifiers are present and consistent.

4-6: Imports are minimal and correct. No unused imports; local types are clearly referenced.
gpbft/Cargo.toml (2)
13-25: Workspace dep consolidation + new merkle crate wiring looks good

Switching common crates to `workspace = true` and adding `filecoin-f3-merkle` via a local path is clean and keeps versions centralized. Nothing blocking here.
27-35: Resolved: removed self dev-dependency and added test utilities via feature

The problematic self dev-dependency is gone; the `test-utils = []` feature is sufficient for gating. Moving the dev-deps (`base64`, `serde_json`) to workspace management is consistent.

merkle/src/lib.rs (1)
35-43: Portable depth calculation — good fix

Replacing the hard-coded `64` with `usize::BITS` makes the tree depth computation portable across 32/64-bit targets. This aligns with previous feedback and looks correct.
```rust
fn convert_tipset(val: rpc::ECTipSet) -> anyhow::Result<certs::Tipset> {
    Ok(certs::Tipset {
        epoch: val.epoch as i64,
        key: convert_tipset_key(&val.key)?,
        power_table: Cid::try_from(val.power_table.cid)?,
        commitments: H256::from_slice(&general_purpose::STANDARD.decode(&val.commitments)?), // base64 -> bytes -> H256
    })
}
```
🛠️ Refactor suggestion
Prevent panic on malformed commitments; validate length and avoid as cast for epoch.
H256::from_slice panics if the slice length != 32. Right now a malformed base64 string would cause a hard panic. Also, as i64 can silently wrap if val.epoch is unsigned and too large. Prefer fallible conversions with context.
fn convert_tipset(val: rpc::ECTipSet) -> anyhow::Result<certs::Tipset> {
- Ok(certs::Tipset {
- epoch: val.epoch as i64,
- key: convert_tipset_key(&val.key)?,
- power_table: Cid::try_from(val.power_table.cid)?,
- commitments: H256::from_slice(&general_purpose::STANDARD.decode(&val.commitments)?), // base64 -> bytes -> H256
- })
+ // Epoch: avoid silent wrap
+ let epoch: i64 = val
+ .epoch
+ .try_into()
+ .context("ECTipSet.epoch does not fit into i64")?;
+
+ // Commitments: base64 decode + length check to avoid panic
+ let commitments_bytes = general_purpose::STANDARD
+ .decode(&val.commitments)
+ .context("ECTipSet.commitments: invalid base64")?;
+ if commitments_bytes.len() != 32 {
+ anyhow::bail!(
+ "ECTipSet.commitments must be 32 bytes, got {}",
+ commitments_bytes.len()
+ );
+ }
+ let commitments = H256::from_slice(&commitments_bytes);
+
+ Ok(certs::Tipset {
+ epoch,
+ key: convert_tipset_key(&val.key)?,
+ // Keep CID parsing style consistent with other call sites that use &str
+ power_table: Cid::try_from(val.power_table.cid.as_str())?,
+ commitments,
+ })
```rust
/// Computes a merkle tree root from a list of byte values
pub fn tree(values: &[Vec<u8>]) -> anyhow::Result<MerkleDigest> {
    if values.is_empty() {
        return Ok(ZERO_DIGEST);
    }

    let depth = calculate_depth(values.len());
    build_tree(depth, values, &mut Keccak256::new())
}
```
💡 Verification agent
🧩 Analysis chain
Confirm padding semantics match go-f3 (zero right-subtree vs. leaf duplication)
The split logic pads missing branches with ZERO_DIGEST when values[split..] is empty. That’s a common approach (power-of-two padding), but some Merkle variants duplicate the last leaf instead. Please confirm this matches go-f3’s merkle exactly for odd/non-power-of-two sizes and keep a golden test to prevent regressions.
Add/verify a test like:
- inputs: 3 leaves with distinct values
- expected: root matches go-f3 reference output
If you want me to wire it, I can open a follow-up PR with cross-language golden vectors.
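A sketch of the shape such a test could take until a cross-language vector is wired in; the go-f3 golden root is deliberately left as a TODO rather than invented:

```rust
#[test]
fn three_leaf_root_is_order_sensitive_and_stable() {
    let leaves = vec![b"leaf-0".to_vec(), b"leaf-1".to_vec(), b"leaf-2".to_vec()];
    let root = tree(&leaves).expect("tree construction should succeed");

    // Deterministic: same input, same root.
    assert_eq!(root, tree(&leaves).unwrap());

    // Order-sensitive: swapping leaves must change the root.
    let swapped = vec![b"leaf-1".to_vec(), b"leaf-0".to_vec(), b"leaf-2".to_vec()];
    assert_ne!(root, tree(&swapped).unwrap());

    // TODO: pin `root` against the go-f3 reference output (cross-language golden vector),
    // e.g. assert_eq!(hex::encode(root), "<root produced by go-f3's merkle package>");
}
```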
Optionally, you can double-check the go-f3 behavior:
Also applies to: 67-72
🌐 Web query:
go-f3 merkle implementation: for non-power-of-two leaf counts, does the tree pad missing branches with zero digest or duplicate the last leaf? Provide code reference.
💡 Result:
The go-f3 Merkle implementation, for non-power-of-two leaf counts, duplicates the last leaf to fill missing branches, not zero padding. This is the standard construction for many Merkle trees to ensure every internal node has two children, maintaining binary tree invariants.
Supporting Details
- The official go-f3 repository at GitHub references a `/merkle` package for the Merkle tree implementation[9]. Although the search results do not display the full code, this is consistent with how binary Merkle trees are usually constructed in other well-known Go implementations.
- A typical Go Merkle tree pattern (as found in tutorials and other libraries) is:

```go
for i := 0; i < len(nodes); i += 2 {
    left := nodes[i]
    right := left
    if i+1 < len(nodes) {
        right = nodes[i+1]
    }
    parent := &MerkleNode{
        Hash:  calculateHash(left.Hash + right.Hash),
        Left:  left,
        Right: right,
    }
    newLevel = append(newLevel, parent)
}
```
In this approach, when the number of nodes is odd, the loop assigns the last node’s right child as itself, which duplicates the last leaf[1]. This is not zero padding.
- Multiple Merkle tree libraries and blockchain protocols (such as Bitcoin) use leaf duplication rather than inserting a hash of zeros. For example, in Bitcoin, “If the last leaf in the tree is the left branch, then that last leaf is duplicated”[2].
Code Reference & Analysis
While the search did not contain the full code for go-f3’s /merkle package, the structure of adjacent Filecoin Go repositories and the provided snippet from [1] strongly indicate duplication rather than zero padding is used. This standard practice ensures deterministic trees compatible with verification logic in distributed or adversarial environments.
- Key implementation pattern (this can be seen in [1] and is referenced by many Go-based Merkle libraries):

```go
right := left // duplicate the last leaf if odd count
if i+1 < len(nodes) {
    right = nodes[i+1]
}
```
If you require absolute confirmation for go-f3 specifically, reviewing the /merkle package source in the official [filecoin-project/go-f3 GitHub repository][9] is recommended, looking for this logic in the tree construction function.
Summary:
go-f3 (and typical Go Merkle tree implementations) handle non-power-of-two leaf counts by duplicating the last leaf, not by padding with zeros. This is consistent with both code patterns in [1] and established cryptographic protocol documentation[2].
Citations:
- 1: https://www.tutorialspoint.com/golang-program-to-implement-a-merkle-tree
- 2: https://www.codeproject.com/Articles/1176140/Understanding-Merkle-Trees-Why-Use-Them-Who-Uses-T
- 3: https://pkg.go.dev/github.com/keybase/go-merkle-tree
- 4: https://crypto.stanford.edu/~dabo/cryptobook/BonehShoup_0_4.pdf
- 5: https://forum.golangbridge.org/t/high-performance-go-merkle-tree-implementation/28490
- 6: https://core.ac.uk/download/482106615.pdf
- 7: https://github.com/cbergoon/merkletree
- 8: https://news.ycombinator.com/item?id=32186203
- 9: https://github.com/filecoin-project/go-f3
- 10: https://www2.eecs.berkeley.edu/Pubs/TechRpts/2025/EECS-2025-20.pdf
🏁 Script executed:
#!/bin/bash
# Locate merkle/src/lib.rs in the repository
find . -type f | grep -E 'merkle/src/lib.rs$' || echo "merkle/src/lib.rs not found"
# Locate any file defining `pub fn tree` under merkle directory
grep -R "pub fn tree" -n merkle/

Length of output: 189
🏁 Script executed:
#!/bin/bash
set -e
# Locate build_tree implementation
grep -R "fn build_tree" -n merkle/src/lib.rs -n
# Locate calculate_depth implementation
grep -R "fn calculate_depth" -n merkle/src/lib.rs -n

Length of output: 179
🏁 Script executed:
#!/bin/bash
# Display context around the build_tree function in merkle/src/lib.rs
grep -R -n -C5 "fn build_tree" merkle/src/lib.rs

Length of output: 369
🏁 Script executed:
#!/bin/bash
# Extract the full build_tree function from merkle/src/lib.rs
sed -n '46,/^}/p' merkle/src/lib.rs

Length of output: 1206
Mismatch with Go-F3 Merkle semantics: implement leaf-duplication instead of zero-padding
The current build_tree unconditionally returns ZERO_DIGEST whenever a subtree’s values slice is empty, resulting in zero-padding for non-power-of-two leaf counts. In contrast, Go-F3 (and most standard Merkle constructions) duplicate the last leaf when encountering an odd number of nodes. This discrepancy will lead to divergent roots and breaks cross‐implementation compatibility.
Please update merkle/src/lib.rs so that when values.is_empty(), you instead use the last leaf hash from the sibling subtree rather than ZERO_DIGEST. Specifically:
- In the internal-node branch (around line 59), compute `right_hash` by duplicating `left_hash` if `values[split..]` is empty.
- Remove the top-level `if values.is_empty()` early return in `build_tree`, and handle empty slices only at the split point.
Additionally, add a golden‐vector test to lock in the correct behavior:
- Input: three distinct leaves
- Expected: Merkle root matches Go-F3’s output (e.g. from its `/merkle` package or an external reference implementation)
Diff sketch:
fn build_tree(…,) -> anyhow::Result<MerkleDigest> {
- if values.is_empty() {
- return Ok(ZERO_DIGEST);
- }
if depth == 0 {
… // unchanged leaf logic
} else {
let split = min(1<<(depth-1), values.len());
let left_hash = build_tree(depth-1, &values[..split], hasher)?;
- let right_hash = build_tree(depth-1, &values[split..], hasher)?;
+ let right_hash = if split == values.len() {
+ // duplicate last leaf
+ left_hash
+ } else {
+ build_tree(depth-1, &values[split..], hasher)?
+ };
… // internal hash
}
}This change preserves the standard Merkle convention and aligns with Go-F3’s duplication strategy.
Actionable comments posted: 1
🧹 Nitpick comments (4)
Cargo.toml (4)
31-32: Deterministic CBOR: ensure canonical encoding settings are consistently applied

You’ve pinned `serde_cbor = "0.11.2"`. For F3 certificate/powertable parity with go-f3, CBOR must be deterministic/canonical. `serde_cbor` is non-deterministic by default; make sure serialization paths use a `Serializer` configured for canonical ordering and integer encoding. Consider centralizing this in a small helper crate or module to avoid drift.

If you want to make the intent explicit via features:

```diff
-serde_cbor = "0.11.2"
+serde_cbor = { version = "0.11.2", default-features = true }
+# Consider a crate-local feature to gate canonical encoding helpers if you plan variants.
```

No change required here if the implementing code already enforces canonical CBOR; this is a reminder to prevent subtle consensus mismatches.
33-34: Add documentation clarifying separate roles of `sha3` and `keccak-hash`

Our grep confirms that both crates are in active use:

- `sha3 = "0.10.8"` is used in merkle/src/lib.rs to import `sha3::{Digest, Keccak256}` and build the Merkle tree root with the `sha3` crate's Keccak256.
- `keccak-hash = "0.11"` supplies the `keccak_hash::H256` type across lightclient, gpbft, and certs modules for Ethereum-style Keccak-256 digests.

Since both implementations are legitimately required, please add brief inline documentation to clarify their distinct purposes:

- In merkle/src/lib.rs, above the Merkle tree builder, note that it uses `sha3::Keccak256` to produce the 32-byte root matching go-f3's MerkleDigest.
- In lightclient/src/rpc_to_internal.rs (and similarly in gpbft/src/** and certs/src/lib.rs), annotate that `keccak_hash::H256` represents the Ethereum-style Keccak-256 digest type.

These comments will steer future readers away from conflating the two crates' hashing semantics.
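A rough sketch of the kind of inline notes meant here, assuming the module layout above; the comment wording is illustrative, not the crates' actual docs:

```rust
// merkle/src/lib.rs (sketch)
use sha3::{Digest, Keccak256};

/// Sketch of the suggested note: Keccak256 here comes from the `sha3` crate and
/// produces the 32-byte root that go-f3's MerkleDigest expects.
fn leaf_digest(value: &[u8]) -> [u8; 32] {
    let mut hasher = Keccak256::new();
    hasher.update([0x01]); // leaf marker, as in the tree layout described above
    hasher.update(value);
    hasher.finalize().into()
}

// lightclient/src/rpc_to_internal.rs (sketch)
use keccak_hash::H256;

/// Sketch of the suggested note: `H256` is only a 32-byte digest container here
/// (Ethereum-style Keccak-256 naming); it performs no hashing itself.
fn commitments_from_bytes(bytes: [u8; 32]) -> H256 {
    H256(bytes)
}
```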
15-36: Dependency Audit Results & Recommendations

The freshness audit of Cargo.toml (lines 15–36) shows several crates are behind their latest stable minor/patch releases, and one unmaintained crate flagged by OSV. Please consider the following updates before merging:
• Update minor/patch versions
- base64: 0.22 → 0.22.1
- cid: 0.10.1 → 0.11.1
- fvm_ipld_bitfield: 0.7.1 → 0.7.2
- hex: 0.4 → 0.4.3
- jsonrpsee: 0.24 → 0.26.0
- num: 0.4 → 0.4.3
- parking_lot: 0.12 → 0.12.4
- rand: 0.8 → 0.9.2
- strum_macros: 0.26 → 0.27.2
• Major-version wildcards (ensure lockfile pins concrete patch)
- serde = “1” (latest 1.0.219)
- serde_json = “1” (latest 1.0.143)
- tokio = “1” (latest 1.47.1)
• Security concern
- serde_cbor 0.11.2 is unmaintained (RUSTSEC-2021-0127); consider upgrading, replacing, or auditing its safety before use.
Action Items
- Update the above outdated crate versions in Cargo.toml.
- Run `cargo update -p <crate>` for each, then verify compatibility/tests.
- Pin wildcard-major deps in Cargo.lock to concrete patch versions.
- Address the serde_cbor unmaintenance advisory (review alternatives like `ciborium` or an upstream patch).

Let me know if you need help with any specific bump or compatibility issue.
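If `ciborium` is evaluated as a replacement, a minimal round-trip sketch (APIs as of ciborium 0.2); note it does not by itself enforce canonical map ordering either, so the determinism caveat above still applies:

```rust
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize, Debug, PartialEq)]
struct Sample {
    instance: u64,
    round: u64,
}

fn main() {
    let value = Sample { instance: 42, round: 0 };

    // Encode to CBOR bytes.
    let mut buf = Vec::new();
    ciborium::ser::into_writer(&value, &mut buf).expect("CBOR encoding failed");

    // Decode back and compare.
    let decoded: Sample = ciborium::de::from_reader(buf.as_slice()).expect("CBOR decoding failed");
    assert_eq!(decoded, value);
}
```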
15-19: Confirm both BLS crates are in use — consider gating low-level `bls12_381` behind a feature

Your grep output shows that `bls12_381::{G1Projective, G2Projective}` is explicitly used in

- blssig/src/bdn/mod.rs (aggregation logic on lines 10, 39–45, 59–65)

Meanwhile, `bls-signatures` is used for high-level signature deserialization and conversion (also in the BDN module). If every consumer of this workspace needs both the production-grade verifier and the BDN aggregation math, leaving both dependencies unconditional is fine — just document that split. If, however, you want a “light-client” build without the BDN scaffolding (and its compile-time + binary-size impact), it’s better to mark `bls12_381` optional and expose it under a feature.

- In your root Cargo.toml, change:

```diff
-bls12_381 = "0.8"
+bls12_381 = { version = "0.8", optional = true }
+[features]
+# Enables the BDN aggregation APIs in blssig/src/bdn
+bdn = ["bls12_381"]
```

- In blssig/Cargo.toml, mirror that feature:

```diff
-[dependencies]
-bls12_381 = "0.8"
+[dependencies]
+bls12_381 = { version = "0.8", optional = true }
+[features]
+# pull in the low-level BLS math only when requested
+bdn = ["bls12_381"]
```

- Document in your README or crate docs when consumers should opt in to `bdn`.

This optional-feature approach keeps your light-client footprint small while still supporting full BDN math when enabled.
📒 Files selected for processing (3)
- Cargo.toml (2 hunks)
- lightclient/Cargo.toml (1 hunks)
- rpc/Cargo.toml (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- rpc/Cargo.toml
- lightclient/Cargo.toml
🔇 Additional comments (3)
Cargo.toml (3)
12-37: Centralizing deps in [workspace.dependencies] is a solid move

Good call moving shared deps to the workspace. This will reduce version skew across `lightclient`, `merkle`, `rpc`, and existing crates and makes future upgrades easier.
25-27: Drop the unused `num` dependency

A global search of all Rust source files for `num::` and `extern crate num` returned no hits, indicating the umbrella `num` crate isn’t directly used. Removing it will prevent pulling in extra transitive dependencies. Please run `cargo check` or otherwise verify (e.g., inspect build scripts, examples, docs) that nothing is relying on `num` before merging.

- File: Cargo.toml (lines 25–27)

```diff
-num = "0.4"
 num-bigint = { version = "0.4.6", features = ["serde"] }
 num-traits = "0.2.19"
```
23-24: Please verify end-to-end HTTPS connectivity with current features

We’ve confirmed that pulling in `jsonrpsee = { version = "0.24", features = ["http-client", "ws-client"] }` already brings in TLS support by default:

- The `http-client` feature enables the `tls` feature of `jsonrpsee-http-client` (which in turn uses hyper-rustls, rustls, and rustls-platform-verifier) (lib.rs).
- The `ws-client` feature selects `tls-rustls-platform-verifier` by default in `jsonrpsee-ws-client` (lib.rs).

No additional Cargo features (e.g. an explicit `tls-rustls`) are required unless you’ve disabled default-features on these crates.

- Please run the examples against
  - https://filecoin-calibration.ipc.space/rpc/v1
  - https://filecoin.ipc.space/rpc/v1
  and confirm that the HTTP client connects without TLS errors.
- If you do encounter TLS failures, you can add the appropriate feature, for example:

```diff
-jsonrpsee = { version = "0.24", features = ["ws-client", "http-client"] }
+jsonrpsee = { version = "0.24", features = ["ws-client", "http-client", "tls-rustls"] }
```

Otherwise, no changes are needed here.
```diff
 [workspace]
 resolver = "2"
-members = ["blssig", "certs", "gpbft"]
+members = ["blssig", "certs", "gpbft", "lightclient", "merkle", "rpc"]
```
💡 Verification agent
🧩 Analysis chain
Workspace members added: ensure per-crate inheritance of workspace package fields
Nice addition of lightclient, merkle, and rpc to the workspace. Please verify each new crate’s Cargo.toml inherits the workspace’s package metadata (edition, rust-version, license, repository) to keep CI/tooling consistent. If not already present, add:
[package]
name = "lightclient"
version = "0.1.0"
-# ...
+edition.workspace = true
+rust-version.workspace = true
+license.workspace = true
+repository.workspace = trueApply the same in merkle/Cargo.toml and rpc/Cargo.toml. This avoids accidental drift (e.g., building with a different edition/MSRV locally vs CI).
Run this to confirm inheritance is set across all new crates:
🏁 Script executed:
#!/bin/bash
# Lists any package.tomls missing explicit workspace inheritance for key fields
fd Cargo.toml | grep -Ev '^\./Cargo\.toml$' | xargs -I{} sh -c '
echo ">>> {}"
rg -n "edition\.workspace\s*=\s*true|rust-version\.workspace\s*=\s*true|license\.workspace\s*=\s*true|repository\.workspace\s*=\s*true" -n {} || echo " Missing one or more workspace field inheritances"
'

Length of output: 1089
Add missing rust-version.workspace inheritance in new crates
The verification script shows that while lightclient/Cargo.toml, merkle/Cargo.toml, and rpc/Cargo.toml each include:
[package]
repository.workspace = true
license.workspace = true
edition.workspace = truethey are missing the rust-version.workspace = true line. Please add this under the [package] section for each crate to ensure the workspace MSRV is consistently applied.
Affected files:
- lightclient/Cargo.toml
- merkle/Cargo.toml
- rpc/Cargo.toml
Example diff for lightclient/Cargo.toml:
[package]
name = "lightclient"
version = "0.1.0"
repository.workspace = true
license.workspace = true
edition.workspace = true
+rust-version.workspace = trueRepeat the same insertion in merkle/Cargo.toml and rpc/Cargo.toml.
🤖 Prompt for AI Agents
In Cargo.toml (workspace root) around line 3 and in the three affected crate
manifests lightclient/Cargo.toml, merkle/Cargo.toml, and rpc/Cargo.toml under
their [package] sections, add rust-version.workspace = true alongside the
existing repository.workspace = true, license.workspace = true, and
edition.workspace = true entries so each crate inherits the workspace MSRV;
update each file by inserting that single line under [package] in the same style
and indentation as the other .workspace entries.
```rust
// This logging allows development to continue.
// TODO: Remove this workaround once BDN is implemented
if let Err(err) = res {
    println!("WARN: {}", err);
```
Let's not print unconditionally to STDOUT. This breaks some monitoring tooling and is not really sustainable. I'd use https://crates.io/crates/tracing
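A minimal sketch of what the `tracing`-based variant could look like, assuming a `tracing` dependency and a subscriber installed by the consuming application; it mirrors the quoted snippet:

```rust
// Sketch only: replace the unconditional println! with a structured warning.
// `res` and the BDN wording come from the snippet quoted above.
if let Err(err) = res {
    tracing::warn!(error = %err, "BDN verification workaround hit; BDN aggregation not yet implemented");
}
```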
This was intended to work in conjunction with lightclient/examples, which intentionally doesn't do structured logging, as it is not an actual application.
Additionally, the verifying packages do not emit any logs—and I don’t think they should. Hence this is just a temporary unstructured note to flag the missing functionality.
```rust
}

/// Check whether a portion of storage power is a strong quorum of the total
pub fn is_strong_quorum(power: i64, power_total: i64) -> bool {
```
Can the power and power_total be negative? If not, could we use u64 and its stabilized div_ceil?
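A sketch of the unsigned variant being suggested, assuming the usual "at least two-thirds of total power" rule and non-negative, non-overflowing inputs; the exact go-f3 semantics should be checked before adopting it:

```rust
/// Hypothetical unsigned variant: `power` is a strong quorum if it reaches
/// at least ceil(2 * power_total / 3). `u64::div_ceil` is stable since Rust 1.73.
/// 2 * power_total is assumed not to overflow (scaled power values are small).
pub fn is_strong_quorum(power: u64, power_total: u64) -> bool {
    power >= (2 * power_total).div_ceil(3)
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn two_thirds_threshold() {
        assert!(is_strong_quorum(7, 10)); // ceil(20 / 3) = 7
        assert!(!is_strong_quorum(6, 10));
    }
}
```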
Since it's the scaled value, it's actually bounded in u16.
But go-f3 uses i64, so I followed that. https://github.com/filecoin-project/go-f3/blob/f9c95ae52d9ab64b4f673d31da4a7bcac610216e/gpbft/gpbft.go#L1489
```rust
/// - Point cache for performance optimization
pub struct BLSVerifier {
    /// Cache for deserialized public key points to avoid expensive repeated operations
    point_cache: RwLock<LruCache<Vec<u8>, PublicKey>>,
```
I'd change the signatures of the Verifier trait methods to use strongly-typed public key and signature to avoid conversions and such LRU perf optimization
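A sketch of the strongly-typed shape being suggested; the trait and method names here are hypothetical, not the existing API:

```rust
use bls_signatures::{PublicKey, Signature};

/// Hypothetical strongly-typed alternative to the byte-slice based `Verifier` trait.
/// Deserialization (and any caching of decoded points) would move to whoever
/// constructs `PublicKey` / `Signature` values, typically at the RPC boundary.
pub trait TypedVerifier {
    type Error;

    fn verify(&self, pub_key: &PublicKey, msg: &[u8], sig: &Signature) -> Result<(), Self::Error>;

    fn aggregate(&self, pub_keys: &[PublicKey], sigs: &[Signature]) -> Result<Signature, Self::Error>;

    fn verify_aggregate(
        &self,
        payload: &[u8],
        agg_sig: &Signature,
        signers: &[PublicKey],
    ) -> Result<(), Self::Error>;
}
```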
I followed the existing trait Verifier, which mirrors the `type Verifier interface` from go-f3, and that interface uses untyped params for all functions.
However, I agree that the bdn module can/should receive the typed data, so I changed that here: 3608b91
I'm not convinced though that Verifier should do the same, since it will merely push the validity checks, the deserialization and the caching to a higher layer, given that the data source is serialized.
```rust
/// Serializes the payload for signing.
/// Format: `GPBFT:network_name:phase+round+instance+commitments+chain_key+power_table_cid`
pub fn serialize_for_signing(&self, network_name: &str) -> Vec<u8> {
```
I'd like to have some go-f3 interop tests for such functions, but maybe we could tackle that in a subsequent PR.
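A sketch of what such an interop test might look like once go-f3 vectors exist; `sample_payload` and the golden bytes are placeholders, not real fixtures:

```rust
#[test]
fn payload_signing_bytes_match_go_f3() {
    // Hypothetical fixture; would mirror a payload used by a go-f3 test vector.
    let payload = sample_payload(); // placeholder helper, defined alongside the fixture
    let bytes = payload.serialize_for_signing("calibrationnet");

    // The "GPBFT:<network>:" prefix comes from the documented format above.
    assert!(bytes.starts_with(b"GPBFT:"));

    // Golden assertion to enable once a vector is exported from go-f3:
    // assert_eq!(hex::encode(&bytes), "<hex dumped from go-f3>");
}
```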
Yes, for now I tried to avoid relying on adding new tests to go-f3. Can do that once this PR gets merged.
Actionable comments posted: 1
♻️ Duplicate comments (2)
blssig/src/verifier/mod.rs (2)
16-32: Differentiate empty public-keys vs empty signatures.

Returning `EmptyPublicKeys` when `sigs` is empty is misleading. Introduce `EmptySignatures` and use it accordingly.

Apply this diff:

```diff
@@ pub enum BLSError {
     #[error("empty public keys provided")]
     EmptyPublicKeys,
+    #[error("empty signatures provided")]
+    EmptySignatures,
@@ fn aggregate(&self, pub_keys: &[PubKey], sigs: &[Vec<u8>]) -> Result<Vec<u8>, Self::Error> {
-    if pub_keys.is_empty() || sigs.is_empty() {
-        return Err(BLSError::EmptyPublicKeys);
-    }
+    if pub_keys.is_empty() {
+        return Err(BLSError::EmptyPublicKeys);
+    }
+    if sigs.is_empty() {
+        return Err(BLSError::EmptySignatures);
+    }
```
58-60: Document the cache memory budget.Add a brief note so operators know the upper bound. Prior comment estimated ~1.8–2 MiB.
Apply this diff:
-/// Maximum number of cached public key points to prevent excessive memory usage +/// Maximum number of cached public key points to prevent excessive memory usage. +/// Note: with ~48-byte keys and ~192-byte `PublicKey` items (impl-dependent), +/// 10_000 entries is roughly O(2 MiB) plus map overhead. const MAX_POINT_CACHE_SIZE: usize = 10_000;
🧹 Nitpick comments (3)
blssig/src/verifier/mod.rs (3)
34-41: Doc claims full BDN support; codebase notes say BDN weighting isn’t implemented yet.

Align the comment with current behavior to avoid misleading readers.

Apply this diff:

```diff
-/// This verifier implements the same scheme used by `go-f3/blssig`, with:
+/// This verifier targets the same scheme used by `go-f3/blssig`, with:
 /// - BLS12_381 curve
 /// - G1 for public keys, G2 for signatures
-/// - BDN aggregation for rogue-key attack prevention
+/// - BDN aggregation for rogue-key attack prevention (TODO: coefficient weighting not yet implemented;
+///   current aggregation uses standard BLS and will not verify go-f3 aggregates)
 /// - Point cache for performance optimization
```
148-168: BDN weighting gap likely breaks aggregate verification against go-f3.

`verify_aggregate` depends on `BDNAggregation::aggregate_pub_keys()`, but the BDN weighting is still a TODO in `bdn`. Expect incompatibility with go-f3 aggregates until implemented. Consider gating with a feature flag or a clearer error/warn surface here to reduce confusion for downstream users.

I can help wire a feature flag and an explicit `BLSError::UnsupportedAggregationMode` until BDN weighting lands; want a patch?
94-101: Optional: reduce write-lock contention on cache hits.

Using a write lock for lookups is fine with `LruCache::get`, but if you don’t need recency refresh on read, switch to `peek()` under a read lock for higher concurrency.

Example:

```rust
if let Some(pk) = self.point_cache.read().peek(pub_key_bytes).cloned() {
    return Ok(pk);
}
```
📒 Files selected for processing (1)
blssig/src/verifier/mod.rs(1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
blssig/src/verifier/mod.rs (4)
- blssig/src/bdn/mod.rs (1): new (19-27)
- lightclient/src/lib.rs (1): new (29-35)
- certs/src/lib.rs (4): new (97-124), verify (703-710), aggregate (712-718), verify_aggregate (720-727)
- gpbft/src/api.rs (3): verify (22-22), aggregate (32-32), verify_aggregate (45-50)
```rust
/// Gets a cached public key or deserialize and caches it
fn get_or_cache_public_key(&self, pub_key_bytes: &[u8]) -> Result<PublicKey, BLSError> {
    // Check cache first
    let mut cache = self.point_cache.write();
    if let Some(cached_key) = cache.get(pub_key_bytes) {
        return Ok(*cached_key);
    }
    drop(cache);

    // Deserialize the public key
    let pub_key =
        PublicKey::from_bytes(pub_key_bytes).map_err(|_| BLSError::PublicKeyDeserialization)?;

    // Cache it
    let mut cache = self.point_cache.write();
    cache.insert(pub_key_bytes.to_vec(), pub_key);
    drop(cache);

    Ok(pub_key)
}
```
🛠️ Refactor suggestion
Fix Copy/move bugs in cache path (PublicKey is not guaranteed Copy).
Returning *cached_key and pub_key after inserting assumes PublicKey: Copy. This will either fail to compile or cause unnecessary copying. Clone the key for the caller and keep an owned copy in the cache. Also, avoid lock re-acquisition and manual drop.
Apply this diff:
- fn get_or_cache_public_key(&self, pub_key_bytes: &[u8]) -> Result<PublicKey, BLSError> {
- // Check cache first
- let mut cache = self.point_cache.write();
- if let Some(cached_key) = cache.get(pub_key_bytes) {
- return Ok(*cached_key);
- }
- drop(cache);
-
- // Deserialize the public key
- let pub_key =
- PublicKey::from_bytes(pub_key_bytes).map_err(|_| BLSError::PublicKeyDeserialization)?;
-
- // Cache it
- let mut cache = self.point_cache.write();
- cache.insert(pub_key_bytes.to_vec(), pub_key);
- drop(cache);
-
- Ok(pub_key)
- }
+ fn get_or_cache_public_key(&self, pub_key_bytes: &[u8]) -> Result<PublicKey, BLSError> {
+ // Fast path: refresh LRU on hit and clone for the caller
+ if let Some(pk) = self.point_cache.write().get(pub_key_bytes).cloned() {
+ return Ok(pk);
+ }
+ // Miss: deserialize outside of locks
+ let pk =
+ PublicKey::from_bytes(pub_key_bytes).map_err(|_| BLSError::PublicKeyDeserialization)?;
+ // Insert and return a clone
+ self.point_cache.write().insert(pub_key_bytes.to_vec(), pk.clone());
+ Ok(pk)
+ }
🤖 Prompt for AI Agents
In blssig/src/verifier/mod.rs around lines 94 to 113, the cache path incorrectly
dereferences and moves a PublicKey (assuming Copy) and reacquires the lock;
change to acquire the write lock once, check for an existing key and if present
return a cloned key, otherwise deserialize into an owned PublicKey, insert it
into the cache, and return a clone for the caller; avoid manual drop and
duplicate lock acquisition by using a single write lock or using the map entry
API so the cached value remains owned while callers receive a cloned copy.
Actionable comments posted: 2
♻️ Duplicate comments (3)
blssig/src/bdn/mod.rs (3)
17-26: Avoid extra alloc; accept Vec and store as-isReduces copies and gives callers control over ownership. This also aligns with a prior suggestion in the thread.
- pub fn new(pub_keys: &[PublicKey]) -> Result<Self, BLSError> { - if pub_keys.is_empty() { + pub fn new(pub_keys: Vec<PublicKey>) -> Result<Self, BLSError> { + if pub_keys.is_empty() { return Err(BLSError::EmptyPublicKeys); } - - Ok(Self { - pub_keys: pub_keys.to_vec(), - }) + Ok(Self { pub_keys }) }
4-7: Security: Placeholder aggregation must be feature-gated or error outThis module explicitly returns incorrect results vs go-f3. Ship this behind a non-default feature (e.g., "insecure_bdn") or hard-error in non-test builds to avoid accidental use in production.
Apply at top of the file:
+// Prevent accidental production use of placeholder aggregation. +#[cfg(all(not(test), not(feature = "insecure_bdn")))] +compile_error!("BDN aggregation not implemented. Enable 'insecure_bdn' for test-only placeholder aggregation.");
28-48: BDN weighting missing; also enforce signer↔sig alignment
- Today this sums plain BLS signatures; BDN requires per-key coefficients.
- Len check doesn’t ensure correct pairing or ordering of (pub_key, sig).
Consider (a) accepting zipped pairs or an index map, and (b) gating placeholder aggregation behind a feature. Minimal signature tweak:
- pub fn aggregate_sigs(&self, sigs: &[Signature]) -> Result<Signature, BLSError> { + pub fn aggregate_sigs(&self, pairs: &[(PublicKey, Signature)]) -> Result<Signature, BLSError> { - if sigs.len() != self.pub_keys.len() { + if pairs.len() != self.pub_keys.len() { return Err(BLSError::LengthMismatch { pub_keys: self.pub_keys.len(), - sigs: sigs.len(), + sigs: pairs.len(), }); } - let mut agg_point = G2Projective::identity(); - for sig in sigs { - let sig: G2Affine = (*sig).into(); - agg_point += sig; - } + // TODO(BDN): derive coefficient per pubkey and weight each signature accordingly. + let mut agg_point = G2Projective::identity(); + for (_pk, sig) in pairs { + let sig_aff: G2Affine = (*sig).into(); + agg_point += sig_aff; + } let agg_sig: Signature = agg_point.into(); Ok(agg_sig) }
🧹 Nitpick comments (3)
blssig/src/bdn/mod.rs (3)
12-15: Derive Debug/Clone for ergonomicsHelps logging and tests; no API risk.
- pub struct BDNAggregation { + #[derive(Debug, Clone)] + pub struct BDNAggregation {
50-63: Precompute and cache BDN coefficients in the contextWhen BDN lands, compute coefficients once in new() and store alongside pub_keys to avoid O(n) re-derivation per call; also canonicalize the signer set (stable sort by bytes) to make coefficients order-agnostic.
38-43: Minor: prefer iterator folds for clarityEquivalent, slightly cleaner.
- let mut agg_point = G2Projective::identity(); - for sig in sigs { - let sig: G2Affine = (*sig).into(); - agg_point += sig; - } + let agg_point = sigs.iter().fold(G2Projective::identity(), |acc, s| { + let a: G2Affine = (*s).into(); + acc + a + });- let mut agg_point = G1Projective::identity(); - for pub_key in &self.pub_keys { - let pub_key_point: G1Projective = (*pub_key).into(); - agg_point += pub_key_point; - } + let agg_point = self.pub_keys.iter().fold(G1Projective::identity(), |acc, pk| { + let p: G1Projective = (*pk).into(); + acc + p + });Also applies to: 54-58
📒 Files selected for processing (2)
blssig/src/bdn/mod.rs(1 hunks)blssig/src/verifier/mod.rs(1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- blssig/src/verifier/mod.rs
```rust
use crate::verifier::BLSError;
use bls_signatures::{PublicKey, Signature};
use bls12_381::{G1Projective, G2Affine, G2Projective};
```
|
💡 Verification agent
❓ Verification inconclusive
Verify cross-crate conversions (bls_signatures <-> bls12_381)
You’re converting bls_signatures::{Signature,PublicKey} into bls12_381::{G2Affine,G1Projective}. These From/Into impls may not exist; mixing curve types across crates often fails.
Run to confirm conversions or existing adapters:
🏁 Script executed:
#!/bin/bash
# Look for any local adapters/impls bridging bls_signatures <-> bls12_381
rg -nP -C2 'impl\s+(From|Into)<\s*bls_signatures::(Signature|PublicKey)\s*>\s+for\s+bls12_381::(G[12](Affine|Projective))'
rg -nP -C2 'impl\s+(From|Into)<\s*bls12_381::(G[12](Affine|Projective))\s*>\s+for\s+bls_signatures::(Signature|PublicKey)'
# Scan for helper conversion fns
rg -nP -C2 '(to|into|as)_(g1|g2)_(proj|affine)|from_(g1|g2)'

Length of output: 0
Add missing conversions between bls_signatures and bls12_381 types
No From/Into impls or helper functions exist for converting bls_signatures::PublicKey → G1Projective or bls_signatures::Signature → G2Affine/G2Projective. Define explicit From/Into impls or use crate-provided methods to bridge these types in blssig/src/bdn/mod.rs.
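One hedged way to bridge the types without waiting for `From`/`Into` impls is to round-trip through the compressed encodings, assuming `bls_signatures::Serialize` exposes the 48/96-byte compressed bytes:

```rust
use bls12_381::{G1Affine, G2Affine};
use bls_signatures::{PublicKey, Serialize, Signature};

/// Sketch: bridge a high-level public key to a G1 point via its compressed bytes.
fn pub_key_to_g1(pk: &PublicKey) -> Option<G1Affine> {
    let bytes: [u8; 48] = pk.as_bytes().try_into().ok()?;
    // `from_compressed` returns a CtOption; convert to Option for ergonomic use.
    Option::from(G1Affine::from_compressed(&bytes))
}

/// Sketch: same idea for signatures on G2.
fn signature_to_g2(sig: &Signature) -> Option<G2Affine> {
    let bytes: [u8; 96] = sig.as_bytes().try_into().ok()?;
    Option::from(G2Affine::from_compressed(&bytes))
}
```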
```rust
/// Aggregates signatures using standard BLS aggregation
/// TODO: Implement BDN aggregation scheme: https://github.com/ChainSafe/rust-f3/issues/29
pub fn aggregate_sigs(&self, sigs: &[Signature]) -> Result<Signature, BLSError> {
    if sigs.len() != self.pub_keys.len() {
        return Err(BLSError::LengthMismatch {
            pub_keys: self.pub_keys.len(),
            sigs: sigs.len(),
        });
    }

    // Standard BLS aggregation
    let mut agg_point = G2Projective::identity();
    for sig in sigs {
        let sig: G2Affine = (*sig).into();
        agg_point += sig;
    }

    // Convert back to Signature
    let agg_sig: Signature = agg_point.into();
    Ok(agg_sig)
}

/// Aggregates public keys using standard BLS aggregation
/// TODO: Implement BDN aggregation scheme: https://github.com/ChainSafe/rust-f3/issues/29
pub fn aggregate_pub_keys(&self) -> Result<PublicKey, BLSError> {
    // Standard BLS aggregation
    let mut agg_point = G1Projective::identity();
    for pub_key in &self.pub_keys {
        let pub_key_point: G1Projective = (*pub_key).into();
        agg_point += pub_key_point;
    }

    // Convert back to PublicKey
    let agg_pub_key: PublicKey = agg_point.into();
    Ok(agg_pub_key)
}
```
💡 Verification agent
🧩 Analysis chain
Validate crate features and BLSError variants exist
If you adopt the feature-gate/error approach, ensure a matching BLSError variant or compile-time guard exists, and that tests toggle the feature explicitly.
🏁 Script executed:
#!/bin/bash
# Verify BLSError variants and any feature flags across workspace
fd Cargo.toml -a | xargs -I{} rg -n 'features|insecure_bdn|bdn' {}
rg -nP -C2 'enum\s+BLSError\b'
rg -n 'BDN|insecure|rogue' -g '**/*.rs' -C2

Length of output: 5304
🏁 Script executed:
#!/bin/bash
# Show BLSError enum variants in verifier module
sed -n '17,60p' blssig/src/verifier/mod.rs
# Show [features] section around line 35 in workspace Cargo.toml
sed -n '33,60p' Cargo.toml

Length of output: 1717
Add feature flag or error variant for unimplemented BDN aggregation
No insecure_bdn/bdn feature exists in Cargo.toml, and there’s no BLSError variant signaling “BDN not yet implemented.” Wrap the fallback implementations of aggregate_sigs/aggregate_pub_keys behind a feature guard (e.g. insecure_bdn) or introduce a dedicated error variant (e.g. BLSError::UnimplementedBDNAggregation) and update tests to toggle and cover both paths.
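A sketch of option (B); `UnimplementedBDNAggregation`, `BLSErrorSketch`, and the `insecure_bdn` feature name are assumptions, shown as a standalone snippet rather than the crate's actual code:

```rust
// Sketch: fail closed until BDN coefficient weighting lands.
#[derive(Debug, thiserror::Error)]
pub enum BLSErrorSketch {
    #[error("BDN aggregation not implemented; enable `insecure_bdn` for test-only plain aggregation")]
    UnimplementedBDNAggregation,
}

/// Guard to call at the top of aggregate_sigs / aggregate_pub_keys.
pub fn guard_bdn() -> Result<(), BLSErrorSketch> {
    if cfg!(feature = "insecure_bdn") {
        Ok(()) // caller may proceed with the placeholder aggregation in tests
    } else {
        Err(BLSErrorSketch::UnimplementedBDNAggregation)
    }
}
```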
🤖 Prompt for AI Agents
In blssig/src/bdn/mod.rs around lines 28 to 63, the current methods provide
fallback "standard BLS" aggregation but the crate lacks an `insecure_bdn`/`bdn`
Cargo feature and BLSError has no variant indicating BDN aggregation is
unimplemented; update the code to either (A) gate these fallback implementations
behind a feature flag (e.g. #[cfg(feature = "insecure_bdn")] for the methods and
add that feature to Cargo.toml) or (B) replace the fallback behavior with an
explicit error return (add BLSError::UnimplementedBDNAggregation or similar) and
return that error when the feature is not enabled; also update/add tests to
cover both paths (feature-enabled aggregation and feature-disabled error) and
ensure Cargo.toml and CI test matrix toggle the feature accordingly.
Actionable comments posted: 1
♻️ Duplicate comments (2)
blssig/src/verifier/mod.rs (2)
16-34: Preserve source deserialization errors for debuggingCarry the underlying
bls_signatures::Errorin the error variants and map it through.- #[error("failed to deserialize public key")] - PublicKeyDeserialization, + #[error("failed to deserialize public key: {0}")] + PublicKeyDeserialization(bls_signatures::Error), - #[error("failed to deserialize signature")] - SignatureDeserialization, + #[error("failed to deserialize signature: {0}")] + SignatureDeserialization(bls_signatures::Error),And:
- PublicKey::from_bytes(pub_key).map_err(|_| BLSError::PublicKeyDeserialization) + PublicKey::from_bytes(pub_key).map_err(BLSError::PublicKeyDeserialization)- Signature::from_bytes(sig).map_err(|_| BLSError::SignatureDeserialization) + Signature::from_bytes(sig).map_err(BLSError::SignatureDeserialization)
94-112: Fix cache return: assumes PublicKey: Copy; clone on hit/miss and avoid manual dropsThis path dereferences
&PublicKey(*cached_key) and returns a movedtyped_pub_keyafter inserting it, which requiresPublicKey: Copy. Prefer cloning to be robust and simpler locking.Apply:
- fn get_or_cache_public_key(&self, pub_key: &[u8]) -> Result<PublicKey, BLSError> { - // Check cache first - let mut cache = self.point_cache.write(); - if let Some(cached_key) = cache.get(pub_key) { - return Ok(*cached_key); - } - drop(cache); - - // Deserialize the public key - let typed_pub_key = self.deserialize_public_key(pub_key)?; - - // Cache it - let mut cache = self.point_cache.write(); - cache.insert(pub_key.to_vec(), typed_pub_key); - drop(cache); - - Ok(typed_pub_key) - } + fn get_or_cache_public_key(&self, pub_key: &[u8]) -> Result<PublicKey, BLSError> { + if let Some(pk) = self.point_cache.write().get(pub_key).cloned() { + return Ok(pk); + } + let pk = self.deserialize_public_key(pub_key)?; + self.point_cache.write().insert(pub_key.to_vec(), pk.clone()); + Ok(pk) + }
🧹 Nitpick comments (2)
blssig/src/verifier/mod.rs (2)
145-158: Tighten loop: preallocate, zip inputs, and avoid indexingSlightly cleaner and avoids repeated indexing bounds checks.
- let mut typed_pub_keys = vec![]; - let mut typed_sigs = vec![]; - for (i, pub_key) in pub_keys.iter().enumerate() { - if pub_key.0.len() != BLS_PUBLIC_KEY_LENGTH { - return Err(BLSError::InvalidPublicKeyLength(pub_key.0.len())); - } - if sigs[i].len() != BLS_SIGNATURE_LENGTH { - return Err(BLSError::InvalidSignatureLength(sigs[i].len())); - } - - typed_pub_keys.push(self.get_or_cache_public_key(&pub_key.0)?); - typed_sigs.push(self.deserialize_signature(&sigs[i])?); - } + let mut typed_pub_keys = Vec::with_capacity(pub_keys.len()); + let mut typed_sigs = Vec::with_capacity(sigs.len()); + for (pk, sig_bytes) in pub_keys.iter().zip(sigs) { + if pk.0.len() != BLS_PUBLIC_KEY_LENGTH { + return Err(BLSError::InvalidPublicKeyLength(pk.0.len())); + } + if sig_bytes.len() != BLS_SIGNATURE_LENGTH { + return Err(BLSError::InvalidSignatureLength(sig_bytes.len())); + } + typed_pub_keys.push(self.get_or_cache_public_key(&pk.0)?); + typed_sigs.push(self.deserialize_signature(sig_bytes)?); + }
53-61: Constants look goodCentralizing lengths and cache cap is correct. Consider a brief comment estimating memory for MAX=10_000 (order of MiBs) and/or a builder to override in tests.
📒 Files selected for processing (1)
blssig/src/verifier/mod.rs(1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
blssig/src/verifier/mod.rs (2)
blssig/src/bdn/mod.rs (1)
new(18-26)gpbft/src/api.rs (3)
verify(22-22)aggregate(32-32)verify_aggregate(45-50)
🔇 Additional comments (3)
blssig/src/verifier/mod.rs (3)
36-42: Doc claims full parity with go-f3 BDN; clarify limitationComment states this matches go-f3/blssig and uses BDN. Given BDN weighting isn’t implemented yet in this PR, please update the docs (and/or add a FIXME) to avoid misleading users. Optionally gate the BDN path behind a feature flag until complete.
85-92: Single-verify path looks solidGood use of stack arrays to avoid heap allocs; length checks cover the common pitfalls.
184-188: BDN weighting readiness check
aggregate_pub_keys()assumes BDN weighting. If weighting isn’t implemented yet inbdn,verify_aggregatewill not match go-f3 aggregates. Suggest a TODO here mirroring the crate-level note and an integration test vector once implemented.
blssig/src/verifier/mod.rs
Outdated
```rust
let bdn = BDNAggregation::new(&typed_pub_keys)?;
let agg_sig = bdn.aggregate_sigs(&typed_sigs)?;
Ok(agg_sig.as_bytes())
}
```
Return type mismatch: convert aggregated signature bytes to Vec<u8>
as_bytes() returns &[u8]; the function returns Vec<u8>.
- Ok(agg_sig.as_bytes())
+ Ok(agg_sig.as_bytes().to_vec())📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| let bdn = BDNAggregation::new(&typed_pub_keys)?; | |
| let agg_sig = bdn.aggregate_sigs(&typed_sigs)?; | |
| Ok(agg_sig.as_bytes()) | |
| } | |
| let bdn = BDNAggregation::new(&typed_pub_keys)?; | |
| let agg_sig = bdn.aggregate_sigs(&typed_sigs)?; | |
| Ok(agg_sig.as_bytes().to_vec()) | |
| } |
🤖 Prompt for AI Agents
In blssig/src/verifier/mod.rs around lines 160 to 163, the code returns
agg_sig.as_bytes() which yields a &[u8] but the function signature expects a
Vec<u8>; change the return to convert the slice into an owned vector (e.g.,
Ok(agg_sig.as_bytes().to_vec()) or .to_owned()) so the function returns a
Vec<u8> as declared.
Actionable comments posted: 1
♻️ Duplicate comments (2)
blssig/src/verifier/mod.rs (2)
95-105: Fix Copy/move bug in cache; return cloned key and avoid use-after-move
LruCache::getreturns&PublicKey;return Ok(*cached)assumesPublicKey: Copy. Also, returningtyped_pub_keyafter inserting moves it into the cache first. Clone for the caller and keep an owned copy in the cache.Apply:
- fn get_or_cache_public_key(&self, pub_key: &[u8]) -> Result<PublicKey, BLSError> { - // Check cache first - if let Some(cached) = self.point_cache.write().get(pub_key) { - return Ok(*cached); - } - - // Deserialize and cache - let typed_pub_key = self.deserialize_public_key(pub_key)?; - self.point_cache.write().insert(pub_key.to_vec(), typed_pub_key); - Ok(typed_pub_key) - } + fn get_or_cache_public_key(&self, pub_key: &[u8]) -> Result<PublicKey, BLSError> { + // Fast path: refresh LRU on hit; clone for caller + if let Some(pk) = self.point_cache.write().get(pub_key).cloned() { + return Ok(pk); + } + // Miss: deserialize outside of locks + let pk = self.deserialize_public_key(pub_key)?; + // Insert and return a clone + self.point_cache.write().insert(pub_key.to_vec(), pk.clone()); + Ok(pk) + }
153-156: Return owned Vec from aggregate()The function returns Vec but
as_bytes()yields&[u8].- Ok(agg_sig.as_bytes()) + Ok(agg_sig.as_bytes().to_vec())
🧹 Nitpick comments (2)
blssig/src/verifier/mod.rs (2)
138-151: Tighten loop: preallocate and iterate with zip to avoid indexing- let mut typed_pub_keys = vec![]; - let mut typed_sigs = vec![]; - for (i, pub_key) in pub_keys.iter().enumerate() { + let mut typed_pub_keys = Vec::with_capacity(pub_keys.len()); + let mut typed_sigs = Vec::with_capacity(sigs.len()); + for (pub_key, sig) in pub_keys.iter().zip(sigs) { if pub_key.0.len() != BLS_PUBLIC_KEY_LENGTH { return Err(BLSError::InvalidPublicKeyLength(pub_key.0.len())); } - if sigs[i].len() != BLS_SIGNATURE_LENGTH { - return Err(BLSError::InvalidSignatureLength(sigs[i].len())); + if sig.len() != BLS_SIGNATURE_LENGTH { + return Err(BLSError::InvalidSignatureLength(sig.len())); } - typed_pub_keys.push(self.get_or_cache_public_key(&pub_key.0)?); - typed_sigs.push(self.deserialize_signature(&sigs[i])?); + typed_sigs.push(self.deserialize_signature(sig)?); }
59-67: Document expected cache memory footprint

Add a short comment estimating RAM usage at capacity (entries × key bytes × value bytes + overhead) to justify
`MAX_POINT_CACHE_SIZE` and aid tuning later.
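A sketch of such a note; the per-entry sizes are rough assumptions rather than measured values:

```rust
/// Maximum number of cached public key points.
///
/// Rough budget (assumed sizes): each entry holds the 48-byte serialized key used as
/// the map key plus a decoded `PublicKey` point (roughly 100-200 bytes depending on
/// the implementation), so 10_000 entries stay in the low single-digit MiB range
/// including LRU/map overhead.
const MAX_POINT_CACHE_SIZE: usize = 10_000;
```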
📒 Files selected for processing (2)
blssig/src/bdn/mod.rs(1 hunks)blssig/src/verifier/mod.rs(1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- blssig/src/bdn/mod.rs
🧰 Additional context used
🧬 Code graph analysis (1)
blssig/src/verifier/mod.rs (3)
blssig/src/bdn/mod.rs (1)
new(18-26)lightclient/src/lib.rs (1)
new(29-35)gpbft/src/api.rs (3)
verify(22-22)aggregate(32-32)verify_aggregate(45-50)
🔇 Additional comments (4)
blssig/src/verifier/mod.rs (4)
53-58: Constants match BLS12-381 compressed sizes
69-92: verify_single implementation looks solid
123-136: Input validation and cardinality checks are correct
119-121: Trait forwarding is clean
```rust
let bdn = BDNAggregation::new(typed_pub_keys)?;
let agg_pub_key = bdn.aggregate_pub_keys()?;
let agg_pub_key_bytes = PubKey(agg_pub_key.as_bytes().to_vec());
self.verify_single(&agg_pub_key_bytes, payload, agg_sig)
}
```
💡 Verification agent
🧩 Analysis chain
BDN weighting likely incomplete — verify_aggregate may mismatch go-f3
If BDNAggregation doesn’t apply BDN coefficients, aggregated verification will diverge from go-f3. Confirm and gate with a clear TODO/warn until weighting lands.
Run to check for TODOs in BDN:
🏁 Script executed:
#!/bin/bash
rg -n -C2 -i 'BDN|coefficient|weight' blssig/src/bdn
rg -n -C2 -i 'TODO|FIXME' blssig/src/bdn

Length of output: 2677
Gate aggregate verification until BDN weighting is implemented
At blssig/src/verifier/mod.rs:177–181, verify_aggregate calls BDNAggregation::aggregate_pub_keys(), which per blssig/src/bdn/mod.rs:6 currently uses plain BLS aggregation without coefficient weighting and will diverge from go-f3. Introduce an explicit guard (error or warning) before allowing aggregate verification until the TODOs at blssig/src/bdn/mod.rs:29,51 are resolved.
🤖 Prompt for AI Agents
In blssig/src/verifier/mod.rs around lines 177–181, the code proceeds to
aggregate public keys via BDNAggregation::aggregate_pub_keys() even though
blssig/src/bdn/mod.rs currently does plain BLS aggregation (no coefficient
weighting) and will diverge from go-f3; add an explicit guard here that rejects
aggregate verification (return an Err with a clear message or log and return
Err) until the weighting TODOs at blssig/src/bdn/mod.rs lines 29 and 51 are
implemented, e.g., check a feature flag or a boolean method on BDNAggregation
and if weighting is not implemented return a descriptive error like "aggregate
verification disabled: BDN weighting not implemented" instead of calling
aggregate_pub_keys and proceeding to verify_single.
Actionable comments posted: 0
♻️ Duplicate comments (2)
blssig/src/verifier/mod.rs (2)
96-108: Fix non-Copy misuse and move-after-insert in cache pathPublicKey is unlikely Copy; returning
*cachedand then insertingtyped_pub_keybefore returning it risks either compile errors or unnecessary moves. Clone on hit; insert a clone on miss and return owned.Apply:
- fn get_or_cache_public_key(&self, pub_key: &[u8]) -> Result<PublicKey, BLSError> { - // Check cache first - if let Some(cached) = self.point_cache.write().get(pub_key) { - return Ok(*cached); - } - - // Deserialize and cache - let typed_pub_key = self.deserialize_public_key(pub_key)?; - self.point_cache - .write() - .insert(pub_key.to_vec(), typed_pub_key); - Ok(typed_pub_key) - } + fn get_or_cache_public_key(&self, pub_key: &[u8]) -> Result<PublicKey, BLSError> { + // Fast path: refresh LRU and clone for caller + if let Some(pk) = self.point_cache.write().get(pub_key).cloned() { + return Ok(pk); + } + // Miss: deserialize outside lock, insert, return owned + let pk = self.deserialize_public_key(pub_key)?; + self.point_cache.write().insert(pub_key.to_vec(), pk.clone()); + Ok(pk) + }
156-159: Return Vec — not a slice
as_bytes()yields&[u8]; function returnsVec<u8>.- Ok(agg_sig.as_bytes()) + Ok(agg_sig.as_bytes().to_vec())
🧹 Nitpick comments (5)
deny.toml (2)
5-17: Scope CDLA to known crates; add Unicode-DFS-2016; keep allowlist minimal
- Prefer scoping CDLA-Permissive-2.0 via
exceptionsfor specific crates (e.g., webpki-roots) instead of globally allowing it. This aligns with cargo-deny guidance and reduces policy surface. (embarkstudios.github.io)- Many crates use Unicode-DFS-2016; consider adding it alongside Unicode-3.0 to avoid false negatives. Both are valid SPDX IDs. (spdx.org)
Suggested diff within this hunk:
```diff
 allow = [
     "Apache-2.0",
     "Apache-2.0 WITH LLVM-exception",
     "BSD-2-Clause",
     "CC0-1.0",
     "MIT",
-    "Unicode-3.0",
+    "Unicode-3.0",
+    "Unicode-DFS-2016",
     "Unlicense",
     "Zlib",
-    "CDLA-Permissive-2.0",
     "ISC",
     "BSD-3-Clause",
 ]
```

And add (outside this hunk) targeted exceptions if applicable:
```toml
[licenses]
exceptions = [
    { allow = ["CDLA-Permissive-2.0"], crate = "webpki-roots" },
]
```

If you actually don't depend on any CDLA-licensed crate, omit the exception entirely. Want me to scan Cargo.lock and propose a precise exceptions list?
21-21: Don’t blanket-allow duplicate crate versions
`multiple-versions = "allow"` suppresses duplicate-version findings and can hide bloat. Prefer `warn` (the default) or `deny` plus a `skip`/`skip-tree` entry for unavoidable duplicates. (embarkstudios.github.io)

Minimal change:
```diff
-[bans]
-multiple-versions = "allow"
+[bans]
+multiple-versions = "warn"
```

Optional hardening (outside this hunk):
```toml
[bans]
wildcards = "deny"
```

I can generate a script to list actual duplicate crates and draft a `skip` list; should I proceed?

blssig/src/verifier/mod.rs (3)
141-154: Preallocate and iterate with zip to avoid index-based lookups

Minor perf/readability cleanups.
```diff
-        let mut typed_pub_keys = vec![];
-        let mut typed_sigs = vec![];
-        for (i, pub_key) in pub_keys.iter().enumerate() {
+        let mut typed_pub_keys = Vec::with_capacity(pub_keys.len());
+        let mut typed_sigs = Vec::with_capacity(sigs.len());
+        for (pub_key, sig_bytes) in pub_keys.iter().zip(sigs) {
             if pub_key.0.len() != BLS_PUBLIC_KEY_LENGTH {
                 return Err(BLSError::InvalidPublicKeyLength(pub_key.0.len()));
             }
-            if sigs[i].len() != BLS_SIGNATURE_LENGTH {
-                return Err(BLSError::InvalidSignatureLength(sigs[i].len()));
+            if sig_bytes.len() != BLS_SIGNATURE_LENGTH {
+                return Err(BLSError::InvalidSignatureLength(sig_bytes.len()));
             }
             typed_pub_keys.push(self.get_or_cache_public_key(&pub_key.0)?);
-            typed_sigs.push(self.deserialize_signature(&sigs[i])?);
+            typed_sigs.push(self.deserialize_signature(sig_bytes)?);
         }
```
171-178: Tiny nit: preallocate signers capacity

```diff
-        let mut typed_pub_keys = vec![];
+        let mut typed_pub_keys = Vec::with_capacity(signers.len());
```
180-184: BDN weighting gap — set expectations or gate aggregate verification

Aggregate verification depends on BDN coefficient weighting in `BDNAggregation`. If it's still plain aggregation, results may diverge from go-f3. Add an explicit TODO/warn or feature-gated guard until weighting lands.

```diff
-        let bdn = BDNAggregation::new(typed_pub_keys)?;
+        // TODO: remove once BDN coefficient weighting is implemented in crate::bdn.
+        let bdn = BDNAggregation::new(typed_pub_keys)?;
```

Would you like a follow-up patch that adds a `tracing::warn!` here behind a `bdn-incomplete` feature?
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- blssig/src/bdn/mod.rs (1 hunks)
- blssig/src/verifier/mod.rs (1 hunks)
- deny.toml (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- blssig/src/bdn/mod.rs
🧰 Additional context used
🧬 Code graph analysis (1)
blssig/src/verifier/mod.rs (4)
blssig/src/bdn/mod.rs (1)
- new (18-24)

lightclient/src/lib.rs (1)
- new (29-35)

certs/src/lib.rs (4)
- new (97-124)
- verify (703-710)
- aggregate (712-718)
- verify_aggregate (720-727)

gpbft/src/api.rs (3)
- verify (22-22)
- aggregate (32-32)
- verify_aggregate (45-50)
LGTM, thanks!
Add F3 light client functionality
Usage examples:
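A hypothetical end-to-end sketch of driving the light client is shown below; the endpoint URL, network name, instance number, and method signatures are illustrative assumptions rather than the crate's confirmed API.

```rust
// Hypothetical usage sketch; all names and signatures are assumptions.
use lightclient::LightClient;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Point the client at an F3-enabled Filecoin RPC endpoint (assumed URL).
    let mut client = LightClient::new("https://node.example/rpc/v1", "mainnet")?;

    // Fetch the power table for a trusted starting instance (assumed number).
    client.initialize(1000)?;

    // Retrieve the next finality certificate and validate it against the
    // tracked power table state.
    let cert = client.get_certificate(1000)?;
    client.validate_certificates(&[cert])?;

    Ok(())
}
```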
Main changes
New crates
- lightclient: main crate providing light client functionality for the F3 fast finality protocol, allowing validation of F3 finality certificates without running a full F3 node
- rpc: simple HTTP JSON-RPC client for interacting with F3 endpoints, supporting certificate and power table retrieval
- merkle: Merkle tree implementation following the go-f3/merkle reference implementation, used for ECChain key generation (instead of the previous incompatible format); a conceptual sketch follows this list
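For intuition about the merkle crate's role, here is a generic binary Merkle root over a list of byte strings. It assumes SHA-256, hashed leaves, and duplication of the last node on odd levels; the actual go-f3/merkle scheme differs in its hashing and padding details, so treat this as a conceptual sketch only.

```rust
use sha2::{Digest, Sha256};

// Combine two child hashes into a parent hash.
fn hash_pair(left: &[u8; 32], right: &[u8; 32]) -> [u8; 32] {
    let mut h = Sha256::new();
    h.update(left);
    h.update(right);
    h.finalize().into()
}

/// Root over `values`, or `None` for an empty input.
fn merkle_root(values: &[Vec<u8>]) -> Option<[u8; 32]> {
    if values.is_empty() {
        return None;
    }
    // Hash each value into a leaf.
    let mut level: Vec<[u8; 32]> = values.iter().map(|v| Sha256::digest(v).into()).collect();
    // Combine pairwise until a single root remains, duplicating the last
    // node on odd-sized levels.
    while level.len() > 1 {
        level = level
            .chunks(2)
            .map(|pair| match pair {
                [l, r] => hash_pair(l, r),
                [l] => hash_pair(l, l),
                _ => unreachable!(),
            })
            .collect();
    }
    Some(level[0])
}
```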
Additional core implementations

- BLSVerifier: production BLS signature verifier following the go-f3/blssig reference implementation
- Payload.serialize_for_signing: serialization matching the go-f3 reference format
- Tipset.serialize_for_signing: serialization matching the go-f3 reference format
- PowerEntries.serialize_cbor: CBOR serialization matching the go-f3 reference format; the previous PowerEntry serialization was incompatible and so was removed
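A rough sketch of how these helpers are meant to feed aggregate signature verification during certificate validation; the trait below is a stand-in for the project's verifier abstraction, and its exact signatures are assumptions rather than code from this PR.

```rust
// Stand-in trait; the real verifier abstraction and its signatures are assumed here.
pub trait AggregateVerifier {
    type Error;
    fn verify_aggregate(
        &self,
        payload: &[u8],
        aggregate_sig: &[u8],
        signer_keys: &[Vec<u8>],
    ) -> Result<(), Self::Error>;
}

/// Check one certificate's aggregate BLS signature over the go-f3-compatible
/// signing payload (e.g. the output of Payload.serialize_for_signing).
pub fn check_certificate_signature<V: AggregateVerifier>(
    verifier: &V,
    payload_bytes: &[u8],
    aggregate_sig: &[u8],
    signer_keys: &[Vec<u8>],
) -> Result<(), V::Error> {
    verifier.verify_aggregate(payload_bytes, aggregate_sig, signer_keys)
}
```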
Missing functionality

The current BLS implementation uses standard aggregation rather than the BDN (Boneh-Drijvers-Neven) scheme that go-f3 uses to prevent rogue public-key attacks. The expected signature verification error is temporarily silenced with warnings until the BDN scheme is implemented.

Summary by CodeRabbit
New Features
Improvements
Tests
Chores