Implement BDN signature aggregation scheme #34
Conversation
Walkthrough

Adds a BDN aggregation implementation: deterministic Blake2x-based per-key coefficients and precomputed terms in `BDNAggregation`, changes aggregation and verifier APIs to accept a power table plus signer indices, adds caching and new error variants, updates callers/tests, workspace dependencies, and three config dictionary entries.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant Caller as Caller
    participant Certs as certs::verify_signature
    participant Ver as blssig::BLSVerifier
    participant BDN as blssig::bdn::BDNAggregation
    Caller->>Certs: verify_signature(payload, agg_sig, power_table, signer_indices)
    Certs->>Ver: verify_aggregate(payload, agg_sig, power_table, signer_indices)
    activate Ver
    Ver->>Ver: get_or_cache_bdn(power_table)
    alt cache miss
        Ver->>BDN: BDNAggregation::new(power_table)
        activate BDN
        Note right of BDN #D1FFD6: Blake2xs XOF -> per-key coefficients<br/>Convert chunks -> Scalars (validate)<br/>Compute terms = (pk * coef) + pk
        BDN-->>Ver: BDNAggregation{coefficients, terms}
        deactivate BDN
    end
    Ver->>BDN: aggregate_pub_keys(signer_indices)
    BDN-->>Ver: aggregated_pub_key
    Ver->>BDN: aggregate_sigs(indices, sigs)
    BDN-->>Ver: aggregated_signature
    Ver-->>Certs: Ok / Err(BLSError)
    deactivate Ver
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Actionable comments posted: 4
🧹 Nitpick comments (6)
blssig/src/bdn/mod.rs (2)
31-39: Precompute terms once; OK. Consider storing projective terms for perf.

Logic is correct. For speed (1k+ keys), store G1Projective terms to avoid repeated decode per aggregation.
120-129: Minor: preallocate and store projective for speed.

Allocate with capacity and consider storing G1Projective to avoid re-decode in aggregate.

```diff
-pub fn calc_terms(pub_keys: &[PublicKey], coefficients: &[Scalar]) -> Vec<PublicKey> {
-    let mut terms = vec![];
+pub fn calc_terms(pub_keys: &[PublicKey], coefficients: &[Scalar]) -> Vec<PublicKey> {
+    let mut terms = Vec::with_capacity(pub_keys.len());
     for (i, pub_key) in pub_keys.iter().enumerate() {
         let pub_key_point: G1Projective = (*pub_key).into();
         let pub_c = pub_key_point * coefficients[i];
         let term = pub_c + pub_key_point;
         terms.push(term.into());
     }
     terms
 }
```

certs/src/lib.rs (1)
296-305: Correct: pass full power table and signer indices.

This matches the updated Verifier API and BDN requirements. Minor: this clones all keys; unavoidable given the `&[PubKey]` API.
blssig/src/verifier/mod.rs (3)
36-39: Pre-validate signer_indices and handle u64→usize conversion to avoid panics and wasted work

`signer_indices` is `&[u64]` but indices for slices/Vecs are `usize`; very large u64s can overflow on cast. Also, early range checks avoid building BDNAggregation when indices are out of range and return precise errors.
Apply near Line 180:

```diff
     if signer_indices.is_empty() {
         return Err(BLSError::EmptySigners);
     }
+
+    // Early validation of indices to avoid overflow/panic and wasted work
+    let n = power_table.len();
+    for &idx in signer_indices {
+        // Guard u64 -> usize conversion on 32-bit targets and ensure in-bounds
+        let idx_usize: usize = idx
+            .try_into()
+            .map_err(|_| BLSError::SignerIndexOutOfRange(usize::MAX))?;
+        if idx_usize >= n {
+            return Err(BLSError::SignerIndexOutOfRange(idx_usize));
+        }
+    }
```

Optional: if BDNAggregation::aggregate_pub_keys already performs these checks safely, you can keep one source of truth but still consider the early check to short-circuit and return the intended BLSError variant faster.
Also applies to: 171-179
182-188: Minor: pre-allocate capacity for typed_pub_keys

Small perf win: avoid reallocations when mirroring power_table.

```diff
-    let mut typed_pub_keys = vec![];
+    let mut typed_pub_keys = Vec::with_capacity(power_table.len());
```
191-193: Performance: consider caching BDNAggregation per power table

BDN setup over the full power table is likely the hot path. Cache precomputed state keyed by a stable hash of power_table (e.g., concatenated bytes or a Merkle root) to amortize cost across certificates within the same epoch/validator set. Keep using the point cache for deserialization; this would add another layer for BDN precomputation.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)
.config/rust-f3.dic(1 hunks)Cargo.toml(1 hunks)blssig/Cargo.toml(1 hunks)blssig/src/bdn/mod.rs(3 hunks)blssig/src/verifier/mod.rs(4 hunks)certs/src/lib.rs(2 hunks)gpbft/src/api.rs(1 hunks)
🧰 Additional context used
🧬 Code graph analysis (3)
gpbft/src/api.rs (3)
- blssig/src/verifier/mod.rs (1): verify_aggregate (167-194)
- certs/src/lib.rs (1): verify_aggregate (712-720)
- lightclient/src/lib.rs (1): power_table (39-42)

blssig/src/verifier/mod.rs (1)
- lightclient/src/lib.rs (1): power_table (39-42)

certs/src/lib.rs (1)
- lightclient/src/lib.rs (1): power_table (39-42)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: All lint checks
🔇 Additional comments (6)
.config/rust-f3.dic (1)
45-47: LGTM!

The addition of `coef_i`, `sig_i`, and `pub_key_i` to the dictionary is appropriate for the BDN aggregation implementation, which introduces per-key coefficients and indexed signatures/public keys.

blssig/Cargo.toml (1)
13-13: Workspace dep OK; ensure features come through and pin upstream rev at root.

Enabling blake2 via workspace is fine. Since the root sets features, nothing else needed here. No action in this file.
blssig/src/bdn/mod.rs (1)
41-59: Weighted signature aggregation may not interop; verify expected input.

If upstream producers aggregate unweighted signatures (common BDN flow), scaling sigs by (c_i+1) will produce a different aggregate. Keep this method if you also control aggregation, but ensure verify paths assume unweighted sigs (as implemented elsewhere).
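For reference, the relation at stake can be sketched as follows (notation mine, following the BDN construction, with public keys in G1 and signatures in G2 as in this codebase):

```latex
% Coefficients derived from the key list L only: c_i = H(L, pk_i)
% Both sides of the pairing check must carry the same weights:
\mathrm{apk} = \sum_{i \in S} (c_i + 1)\,\mathrm{pk}_i,
\qquad
\sigma_{\mathrm{agg}} = \sum_{i \in S} (c_i + 1)\,\sigma_i
% Since \sigma_i = \mathrm{sk}_i \cdot H_{G_2}(m), verification reduces to:
e(g_1, \sigma_{\mathrm{agg}}) = e(\mathrm{apk}, H_{G_2}(m))
```

Scaling only one side (keys or signatures) breaks this equality, which is the interop concern above.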
gpbft/src/api.rs (1)
36-46: Docs and signature update LGTM.

Clearer API: requires power_table plus signer_indices for BDN. Good.
certs/src/lib.rs (1)
712-718: MockVerifier updated correctly.

Signature matches trait; tests should compile.
blssig/src/verifier/mod.rs (1)
20-22: Good addition: explicit EmptySigners error

Clearer failure mode for empty signer sets. LGTM.
```
coef_i
sig_i
pub_key_i
+
```
Remove the stray + character.
The + character does not belong in a spell-check dictionary file. Dictionary files typically contain one term per line without special characters.
Apply this diff to remove it:
```diff
-+
```

📝 Committable suggestion
🤖 Prompt for AI Agents
In .config/rust-f3.dic around line 48, remove the stray "+" character that was
accidentally added; open the file, delete the lone "+" on line 48 so the
dictionary contains only valid entries (one term per line), save the file, and
ensure no other stray punctuation remains.
```rust
pub fn calc_coefficients(pub_keys: &[PublicKey]) -> Result<Vec<Scalar>, BLSError> {
    let mut hasher = Blake2xs::new(0xFFFF);

    // Hash all public keys
    for pub_key in pub_keys {
        let bytes = pub_key.as_bytes();
        hasher.update(&bytes);
    }

    // Read 16 bytes per public key
    let mut reader = hasher.finalize_xof();
    let mut output = vec![0u8; pub_keys.len() * 16];
    reader.read(&mut output);

    // Convert every consecutive 16 bytes chunk to a scalar
    let mut coefficients = Vec::with_capacity(pub_keys.len());
    for i in 0..pub_keys.len() {
        let chunk = &output[i * 16..(i + 1) * 16];

        // Convert 16 bytes to 32 bytes, for scalar (pad with zeros)
        let mut bytes_32 = [0u8; 32];
        bytes_32[..16].copy_from_slice(chunk);

        // BLS12-381 scalars expect little-endian byte representation
        let scalar = Scalar::from_bytes(&bytes_32);
        if scalar.is_some().into() {
            coefficients.push(scalar.unwrap());
        } else {
            return Err(BLSError::InvalidScalar);
        }
    }

    Ok(coefficients)
}
```
🛠️ Refactor suggestion | 🟠 Major
🧩 Analysis chain
Coefficient derivation: tighten XOF usage, domain separation, and scalar mapping.
- Use exact out_len, not 0xFFFF.
- Add a domain separation tag.
- Consider including the message per BDN variant (compatibility dependent).
- Derive scalars uniformly via from_bytes_wide with 64 bytes per coeff (128‑bit is OK but sub‑optimal).
Apply:

```diff
-pub fn calc_coefficients(pub_keys: &[PublicKey]) -> Result<Vec<Scalar>, BLSError> {
-    let mut hasher = Blake2xs::new(0xFFFF);
+pub fn calc_coefficients(pub_keys: &[PublicKey]) -> Result<Vec<Scalar>, BLSError> {
+    // 64 bytes per coefficient; uniform reduction via from_bytes_wide
+    let out_len = pub_keys.len() * 64;
+    let mut hasher = Blake2xs::new(out_len);
-    // Hash all public keys
+    // Domain separation + all public keys (compressed)
+    hasher.update(b"F3-BDN-COEFF-V1");
     for pub_key in pub_keys {
         let bytes = pub_key.as_bytes();
         hasher.update(&bytes);
     }
-    // Read 16 bytes per public key
     let mut reader = hasher.finalize_xof();
-    let mut output = vec![0u8; pub_keys.len() * 16];
+    let mut output = vec![0u8; out_len];
     reader.read(&mut output);
-    // Convert every consecutive 16 bytes chunk to a scalar
     let mut coefficients = Vec::with_capacity(pub_keys.len());
-    for i in 0..pub_keys.len() {
-        let chunk = &output[i * 16..(i + 1) * 16];
-
-        // Convert 16 bytes to 32 bytes, for scalar (pad with zeros)
-        let mut bytes_32 = [0u8; 32];
-        bytes_32[..16].copy_from_slice(chunk);
-
-        // BLS12-381 scalars expects little-endian byte representation
-        let scalar = Scalar::from_bytes(&bytes_32);
-        if scalar.is_some().into() {
-            coefficients.push(scalar.unwrap());
-        } else {
-            return Err(BLSError::InvalidScalar);
-        }
+    for i in 0..pub_keys.len() {
+        let chunk = &output[i * 64..(i + 1) * 64];
+        let mut wide = [0u8; 64];
+        wide.copy_from_slice(chunk);
+        let scalar = Scalar::from_bytes_wide(&wide); // uniform mod r
+        coefficients.push(scalar);
     }
     Ok(coefficients)
 }
```

Verification needed: does go-f3 include message in the coefficient hash (L vs L||msg)? Aligning is critical for cross‑impl compatibility.
🌐 Web query:
In go-f3's BDN multi-signature implementation, are coefficients derived from H(L) only, or H(L || msg)? Provide file/line refs if possible.
💡 Result:
Short answer: coefficients are derived from H(L) only (i.e., from the public-key list), not H(L || msg).
Evidence:
- kyber's BDN docs/state that AggregateSignatures uses c = H(pk) (coefficient = hash of the public key / key list). [1]
- go-f3 uses a BDN-style BLS implementation in its blssig module (BDN aggregation is used for multisig). [2][3]
References:
- kyber bdn package docs — AggregateSignatures: "aggregates the signatures using a coefficient for each one of them where c = H(pk)". (pkg: github.com/.../kyber/sign/bdn). [1]
- go-f3 repository — blssig module (uses BDN/BDN-style aggregation). (go-f3/blssig). [2]
- Filecoin / bls/BDN references in Filecoin docs (BLS aggregation / BDN multisig). [3]
If you want I can open the exact kyber bdn.go source and paste the exact file+line reference.
Tighten XOF usage and scalar mapping; maintain BDN compatibility

- Use exact `out_len = pub_keys.len() * 64` and initialize Blake2xs with that.
- Prepend a clear domain-separation tag (e.g. `b"F3-BDN-COEFF-V1"`) before hashing keys.
- Read 64 bytes per key and derive scalars via `Scalar::from_bytes_wide` for uniform reduction.
- Derive coefficients from H(L) only (no message) to match go-f3's BDN (c = H(pk list) only).
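The byte-layout part of this suggestion (exact XOF length, 64 bytes per coefficient) can be sketched with std only; the Blake2Xs output and `Scalar::from_bytes_wide` are stood in by filler bytes and a plain chunk copy, so everything here is illustrative rather than the crate's API:

```rust
fn main() {
    // Suggested layout: exact XOF output length, 64 bytes per coefficient.
    let n_keys: usize = 3;
    let out_len = n_keys * 64;

    // Stand-in for the Blake2Xs XOF output (deterministic filler bytes,
    // NOT real hash output).
    let output: Vec<u8> = (0..out_len).map(|b| (b % 251) as u8).collect();
    assert_eq!(output.len(), out_len);

    // Each 64-byte chunk would feed Scalar::from_bytes_wide for a
    // uniform reduction mod r; here we only show the slicing.
    let wides: Vec<[u8; 64]> = output
        .chunks_exact(64)
        .map(|chunk| {
            let mut wide = [0u8; 64];
            wide.copy_from_slice(chunk);
            wide
        })
        .collect();

    assert_eq!(wides.len(), n_keys);
    println!("{} coefficient inputs of 64 bytes each", wides.len());
}
```

Requesting the exact `out_len` up front also removes the oddity of asking the XOF for 0xFFFF bytes and reading only a prefix.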
🤖 Prompt for AI Agents
In blssig/src/bdn/mod.rs around lines 85-118, tighten XOF usage and scalar
mapping: initialize the Blake2xs XOF to produce out_len = pub_keys.len() * 64
bytes (not 0xFFFF), prepend a domain-separation tag (e.g. b"F3-BDN-COEFF-V1") to
the hasher before updating with each pubkey, finalize the XOF and read 64 bytes
per public key into the output buffer, and for each 64-byte chunk derive the
coefficient using Scalar::from_bytes_wide (instead of padding 16→32 and
from_bytes) so reduction is uniform; return InvalidScalar on any mapping failure
and ensure only H(L) (the hashed list with the domain tag) is used to derive
coefficients (no extra message).
```toml
anyhow = "1"
base32 = "0.5.1"
base64 = "0.22"
blake2 = { git = "https://github.com/huitseeker/hashes.git", branch = "blake2x-pr", features = ["blake2x"] }
```
Pin the Git dependency to a commit for reproducible builds and supply‑chain safety.
Tracking a branch is brittle and risky. Pin to a specific commit (rev) and add a short comment with the PR link.
Apply:
-blake2 = { git = "https://github.com/huitseeker/hashes.git", branch = "blake2x-pr", features = ["blake2x"] }
```diff
-blake2 = { git = "https://github.com/huitseeker/hashes.git", branch = "blake2x-pr", features = ["blake2x"] }
+blake2 = { git = "https://github.com/huitseeker/hashes.git", rev = "<commit-sha>", features = ["blake2x"] } # blake2x PR: https://github.com/RustCrypto/hashes/pull/704
```

Also consider moving this under [patch.crates-io] to scope the override explicitly.
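A sketch of the `[patch.crates-io]` variant mentioned above (the commit hash stays a placeholder, as in the suggestion; note that for Cargo to apply a patch, the fork's package version must match the registry version being overridden):

```toml
# Scoped override: every `blake2` dependency in the workspace resolves to
# this fork instead of crates.io. Pinning `rev` keeps builds reproducible.
[patch.crates-io]
blake2 = { git = "https://github.com/huitseeker/hashes.git", rev = "<commit-sha>" }
```

Feature selection (`features = ["blake2x"]`) stays on the dependency declaration itself; the patch entry only redirects the source.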
📝 Committable suggestion
```toml
blake2 = { git = "https://github.com/huitseeker/hashes.git", rev = "<commit-sha>", features = ["blake2x"] } # blake2x PR: https://github.com/RustCrypto/hashes/pull/704
```
🤖 Prompt for AI Agents
In Cargo.toml at line 17, the blake2 dependency currently tracks a branch which
is non‑reproducible; replace the branch pin with a specific commit by using rev
= "<commit-sha>" instead of branch, add a short inline comment with the upstream
PR/PR URL that introduced the change, and consider moving this entry under a
[patch.crates-io] section to scope the override to the registry; update the
lockfile afterwards (cargo update -p blake2 --precise <version> or cargo
generate-lockfile) to ensure reproducible builds.
Actionable comments posted: 0
♻️ Duplicate comments (2)
blssig/src/bdn/mod.rs (2)
67-84: Avoid lossy u64→usize cast; check bounds in u64 first.

This is the same issue previously identified: on 32-bit targets, casting a large u64 to usize before the bounds check could result in truncation, allowing an out-of-bounds access.
86-119: Tighten XOF usage, domain separation, and scalar mapping.

This is the same issue previously identified: the coefficient derivation should use an exact output length, include a domain separation tag, read 64 bytes per coefficient, and use `Scalar::from_bytes_wide` for uniform reduction to align with cryptographic best practices and ensure compatibility with go-f3.
🧹 Nitpick comments (2)
blssig/src/bdn/mod.rs (1)
121-133: Add bounds checking or restrict visibility.

The function is public but performs unchecked indexing of `coefficients[i]`, which will panic if the slices have mismatched lengths. Consider either:

- Adding a length check and returning `Result<Vec<PublicKey>, BLSError>`
- Marking the function as `pub(crate)` if it's only intended for internal use

Apply this diff to add bounds checking:

```diff
 pub fn calc_terms(pub_keys: &[PublicKey], coefficients: &[Scalar]) -> Vec<PublicKey> {
+    assert_eq!(
+        pub_keys.len(),
+        coefficients.len(),
+        "pub_keys and coefficients must have the same length"
+    );
     pub_keys
         .par_iter()
```

Or restrict visibility:

```diff
-pub fn calc_terms(pub_keys: &[PublicKey], coefficients: &[Scalar]) -> Vec<PublicKey> {
+pub(crate) fn calc_terms(pub_keys: &[PublicKey], coefficients: &[Scalar]) -> Vec<PublicKey> {
```

blssig/src/verifier/mod.rs (1)
129-155: Cache logic is correct; consider optimization for large power tables.

The caching implementation correctly handles concurrent access and leverages the existing point cache for individual keys.
For large power tables (~1560 entries, ~75KB), the Vec equality comparison on line 133 could be expensive. Consider hashing the power table as the cache key:
```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// In struct:
bdn_cache: RwLock<Option<(u64, Arc<BDNAggregation>)>>,

// In get_or_cache_bdn:
let mut hasher = DefaultHasher::new();
power_table.hash(&mut hasher);
let key = hasher.finish();
if let Some((cached_key, cached_bdn)) = self.bdn_cache.read().as_ref() {
    if *cached_key == key {
        return Ok(cached_bdn.clone());
    }
}
// ... cache with (key, bdn)
```

However, given the comment that power tables rarely repeat, the current approach is acceptable.
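The single-entry, hash-keyed cache idea can be expanded into a self-contained, std-only sketch; the type and field names here are illustrative stand-ins, not the crate's actual types:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};
use std::sync::{Arc, RwLock};

// Hypothetical stand-in for the precomputed BDN state.
struct BdnState {
    n_keys: usize,
}

// Single-entry cache keyed by a hash of the power-table bytes.
struct BdnCache {
    slot: RwLock<Option<(u64, Arc<BdnState>)>>,
}

impl BdnCache {
    fn key(power_table: &[Vec<u8>]) -> u64 {
        let mut h = DefaultHasher::new();
        power_table.hash(&mut h);
        h.finish()
    }

    fn get_or_insert(&self, power_table: &[Vec<u8>]) -> Arc<BdnState> {
        let key = Self::key(power_table);
        // Fast path: hash comparison instead of full Vec equality.
        if let Some((k, state)) = self.slot.read().unwrap().as_ref() {
            if *k == key {
                return state.clone();
            }
        }
        // Miss: rebuild and replace the single cached entry.
        let state = Arc::new(BdnState { n_keys: power_table.len() });
        *self.slot.write().unwrap() = Some((key, state.clone()));
        state
    }
}

fn main() {
    let cache = BdnCache { slot: RwLock::new(None) };
    let table = vec![vec![1u8, 2], vec![3, 4]];
    let a = cache.get_or_insert(&table);
    let b = cache.get_or_insert(&table);
    // Same table hits the cached entry (same Arc).
    assert!(Arc::ptr_eq(&a, &b));
    assert_eq!(a.n_keys, 2);
    println!("ok");
}
```

DefaultHasher is not collision-resistant, so for adversarial inputs a cryptographic digest of the table bytes would be the safer cache key.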
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- Cargo.toml (2 hunks)
- blssig/Cargo.toml (1 hunks)
- blssig/src/bdn/mod.rs (3 hunks)
- blssig/src/verifier/mod.rs (6 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- Cargo.toml
- blssig/Cargo.toml
🧰 Additional context used
🧬 Code graph analysis (1)
blssig/src/verifier/mod.rs (1)
blssig/src/bdn/mod.rs (1)
new(27-40)
🔇 Additional comments (5)
blssig/src/bdn/mod.rs (3)
1-24: LGTM: Imports and struct definition are appropriate.

The Blake2Xs import for XOF-based coefficient derivation, bls12_381 types for elliptic curve operations, and rayon for parallelism are all suitable for the BDN implementation. The struct fields are appropriately scoped as `pub(crate)`.
26-40: LGTM: Construction logic is sound.

The constructor correctly validates inputs, precomputes coefficients and terms for efficient aggregation, and handles errors appropriately.
42-65: LGTM: Signature aggregation correctly implements BDN.

The implementation correctly applies coefficients in G2 projective space, computing `sum((coef_i + 1) * sig_i)` as required by the BDN scheme.

blssig/src/verifier/mod.rs (2)
1-79: LGTM: Cache structure and error types are appropriate.

The Arc-wrapped BDN cache with single-entry semantics is reasonable given the comment that power tables rarely repeat. The new error variants properly support the BDN operations.
200-218: LGTM: Verification logic correctly implements BDN requirements.

The updated API properly uses the full power table for BDN coefficient computation (as required by the scheme) while aggregating only the specified signer indices. The caching integration is efficient.
Implement BDN signature aggregation scheme
Closes #29
This PR implements the BDN aggregation scheme for BLS signatures, fixing certificate verification failures in the F3 lightclient.
Due to the lack of a Rust implementation of the Blake2X XOF (specifically the 32-bit Blake2Xs variant), this PR uses the pending PR RustCrypto/hashes#704.
Changes
- `Verifier::verify_aggregate()` API changed to accept the full power table and signer indices (BDN requires all keys for correct coefficient computation)
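The API change above, combined with the review's early-validation suggestion, can be sketched as a self-contained stand-alone function (types, names, and error variants are illustrative, not copied from the crate):

```rust
// Hypothetical stand-in types for illustration.
type PubKey = Vec<u8>;

#[derive(Debug, PartialEq)]
enum VerifyError {
    EmptySigners,
    SignerIndexOutOfRange(usize),
}

// Shape of the updated API: the full power table is required because BDN
// coefficients depend on every key, while signer_indices selects the subset
// that actually signed.
fn verify_aggregate(
    _payload: &[u8],
    _agg_sig: &[u8],
    power_table: &[PubKey],
    signer_indices: &[u64],
) -> Result<(), VerifyError> {
    if signer_indices.is_empty() {
        return Err(VerifyError::EmptySigners);
    }
    for &idx in signer_indices {
        // Guard the u64 -> usize conversion and check bounds up front.
        let idx: usize = idx
            .try_into()
            .map_err(|_| VerifyError::SignerIndexOutOfRange(usize::MAX))?;
        if idx >= power_table.len() {
            return Err(VerifyError::SignerIndexOutOfRange(idx));
        }
    }
    // ...BDN coefficient computation over the full table would follow here...
    Ok(())
}

fn main() {
    let table: Vec<PubKey> = vec![vec![1], vec![2], vec![3]];
    assert_eq!(
        verify_aggregate(b"m", b"s", &table, &[]),
        Err(VerifyError::EmptySigners)
    );
    assert_eq!(
        verify_aggregate(b"m", b"s", &table, &[5]),
        Err(VerifyError::SignerIndexOutOfRange(5))
    );
    assert!(verify_aggregate(b"m", b"s", &table, &[0, 2]).is_ok());
}
```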
Testing
Verified correctness using the `certificates_validation` example.

Summary by CodeRabbit
- New Features
- Refactor
- Bug Fixes
- Documentation
- Tests