Merged
Changes from all commits
33 commits
206e385
add light client functionality
moshababo Aug 13, 2025
1c58608
Merge branch 'main' into light_client
moshababo Aug 15, 2025
91590ae
post-merge fixes
moshababo Aug 15, 2025
6190ad1
handle RwLock errors
moshababo Aug 15, 2025
eb93f4a
bounds check
moshababo Aug 15, 2025
a6cf862
use ActorId
moshababo Aug 15, 2025
ca8bbc8
fix Cargo.toml
moshababo Aug 15, 2025
d0cb76b
tighten trait bounds
moshababo Aug 15, 2025
36a3327
un-pub fields
moshababo Aug 15, 2025
f305794
merkle to propagate error
moshababo Aug 15, 2025
3857f85
CI fixes
moshababo Aug 21, 2025
cf00ac9
CI fixes
moshababo Aug 21, 2025
57d5cd0
CI fixes
moshababo Aug 21, 2025
6cec2b2
use constants for BLS lengths
moshababo Aug 30, 2025
7a0a39b
use drop instead of scoping
moshababo Aug 30, 2025
7f9300c
use `hashlink::LruCache`
moshababo Aug 30, 2025
f2eb43c
remove `allow(unused)`
moshababo Aug 30, 2025
8b05095
network name to use enum
moshababo Aug 30, 2025
af28ab0
add `MERKLE_DIGEST_SIZE` const
moshababo Aug 30, 2025
a44a2f0
use `hex::encode`
moshababo Aug 30, 2025
95db86a
use `anyhow::ensure`
moshababo Aug 30, 2025
e27943d
CI fixes
moshababo Sep 3, 2025
8cffdc8
Merge branch 'main' into light_client
moshababo Sep 3, 2025
70bf5a5
add issue link
moshababo Sep 3, 2025
42cef93
remove unused deps
moshababo Sep 3, 2025
6bccb28
avoid redundant heap allocs
moshababo Sep 7, 2025
3608b91
bdn mod to receive typed args
moshababo Sep 7, 2025
51bd363
add `BLSError::EmptySignatures`
moshababo Sep 7, 2025
4f3b9e8
add inner errors
moshababo Sep 7, 2025
b2f6171
remove unneeded local variables
moshababo Sep 7, 2025
f905f26
use owned values
moshababo Sep 7, 2025
76d73d7
add comment for cache estimated size
moshababo Sep 7, 2025
0c68f0b
lint fixes
moshababo Sep 7, 2025
9 changes: 9 additions & 0 deletions .config/rust-f3.dic
@@ -33,3 +33,12 @@ calibnet
Filecoin
merkle
Keccak256
RPC
BDN
F3
BLS12_381
G1
G2
JSON
CBOR
TODO
22 changes: 21 additions & 1 deletion Cargo.toml
@@ -1,6 +1,6 @@
[workspace]
resolver = "2"
members = ["blssig", "certs", "gpbft"]
members = ["blssig", "certs", "gpbft", "lightclient", "merkle", "rpc"]
💡 Verification agent

🧩 Analysis chain

Workspace members added: ensure per-crate inheritance of workspace package fields

Nice addition of lightclient, merkle, and rpc to the workspace. Please verify each new crate’s Cargo.toml inherits the workspace’s package metadata (edition, rust-version, license, repository) to keep CI/tooling consistent. If not already present, add:

 [package]
 name = "lightclient"
 version = "0.1.0"
-# ...
+edition.workspace = true
+rust-version.workspace = true
+license.workspace = true
+repository.workspace = true

Apply the same in merkle/Cargo.toml and rpc/Cargo.toml. This avoids accidental drift (e.g., building with a different edition/MSRV locally vs CI).

Run this to confirm inheritance is set across all new crates:


🏁 Script executed:

#!/bin/bash
# Lists any crate Cargo.toml files missing explicit workspace inheritance for key fields
fd Cargo.toml | grep -Ev '^\./Cargo\.toml$' | xargs -I{} sh -c '
  echo ">>> {}"
  rg -n "edition\.workspace\s*=\s*true|rust-version\.workspace\s*=\s*true|license\.workspace\s*=\s*true|repository\.workspace\s*=\s*true" -n {} || echo "  Missing one or more workspace field inheritances"
'

Length of output: 1089


Add missing rust-version.workspace inheritance in new crates

The verification script shows that while lightclient/Cargo.toml, merkle/Cargo.toml, and rpc/Cargo.toml each include:

[package]
repository.workspace = true
license.workspace = true
edition.workspace = true

they are missing the rust-version.workspace = true line. Please add this under the [package] section for each crate to ensure the workspace MSRV is consistently applied.

Affected files:

  • lightclient/Cargo.toml
  • merkle/Cargo.toml
  • rpc/Cargo.toml

Example diff for lightclient/Cargo.toml:

 [package]
 name = "lightclient"
 version = "0.1.0"
 repository.workspace = true
 license.workspace = true
 edition.workspace = true
+rust-version.workspace = true

Repeat the same insertion in merkle/Cargo.toml and rpc/Cargo.toml.

🤖 Prompt for AI Agents
In Cargo.toml (workspace root) around line 3 and in the three affected crate
manifests lightclient/Cargo.toml, merkle/Cargo.toml, and rpc/Cargo.toml under
their [package] sections, add rust-version.workspace = true alongside the
existing repository.workspace = true, license.workspace = true, and
edition.workspace = true entries so each crate inherits the workspace MSRV;
update each file by inserting that single line under [package] in the same style
and indentation as the other .workspace entries.


[workspace.package]
authors = ["ChainSafe Systems <forest@chainsafe.io>"]
@@ -12,6 +12,26 @@ rust-version = "1.85.0"
[workspace.dependencies]
ahash = "0.8"
anyhow = "1"
base32 = "0.5.1"
base64 = "0.22"
bls-signatures = { version = "0.15" }
bls12_381 = "0.8"
cid = { version = "0.10.1", features = ["std"] }
fvm_ipld_bitfield = "0.7.1"
fvm_ipld_encoding = "0.5"
hashlink = "0.10.0"
hex = "0.4"
jsonrpsee = { version = "0.24", features = ["ws-client", "http-client"] }
keccak-hash = "0.11"
num-bigint = { version = "0.4.6", features = ["serde"] }
num-traits = "0.2.19"
parking_lot = "0.12"
rand = "0.8"
serde = { version = "1", features = ["derive"] }
serde_cbor = "0.11.2"
serde_json = { version = "1", features = ["raw_value"] }
sha3 = "0.10.8"
strum = { version = "0.27.1", features = ["derive"] }
strum_macros = "0.27.1"
thiserror = "2"
tokio = { version = "1", features = ["rt-multi-thread", "macros"] }
6 changes: 3 additions & 3 deletions Makefile
@@ -1,8 +1,8 @@
check:
cargo check --quiet --no-default-features
cargo check --quiet --all-features
cargo check --quiet --no-default-features --target wasm32-unknown-unknown
cargo check --quiet --all-features --target wasm32-unknown-unknown
cargo check --quiet --no-default-features --target wasm32-unknown-unknown -p filecoin-f3-certs -p filecoin-f3-blssig -p filecoin-f3-merkle -p filecoin-f3-gpbft
cargo check --quiet --all-features --target wasm32-unknown-unknown -p filecoin-f3-certs -p filecoin-f3-blssig -p filecoin-f3-merkle -p filecoin-f3-gpbft

test:
cargo test --all-features
@@ -36,7 +36,7 @@ fmt:

clippy:
cargo clippy --all-targets --all-features --quiet --no-deps -- --deny=warnings
cargo clippy --all-features --target wasm32-unknown-unknown --quiet --no-deps -- --deny=warnings
cargo clippy --all-features --target wasm32-unknown-unknown --quiet --no-deps -p filecoin-f3-certs -p filecoin-f3-blssig -p filecoin-f3-merkle -p filecoin-f3-gpbft -- --deny=warnings

# Checks if all headers are present and adds if not
license:
7 changes: 7 additions & 0 deletions blssig/Cargo.toml
@@ -9,5 +9,12 @@ edition.workspace = true
rust-version.workspace = true

[dependencies]
bls-signatures.workspace = true
bls12_381.workspace = true
filecoin-f3-gpbft = { path = "../gpbft" }
hashlink.workspace = true
parking_lot.workspace = true
thiserror.workspace = true

[dev-dependencies]
rand.workspace = true
62 changes: 62 additions & 0 deletions blssig/src/bdn/mod.rs
@@ -0,0 +1,62 @@
// Copyright 2019-2024 ChainSafe Systems
// SPDX-License-Identifier: Apache-2.0, MIT

//! BDN (Boneh-Drijvers-Neven) signature aggregation scheme, for preventing rogue public-key attacks.
//!
//! NOTE: currently uses standard BLS aggregation without coefficient weighting, hence returns incorrect values compared to go-f3.
//!
Comment on lines +4 to +7
⚠️ Potential issue

Critical: Incomplete BDN implementation - returns incorrect results

The module acknowledges that it's using standard BLS aggregation instead of proper BDN coefficient weighting. This means the signature verification will fail when compared against go-f3's implementation. The PR objectives note this as a known issue with expected signature verification errors being temporarily silenced.

Would you like me to help implement the proper BDN coefficient weighting to match the go-f3 reference implementation?
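
For context, here is a minimal sketch of what coefficient weighting could look like, assuming coefficients are derived by hashing the ordered public-key set. The hash choice (SHA3-256), the 128-bit-style widening, and the helper names are illustrative only, not go-f3's exact construction:

```rust
use bls12_381::{G2Projective, Scalar};
use sha3::{Digest, Sha3_256};

// Hypothetical helper: derive the i-th BDN coefficient from the ordered public-key set.
fn bdn_coefficient(pub_keys_bytes: &[Vec<u8>], index: usize) -> Scalar {
    let mut hasher = Sha3_256::new();
    hasher.update((index as u64).to_be_bytes());
    for pk in pub_keys_bytes {
        hasher.update(pk);
    }
    let digest = hasher.finalize();
    // Widen the 32-byte digest and reduce it into the scalar field.
    let mut wide = [0u8; 64];
    wide[..32].copy_from_slice(digest.as_slice());
    Scalar::from_bytes_wide(&wide)
}

// Weighted aggregation: each signature point is scaled by its coefficient before summing,
// instead of the plain sum used by the current implementation.
fn aggregate_sigs_weighted(sig_points: &[G2Projective], pub_keys_bytes: &[Vec<u8>]) -> G2Projective {
    sig_points
        .iter()
        .enumerate()
        .fold(G2Projective::identity(), |acc, (i, sig)| {
            acc + sig * bdn_coefficient(pub_keys_bytes, i)
        })
}
```

The same coefficients would be applied to the G1 public-key points in `aggregate_pub_keys` so that verification of the weighted aggregate stays consistent.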

use crate::verifier::BLSError;
use bls_signatures::{PublicKey, Signature};
use bls12_381::{G1Projective, G2Affine, G2Projective};

Comment on lines +8 to +11
💡 Verification agent

❓ Verification inconclusive

Verify cross-crate conversions (bls_signatures <-> bls12_381)

You’re converting bls_signatures::{Signature,PublicKey} into bls12_381::{G2Affine,G1Projective}. These From/Into impls may not exist; mixing curve types across crates often fails.

Run to confirm conversions or existing adapters:


🏁 Script executed:

#!/bin/bash
# Look for any local adapters/impls bridging bls_signatures <-> bls12_381
rg -nP -C2 'impl\s+(From|Into)<\s*bls_signatures::(Signature|PublicKey)\s*>\s+for\s+bls12_381::(G[12](Affine|Projective))'
rg -nP -C2 'impl\s+(From|Into)<\s*bls12_381::(G[12](Affine|Projective))\s*>\s+for\s+bls_signatures::(Signature|PublicKey)'
# Scan for helper conversion fns
rg -nP -C2 '(to|into|as)_(g1|g2)_(proj|affine)|from_(g1|g2)'

Length of output: 0


Add missing conversions between bls_signatures and bls12_381 types
No From/Into impls or helper functions exist for converting bls_signatures::PublicKey → G1Projective or bls_signatures::Signature → G2Affine/G2Projective. Define explicit From/Into impls or use crate-provided methods to bridge these types in blssig/src/bdn/mod.rs.
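
If the direct `From` impls turn out to be missing, a hedged fallback is to bridge via the standard 48/96-byte compressed encodings (this assumes bls-signatures' `Serialize` output matches bls12_381's compressed point format; the adapter names are hypothetical):

```rust
use bls12_381::{G1Affine, G1Projective, G2Affine};
use bls_signatures::{PublicKey, Serialize, Signature};

// Hypothetical adapter: round-trip a public key through its compressed G1 encoding.
fn pub_key_to_g1(pk: &PublicKey) -> Option<G1Projective> {
    let bytes: [u8; 48] = pk.as_bytes().try_into().ok()?;
    Option::from(G1Affine::from_compressed(&bytes)).map(G1Projective::from)
}

// Hypothetical adapter: round-trip a signature through its compressed G2 encoding.
fn sig_to_g2(sig: &Signature) -> Option<G2Affine> {
    let bytes: [u8; 96] = sig.as_bytes().try_into().ok()?;
    Option::from(G2Affine::from_compressed(&bytes))
}
```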

/// BDN aggregation context for managing signature and public key aggregation
pub struct BDNAggregation {
    pub_keys: Vec<PublicKey>,
}

impl BDNAggregation {
    pub fn new(pub_keys: Vec<PublicKey>) -> Result<Self, BLSError> {
        if pub_keys.is_empty() {
            return Err(BLSError::EmptyPublicKeys);
        }

        Ok(Self { pub_keys })
    }

    /// Aggregates signatures using standard BLS aggregation
    /// TODO: Implement BDN aggregation scheme: https://github.com/ChainSafe/rust-f3/issues/29
    pub fn aggregate_sigs(&self, sigs: Vec<Signature>) -> Result<Signature, BLSError> {
        if sigs.len() != self.pub_keys.len() {
            return Err(BLSError::LengthMismatch {
                pub_keys: self.pub_keys.len(),
                sigs: sigs.len(),
            });
        }

        // Standard BLS aggregation
        let mut agg_point = G2Projective::identity();
        for sig in sigs {
            let sig: G2Affine = sig.into();
            agg_point += sig;
        }

        // Convert back to Signature
        let agg_sig: Signature = agg_point.into();
        Ok(agg_sig)
    }

    /// Aggregates public keys using standard BLS aggregation
    /// TODO: Implement BDN aggregation scheme: https://github.com/ChainSafe/rust-f3/issues/29
    pub fn aggregate_pub_keys(&self) -> Result<PublicKey, BLSError> {
        // Standard BLS aggregation
        let mut agg_point = G1Projective::identity();
        for pub_key in &self.pub_keys {
            let pub_key_point: G1Projective = (*pub_key).into();
            agg_point += pub_key_point;
        }

        // Convert back to PublicKey
        let agg_pub_key: PublicKey = agg_point.into();
        Ok(agg_pub_key)
    }
}
21 changes: 9 additions & 12 deletions blssig/src/lib.rs
@@ -1,17 +1,14 @@
// Copyright 2019-2024 ChainSafe Systems
// SPDX-License-Identifier: Apache-2.0, MIT

pub fn add(left: usize, right: usize) -> usize {
left + right
}
//! BLS signature implementation using BDN aggregation scheme.
//!
//! This module implements the BLS signature scheme used in the Filecoin F3 protocol.
//! It uses the BLS12_381 curve with G1 for public keys and G2 for signatures.
//! The BDN (Boneh-Drijvers-Neven) scheme is used for signature and public key aggregation
//! to prevent rogue public-key attacks.

#[cfg(test)]
mod tests {
use super::*;
mod bdn;
mod verifier;

#[test]
fn it_works() {
let result = add(2, 2);
assert_eq!(result, 4);
}
}
pub use verifier::{BLSError, BLSVerifier};
185 changes: 185 additions & 0 deletions blssig/src/verifier/mod.rs
@@ -0,0 +1,185 @@
// Copyright 2019-2024 ChainSafe Systems
// SPDX-License-Identifier: Apache-2.0, MIT

use bls_signatures::{PublicKey, Serialize, Signature, verify_messages};
use filecoin_f3_gpbft::PubKey;
use filecoin_f3_gpbft::api::Verifier;
use hashlink::LruCache;
use parking_lot::RwLock;
use thiserror::Error;

use crate::bdn::BDNAggregation;

#[cfg(test)]
mod tests;

#[derive(Error, Debug)]
pub enum BLSError {
    #[error("empty public keys provided")]
    EmptyPublicKeys,
    #[error("empty signatures provided")]
    EmptySignatures,
    #[error("invalid public key length: expected {BLS_PUBLIC_KEY_LENGTH} bytes, got {0}")]
    InvalidPublicKeyLength(usize),
    #[error("failed to deserialize public key: {0}")]
    PublicKeyDeserialization(bls_signatures::Error),
    #[error("invalid signature length: expected {BLS_SIGNATURE_LENGTH} bytes, got {0}")]
    InvalidSignatureLength(usize),
    #[error("failed to deserialize signature: {0}")]
    SignatureDeserialization(bls_signatures::Error),
    #[error("BLS signature verification failed")]
    SignatureVerificationFailed,
    #[error("mismatched number of public keys and signatures: {pub_keys} != {sigs}")]
    LengthMismatch { pub_keys: usize, sigs: usize },
}

/// BLS signature verifier using BDN aggregation scheme
///
/// This verifier implements the same scheme used by `go-f3/blssig`, with:
/// - BLS12_381 curve
/// - G1 for public keys, G2 for signatures
/// - BDN aggregation for rogue-key attack prevention
pub struct BLSVerifier {
    /// Cache for deserialized public key points to avoid expensive repeated operations
    point_cache: RwLock<LruCache<Vec<u8>, PublicKey>>,
Collaborator
I'd change the signatures of the Verifier trait methods to use strongly-typed public keys and signatures, to avoid the conversions and the need for this kind of LRU perf optimization.
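
For illustration, the strongly-typed variant being suggested might look roughly like this (a hypothetical trait, not the existing `filecoin_f3_gpbft::api::Verifier`):

```rust
use bls_signatures::{PublicKey, Signature};

/// Hypothetical strongly-typed counterpart of the current byte-oriented Verifier trait.
pub trait TypedVerifier {
    type Error;

    fn verify(&self, pub_key: &PublicKey, msg: &[u8], sig: &Signature) -> Result<(), Self::Error>;
    fn aggregate(&self, pub_keys: &[PublicKey], sigs: &[Signature]) -> Result<Signature, Self::Error>;
    fn verify_aggregate(
        &self,
        payload: &[u8],
        agg_sig: &Signature,
        signers: &[PublicKey],
    ) -> Result<(), Self::Error>;
}
```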

Contributor Author
I followed the existing Verifier trait, which mirrors the Verifier interface type from go-f3, which uses untyped params for all functions.

However, I agree that the bdn module can/should receive the typed data, so I changed that here: 3608b91

I'm not convinced, though, that Verifier should do the same, since it would merely push the validity checks, the deserialization, and the caching to a higher layer, given that the data source is serialized.

}

impl Default for BLSVerifier {
    fn default() -> Self {
        Self::new()
    }
}

/// BLS12-381 public key length in bytes
const BLS_PUBLIC_KEY_LENGTH: usize = 48;

/// BLS12-381 signature length in bytes
const BLS_SIGNATURE_LENGTH: usize = 96;

/// Maximum number of cached public key points to prevent excessive memory usage
const MAX_POINT_CACHE_SIZE: usize = 10_000;

impl BLSVerifier {
    pub fn new() -> Self {
        Self {
            // key size: 48, value size: 196, total estimated: 1.83 MiB
            point_cache: RwLock::new(LruCache::new(MAX_POINT_CACHE_SIZE)),
        }
    }

    /// Verifies a single BLS signature
    fn verify_single(&self, pub_key: &PubKey, msg: &[u8], sig: &[u8]) -> Result<(), BLSError> {
        // Validate input lengths
        if pub_key.0.len() != BLS_PUBLIC_KEY_LENGTH {
            return Err(BLSError::InvalidPublicKeyLength(pub_key.0.len()));
        }
        if sig.len() != BLS_SIGNATURE_LENGTH {
            return Err(BLSError::InvalidSignatureLength(sig.len()));
        }

        // Get cached public key
        let pub_key = self.get_or_cache_public_key(&pub_key.0)?;

        // Deserialize signature
        let signature = self.deserialize_signature(sig)?;

        // Verify using bls-signatures
        let msgs = [msg];
        let pub_keys = [pub_key];
        match verify_messages(&signature, &msgs, &pub_keys) {
            true => Ok(()),
            false => Err(BLSError::SignatureVerificationFailed),
        }
    }

    /// Gets a cached public key, or deserializes and caches it
    fn get_or_cache_public_key(&self, pub_key: &[u8]) -> Result<PublicKey, BLSError> {
        // Check cache first
        if let Some(cached) = self.point_cache.write().get(pub_key) {
            return Ok(*cached);
        }

        // Deserialize and cache
        let typed_pub_key = self.deserialize_public_key(pub_key)?;
        self.point_cache
            .write()
            .insert(pub_key.to_vec(), typed_pub_key);
        Ok(typed_pub_key)
    }

    fn deserialize_public_key(&self, pub_key: &[u8]) -> Result<PublicKey, BLSError> {
        PublicKey::from_bytes(pub_key).map_err(BLSError::PublicKeyDeserialization)
    }

    fn deserialize_signature(&self, sig: &[u8]) -> Result<Signature, BLSError> {
        Signature::from_bytes(sig).map_err(BLSError::SignatureDeserialization)
    }
}

impl Verifier for BLSVerifier {
    type Error = BLSError;

    fn verify(&self, pub_key: &PubKey, msg: &[u8], sig: &[u8]) -> Result<(), Self::Error> {
        self.verify_single(pub_key, msg, sig)
    }

    fn aggregate(&self, pub_keys: &[PubKey], sigs: &[Vec<u8>]) -> Result<Vec<u8>, Self::Error> {
        if pub_keys.is_empty() {
            return Err(BLSError::EmptyPublicKeys);
        }
        if sigs.is_empty() {
            return Err(BLSError::EmptySignatures);
        }

        if pub_keys.len() != sigs.len() {
            return Err(BLSError::LengthMismatch {
                pub_keys: pub_keys.len(),
                sigs: sigs.len(),
            });
        }

        // Validate all input lengths
        let mut typed_pub_keys = vec![];
        let mut typed_sigs = vec![];
        for (i, pub_key) in pub_keys.iter().enumerate() {
            if pub_key.0.len() != BLS_PUBLIC_KEY_LENGTH {
                return Err(BLSError::InvalidPublicKeyLength(pub_key.0.len()));
            }
            if sigs[i].len() != BLS_SIGNATURE_LENGTH {
                return Err(BLSError::InvalidSignatureLength(sigs[i].len()));
            }

            typed_pub_keys.push(self.get_or_cache_public_key(&pub_key.0)?);
            typed_sigs.push(self.deserialize_signature(&sigs[i])?);
        }

        let bdn = BDNAggregation::new(typed_pub_keys)?;
        let agg_sig = bdn.aggregate_sigs(typed_sigs)?;
        Ok(agg_sig.as_bytes())
    }

    fn verify_aggregate(
        &self,
        payload: &[u8],
        agg_sig: &[u8],
        signers: &[PubKey],
    ) -> Result<(), Self::Error> {
        if signers.is_empty() {
            return Err(BLSError::EmptyPublicKeys);
        }

        let mut typed_pub_keys = vec![];
        for pub_key in signers {
            if pub_key.0.len() != BLS_PUBLIC_KEY_LENGTH {
                return Err(BLSError::InvalidPublicKeyLength(pub_key.0.len()));
            }

            typed_pub_keys.push(self.get_or_cache_public_key(&pub_key.0)?);
        }

        let bdn = BDNAggregation::new(typed_pub_keys)?;
        let agg_pub_key = bdn.aggregate_pub_keys()?;
        let agg_pub_key_bytes = PubKey(agg_pub_key.as_bytes().to_vec());
        self.verify_single(&agg_pub_key_bytes, payload, agg_sig)
    }
Comment on lines +180 to +184
💡 Verification agent

🧩 Analysis chain

BDN weighting likely incomplete — verify_aggregate may mismatch go-f3

If BDNAggregation doesn’t apply BDN coefficients, aggregated verification will diverge from go-f3. Confirm and gate with a clear TODO/warn until weighting lands.

Run to check for TODOs in BDN:


🏁 Script executed:

#!/bin/bash
rg -n -C2 -i 'BDN|coefficient|weight' blssig/src/bdn
rg -n -C2 -i 'TODO|FIXME' blssig/src/bdn

Length of output: 2677


Gate aggregate verification until BDN weighting is implemented
At blssig/src/verifier/mod.rs:177–181, verify_aggregate calls BDNAggregation::aggregate_pub_keys(), which per blssig/src/bdn/mod.rs:6 currently uses plain BLS aggregation without coefficient weighting and will diverge from go-f3. Introduce an explicit guard (error or warning) before allowing aggregate verification until the TODOs at blssig/src/bdn/mod.rs:29,51 are resolved.
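
A minimal sketch of such a guard, reusing the existing error enum from this file (the constant and helper names are hypothetical; a dedicated error variant would be clearer):

```rust
// Hypothetical gate: aggregate verification stays disabled until BDN coefficient
// weighting (https://github.com/ChainSafe/rust-f3/issues/29) is implemented.
const BDN_WEIGHTING_IMPLEMENTED: bool = false;

fn ensure_bdn_weighting() -> Result<(), BLSError> {
    if BDN_WEIGHTING_IMPLEMENTED {
        Ok(())
    } else {
        // A dedicated variant such as `AggregationNotSupported` would be clearer than
        // reusing SignatureVerificationFailed; reused here only to keep the sketch compiling.
        Err(BLSError::SignatureVerificationFailed)
    }
}
```

With this in place, `verify_aggregate` would call `ensure_bdn_weighting()?` before aggregating public keys.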

🤖 Prompt for AI Agents
In blssig/src/verifier/mod.rs around lines 177–181, the code proceeds to
aggregate public keys via BDNAggregation::aggregate_pub_keys() even though
blssig/src/bdn/mod.rs currently does plain BLS aggregation (no coefficient
weighting) and will diverge from go-f3; add an explicit guard here that rejects
aggregate verification (return an Err with a clear message or log and return
Err) until the weighting TODOs at blssig/src/bdn/mod.rs lines 29 and 51 are
implemented, e.g., check a feature flag or a boolean method on BDNAggregation
and if weighting is not implemented return a descriptive error like "aggregate
verification disabled: BDN weighting not implemented" instead of calling
aggregate_pub_keys and proceeding to verify_single.

}
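
For completeness, a hedged, test-style usage sketch of the byte-oriented API (this assumes bls-signatures' key-generation API and the crate's dev-dependency on rand; the module and test names are illustrative):

```rust
#[cfg(test)]
mod usage_example {
    use super::*;
    use bls_signatures::{PrivateKey, Serialize};
    use filecoin_f3_gpbft::PubKey;
    use filecoin_f3_gpbft::api::Verifier;

    #[test]
    fn verify_roundtrip() {
        // Generate a key pair and sign a message with bls-signatures.
        let sk = PrivateKey::generate(&mut rand::thread_rng());
        let msg = b"payload";
        let sig = sk.sign(msg);

        // Feed serialized bytes into the byte-oriented Verifier API.
        let pk = PubKey(sk.public_key().as_bytes());
        let verifier = BLSVerifier::new();
        assert!(verifier.verify(&pk, msg, &sig.as_bytes()).is_ok());
    }
}
```

Single-signature verification is expected to pass; aggregate verification remains inconsistent with go-f3 until BDN weighting lands, as noted above.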