This repository has been archived by the owner on May 20, 2024. It is now read-only.

Merge pull request #12 from dragan2234/develop
Merge banderwagon branch to develop
kevaundray authored Oct 29, 2023
2 parents 4faf139 + 85dc2b4 commit c1bcf4c
Showing 12 changed files with 245 additions and 176 deletions.
3 changes: 3 additions & 0 deletions Cargo.toml
@@ -3,6 +3,7 @@ name = "ipa-multipoint"
version = "0.1.0"
edition = "2018"


# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html

[dependencies]
@@ -18,6 +19,8 @@ rand_chacha = { version = "0.3.0", default-features = false }
itertools = "0.10.1"
sha2 = "0.9.8"
hex = "0.4.3"
banderwagon = { git = "https://github.com/crate-crypto/banderwagon" }

[[bench]]
name = "benchmark_main"
harness = false
45 changes: 43 additions & 2 deletions Readme.md
Expand Up @@ -2,7 +2,7 @@

A polynomial commitment scheme for opening multiple polynomials at different points using the inner product argument.

This library uses the bandersnatch curve and is described in [https://eprint.iacr.org/2021/1152.pdf].
This library uses the banderwagon prime-order group (https://hackmd.io/@6iQDuIePQjyYBqDChYw_jg/BJ2-L6Nzc), which is built on top of the bandersnatch curve described in https://eprint.iacr.org/2021/1152.pdf.


**Do not use in production.**
@@ -24,6 +24,8 @@ This library uses the bandersnatch curve and is described in [https://eprint.iac

## Tentative benchmarks

Bandersnatch (old):

Machine: 2.4 GHz 8-Core Intel Core i9

- To verify the opening of a polynomial of degree 255 (256 points in lagrange basis): `11.92ms`
@@ -37,4 +39,43 @@ Machine : 2.4 GHz 8-Core Intel Core i9
- To prove a multi-opening proof of 20,000 polynomials: `422.94ms`


These benchmarks are tentative: the benchmark machine may not match the average user's hardware; we have not optimised the verifier algorithm to remove `bH`; the Pippenger algorithm does not take GLV into consideration; and we are not using rayon to parallelise.

New benchmarks on the banderwagon subgroup (machine: Apple M1 Pro, 16 GB RAM):

- ipa - prove (256): `28.700 ms`

- ipa - verify (multi exp2 256): `2.1628 ms`

- ipa - verify (256): `20.818 ms`

- multipoint - verify (256)/1: `2.6983 ms`

- multipoint - verify (256)/1000: `8.5925 ms`

- multipoint - verify (256)/2000: `12.688 ms`

- multipoint - verify (256)/4000: `21.726 ms`

- multipoint - verify (256)/8000: `36.616 ms`

- multipoint - verify (256)/16000: `69.401 ms`

- multipoint - verify (256)/128000: `490.23 ms`

- multiproof - prove (256)/1: `33.231 ms`

- multiproof - prove (256)/1000: `47.764 ms`

- multiproof - prove (256)/2000: `56.670 ms`

- multiproof - prove (256)/4000: `74.597 ms`

- multiproof - prove (256)/8000: `114.39 ms`

- multiproof - prove (256)/16000: `189.94 ms`

- multiproof - prove (256)/128000: `1.2693 s`



*These benchmarks are tentative: the benchmark machine may not match the average user's hardware; we have not optimised the verifier algorithm to remove `bH`; the Pippenger algorithm does not take GLV into consideration; and we are not using rayon to parallelise.*
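The commitments benchmarked above are multiscalar multiplications: committing to a polynomial given by its values `v_i` in the Lagrange basis computes `C = Σ v_i · G_i` over fixed basis points `G_i` (this is what `commit_lagrange_poly` in `src/crs.rs` does via `slow_vartime_multiscalar_mul`). A minimal Python sketch over a stand-in group (integers mod a prime under addition, NOT the real banderwagon group, and toy basis points chosen arbitrarily) illustrates just the shape of the operation and its additive homomorphism:

```python
# Toy vector commitment: C = sum_i v_i * G_i.
# Stand-in group: integers mod a prime under addition; only the
# structure of the multiscalar multiplication is illustrated.
P = 2**61 - 1  # hypothetical toy modulus, not a real curve order

def slow_vartime_multiscalar_mul(scalars, points):
    """Sum of scalar_i * point_i in the toy additive group."""
    return sum(s * g for s, g in zip(scalars, points)) % P

# Fixed toy "basis points" G_i (in the real code these are derived from a seed).
G = [pow(7, i, P) for i in range(1, 9)]

a = [3, 1, 4, 1, 5, 9, 2, 6]
b = [2, 7, 1, 8, 2, 8, 1, 8]

Ca = slow_vartime_multiscalar_mul(a, G)
Cb = slow_vartime_multiscalar_mul(b, G)
Cab = slow_vartime_multiscalar_mul([x + y for x, y in zip(a, b)], G)

# Commitments are additively homomorphic: commit(a) + commit(b) == commit(a + b)
assert (Ca + Cb) % P == Cab
```

The homomorphism is what lets multiproof schemes combine many per-polynomial commitments into one aggregated check rather than verifying each opening separately.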
4 changes: 2 additions & 2 deletions benches/benchmarks/ipa_prove.rs
@@ -1,11 +1,11 @@
use ark_std::rand::SeedableRng;
use ark_std::UniformRand;
use bandersnatch::Fr;
use banderwagon::Fr;
use criterion::{black_box, criterion_group, criterion_main, Criterion};
use ipa_multipoint::ipa::create;
use ipa_multipoint::lagrange_basis::LagrangeBasis;
use ipa_multipoint::math_utils::powers_of;
use ipa_multipoint::multiproof::CRS;
use ipa_multipoint::crs::CRS;
use ipa_multipoint::transcript::Transcript;
use rand_chacha::ChaCha20Rng;

4 changes: 2 additions & 2 deletions benches/benchmarks/ipa_verify.rs
@@ -1,11 +1,11 @@
use ark_std::rand::SeedableRng;
use ark_std::UniformRand;
use bandersnatch::Fr;
use banderwagon::Fr;
use criterion::{black_box, criterion_group, criterion_main, Criterion};
use ipa_multipoint::ipa::create;
use ipa_multipoint::lagrange_basis::LagrangeBasis;
use ipa_multipoint::math_utils::{inner_product, powers_of};
use ipa_multipoint::multiproof::CRS;
use ipa_multipoint::crs::CRS;
use ipa_multipoint::transcript::Transcript;
use rand_chacha::ChaCha20Rng;

3 changes: 2 additions & 1 deletion benches/benchmarks/multipoint_prove.rs
@@ -1,10 +1,11 @@
use ark_std::UniformRand;
use bandersnatch::Fr;
use banderwagon::Fr;
use criterion::BenchmarkId;
use criterion::{black_box, criterion_group, criterion_main, BatchSize, Criterion};
use ipa_multipoint::lagrange_basis::*;
use ipa_multipoint::multiproof::*;
use ipa_multipoint::transcript::Transcript;
use ipa_multipoint::crs::CRS;

pub fn criterion_benchmark(c: &mut Criterion) {
let mut group = c.benchmark_group("multiproof - prove (256)");
3 changes: 2 additions & 1 deletion benches/benchmarks/multipoint_verify.rs
@@ -1,10 +1,11 @@
use ark_std::UniformRand;
use bandersnatch::Fr;
use banderwagon::Fr;
use criterion::BenchmarkId;
use criterion::{black_box, criterion_group, criterion_main, BatchSize, Criterion};
use ipa_multipoint::lagrange_basis::*;
use ipa_multipoint::multiproof::*;
use ipa_multipoint::transcript::Transcript;
use ipa_multipoint::crs::CRS;

pub fn criterion_benchmark(c: &mut Criterion) {
let mut group = c.benchmark_group("multipoint - verify (256)");
123 changes: 123 additions & 0 deletions src/crs.rs
@@ -0,0 +1,123 @@
use ark_serialize::CanonicalSerialize;
use banderwagon::Element;

use crate::{ipa::slow_vartime_multiscalar_mul, lagrange_basis::LagrangeBasis};

#[derive(Debug, Clone)]
pub struct CRS {
    pub n: usize,
    pub G: Vec<Element>,
    pub Q: Element,
}

impl CRS {
    pub fn new(n: usize, seed: &'static [u8]) -> CRS {
        // TODO: generate the Q value from the seed also
        // TODO: this will also make assert_dedup work as expected
        // TODO: since we should take in `Q` too
        let G: Vec<_> = generate_random_elements(n, seed).into_iter().collect();
        let Q = Element::prime_subgroup_generator();

        CRS::assert_dedup(&G);

        CRS { n, G, Q }
    }

    // Asserts that none of the generated points are the same
    fn assert_dedup(points: &[Element]) {
        use std::collections::HashMap;
        let mut map = HashMap::new();
        for point in points {
            assert!(
                map.insert(point.to_bytes(), ()).is_none(),
                "crs has duplicated points"
            )
        }
    }

    pub fn commit_lagrange_poly(&self, polynomial: &LagrangeBasis) -> Element {
        slow_vartime_multiscalar_mul(polynomial.values().iter(), self.G.iter())
    }
}

impl std::ops::Index<usize> for CRS {
    type Output = Element;

    fn index(&self, index: usize) -> &Self::Output {
        &self.G[index]
    }
}

fn generate_random_elements(num_required_points: usize, seed: &'static [u8]) -> Vec<Element> {
    use ark_ec::group::Group;
    use ark_ff::PrimeField;
    use bandersnatch::Fq;
    use sha2::{Digest, Sha256};

    let choose_largest = false;

    (0u64..)
        .into_iter()
        // Hash the seed + i to get a possible x value
        .map(|i| {
            let mut hasher = Sha256::new();
            hasher.update(seed);
            hasher.update(&i.to_be_bytes());
            let bytes: Vec<u8> = hasher.finalize().to_vec();
            bytes
        })
        // The Element::from_bytes method does not reduce the bytes; it expects the
        // input to be in a canonical format, so we must do the reduction ourselves
        .map(|hash_bytes| Fq::from_be_bytes_mod_order(&hash_bytes))
        .map(|x_coord| {
            let mut bytes = [0u8; 32];
            x_coord.serialize(&mut bytes[..]).unwrap();
            // TODO: this reverse is hacky, and it's because there is no way to specify the
            // TODO: endianness in arkworks, so we reverse it here to be interoperable with
            // TODO: the banderwagon spec, which needs big-endian bytes
            bytes.reverse();
            bytes
        })
        // Deserialise the x-coordinate to get a valid banderwagon element
        .map(|bytes| Element::from_bytes(&bytes))
        .filter_map(|point| point)
        .take(num_required_points)
        .collect()
}

#[test]
fn crs_consistency() {
    // TODO: update the hackmd, as we are now using banderwagon and the
    // TODO: point-finding strategy is a bit different
    // See: https://hackmd.io/1RcGSMQgT4uREaq1CCx_cg#Methodology
    use ark_serialize::CanonicalSerialize;
    use bandersnatch::Fq;
    use sha2::{Digest, Sha256};

    let points = generate_random_elements(256, b"eth_verkle_oct_2021");

    let mut bytes = [0u8; 32];
    points[0].serialize(&mut bytes[..]).unwrap();
    assert_eq!(
        hex::encode(&bytes),
        "01587ad1336675eb912550ec2a28eb8923b824b490dd2ba82e48f14590a298a0",
        "the first point is incorrect"
    );

    let mut bytes = [0u8; 32];
    points[255].serialize(&mut bytes[..]).unwrap();
    assert_eq!(
        hex::encode(&bytes),
        "3de2be346b539395b0c0de56a5ccca54a317f1b5c80107b0802af9a62276a4d8",
        "the 256th (last) point is incorrect"
    );

    let mut hasher = Sha256::new();
    for point in &points {
        let mut bytes = [0u8; 32];
        point.serialize(&mut bytes[..]).unwrap();
        hasher.update(&bytes);
    }
    let bytes = hasher.finalize().to_vec();
    assert_eq!(
        hex::encode(&bytes),
        "1fcaea10bf24f750200e06fa473c76ff0468007291fa548e2d99f09ba9256fdb",
        "unexpected point encountered"
    );
}
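The candidate-generation loop in `generate_random_elements` above (hash `seed || i` for an incrementing big-endian `u64` counter, reduce the digest into the base field, keep candidates that decode to a valid group element) can be sketched outside Rust. A minimal Python sketch, assuming the bandersnatch base-field modulus (the BLS12-381 scalar-field prime) and using a Legendre-symbol check as a purely hypothetical stand-in for the real acceptance test (in the actual code, acceptance means `Element::from_bytes` succeeds on the serialized candidate):

```python
import hashlib

# Bandersnatch base field Fq modulus (the BLS12-381 scalar-field prime).
Q = 0x73EDA753299D7D483339D80809A1D80553BDA402FFFE5BFEFFFFFFFF00000001

def is_valid_candidate(x: int) -> bool:
    # HYPOTHETICAL stand-in: accept x when it is a nonzero quadratic residue.
    # The real check is whether Element::from_bytes succeeds on the bytes.
    return x != 0 and pow(x, (Q - 1) // 2, Q) == 1

def generate_candidates(seed: bytes, n: int) -> list[int]:
    """Hash seed || i for i = 0, 1, 2, ... and keep the first n valid candidates."""
    out, i = [], 0
    while len(out) < n:
        digest = hashlib.sha256(seed + i.to_bytes(8, "big")).digest()
        x = int.from_bytes(digest, "big") % Q  # mirrors Fq::from_be_bytes_mod_order
        if is_valid_candidate(x):
            out.append(x)
        i += 1
    return out

pts = generate_candidates(b"eth_verkle_oct_2021", 4)
assert len(pts) == 4 and all(0 < x < Q for x in pts)
# The construction is deterministic: the same seed always yields the same list.
assert pts == generate_candidates(b"eth_verkle_oct_2021", 4)
```

Because roughly half of all candidates pass a residue-style check, the loop terminates quickly in expectation; determinism is what makes the `crs_consistency` test above meaningful, since every implementation hashing the same seed must reproduce the same points.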
