
Conversation

@gabearro

Replace Mutex<HashMap> with DashMap to improve concurrent session access and simplify synchronization.
Switch from manual locking to DashMap's atomic operations in RBC implementations and restructure session management for efficiency.
Optimize lagrange_interpolate with parallel computation for large inputs using crossbeam.
Simplify and streamline NetworkErrorCode handling.
Enhance triple generation with parallel processing for batch execution.
Add support for batched RanSha. Integrate DashMap and crossbeam as new dependencies.
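
As a rough illustration of the `Mutex<HashMap>` → `DashMap` change described above (the key type and per-session state below are placeholders, not the actual `RBC` types):

    use dashmap::DashMap;

    // Placeholder per-session state; the real RBC session type will differ.
    struct SessionState {
        shares_received: usize,
    }

    // With Mutex<HashMap<..>>, every access locks the whole map. DashMap's
    // entry API creates or updates a single entry atomically, locking only
    // the shard that holds the key.
    fn record_share(sessions: &DashMap<u64, SessionState>, session_id: u64) {
        let mut state = sessions
            .entry(session_id)
            .or_insert_with(|| SessionState { shares_received: 0 });
        state.shares_received += 1;
    }

Because only the shard containing the key is locked, unrelated sessions no longer contend on one global lock.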

…tputs to prevent mismatched results across parties due to network delays. Transition to using `HashMap` for collected outputs and ensure sorted processing. Add diagnostic logging for interpolation steps and optimize robust interpolation error handling.
…ds for testing. Update threshold for sequential computation and integrate Criterion for performance analysis.
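
The description and commits above mention parallelizing `lagrange_interpolate` with crossbeam for large inputs, with a threshold below which computation stays sequential; a rough sketch of that split (the function names, placeholder per-point work, thread count, and threshold value are all assumptions):

    use crossbeam::thread;

    // Assumed cut-off below which threading overhead outweighs the gain;
    // the PR's actual threshold may differ.
    const SEQUENTIAL_THRESHOLD: usize = 64;

    // Placeholder per-point work standing in for the real interpolation arithmetic.
    fn eval_point(x: u64) -> u64 {
        x.wrapping_mul(x).wrapping_add(1)
    }

    fn evaluate_all(points: &[u64]) -> Vec<u64> {
        if points.len() < SEQUENTIAL_THRESHOLD {
            // Small inputs: plain sequential loop.
            return points.iter().map(|&x| eval_point(x)).collect();
        }
        // Large inputs: split the points across scoped crossbeam threads.
        let chunk_len = (points.len() + 3) / 4;
        thread::scope(|s| {
            let handles: Vec<_> = points
                .chunks(chunk_len)
                .map(|chunk| s.spawn(move |_| chunk.iter().map(|&x| eval_point(x)).collect::<Vec<_>>()))
                .collect();
            handles
                .into_iter()
                .flat_map(|h| h.join().unwrap())
                .collect()
        })
        .unwrap()
    }
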
    .map(|i| {
        let exec_id = self.counters.triple_counter.get_next();
        let round_id = (i % 256) as u8;
        SessionId::new(ProtocolType::Triple, exec_id, 0, round_id, self.params.instance_id)


This increments the exec ID and the round ID at the same time, which limits us to 8 bits' worth of iterations, since both are 8-bit fields. Instead, nest the two counters so that for each exec ID the round ID counts from 0 to 255, making use of 16 bits in total. This was previously accomplished by something like

                if round_id == 255 {
                    triple_counter = self.counters.triple_counter.get_next();
                    round_id = 0;
                } else {
                    round_id += 1;
                }

around line 1073, for example, and was done in several locations.
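
A self-contained sketch of that nested counting, with the `SessionId` fields reduced to an (exec_id, round_id) pair and a caller-supplied counter standing in for `self.counters.triple_counter.get_next()`:

    // The round ID cycles 0..=255 and the exec ID advances only when it wraps,
    // giving 2^16 distinct (exec_id, round_id) pairs instead of 2^8.
    fn id_pairs(n: usize, mut next_exec_id: impl FnMut() -> u8) -> Vec<(u8, u8)> {
        let mut exec_id = next_exec_id();
        let mut round_id: u8 = 0;
        (0..n)
            .map(|_| {
                let pair = (exec_id, round_id);
                if round_id == 255 {
                    exec_id = next_exec_id();
                    round_id = 0;
                } else {
                    round_id += 1;
                }
                pair
            })
            .collect()
    }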


    while collected < num_triple_batches {
        if let Some(sid) = triple_channel.lock().await.recv().await {
            if triple_session_ids.contains(&sid) && !collected_triples.contains_key(&sid) {


If a session ID coming out of an output channel is duplicated or unknown, then either something is broken or there are too many malicious nodes, so we should handle this case:

  • either not at all beyond an assert,
  • or take some action when this happens: at least log it, at most panic.

In any case, I think this should not just occur silently.
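
For example, the loop quoted above could split the check and surface both cases explicitly; whether to log, count, or panic is a policy choice, and `log::warn!` below just stands in for whatever logging facade the project uses:

    while collected < num_triple_batches {
        if let Some(sid) = triple_channel.lock().await.recv().await {
            if !triple_session_ids.contains(&sid) {
                // Unknown session ID: something is broken or too many malicious nodes.
                log::warn!("unexpected session id in triple output channel: {:?}", sid);
                continue;
            }
            if collected_triples.contains_key(&sid) {
                // Duplicate output for a session that was already collected.
                log::warn!("duplicate triple output for session id {:?}", sid);
                continue;
            }
            // ... handle the fresh session ID as before ...
        }
    }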

…eration, including support for batched message processing, deterministic session ID generation, and enhanced diagnostic logging. Refactor existing RanDouSha logic to accommodate both regular and batched modes, ensuring efficient share collection and output handling.
…otocols

- Introduced extensive benchmarking for core operations, including share generation, recovery, and share multiplication.
- Added tests for Lagrange interpolation, robust vs. non-robust recovery, and FFT vs. Lagrange for optimization analysis.
- Developed stress tests for RanDouSha, RanSha, and batched RanSha protocols to evaluate scalability and throughput under heavy workloads.
- Enhanced diagnostics with detailed performance summaries, memory usage estimation, and bottleneck analysis.
- Introduced `batch_ops` module with efficient implementations of common field operations (batched multiplication, addition, subtraction, scalar multiplication), as well as polynomial operations such as coefficient-wise addition, scalar multiplication, and summation.
- Optimized Vandermonde matrix computation and evaluation functions (single-point and batched) with parallel processing and Montgomery's trick for batch inversion to reduce field inversion overhead (the batch-inversion trick is sketched after this list).
- Updated Lagrange interpolation and robust interpolation to leverage batched operations, improving performance by parallelizing computation and minimizing redundant field operations.
- Refactored double-share generation logic in RanDouSha and Batched RanDouSha to detect readiness for output phase earlier, avoid redundant locking, and better handle partially processed states.
- Enhanced logging granularity by integrating `trace` logs for finer debugging.
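
The Vandermonde bullet above relies on Montgomery's trick, which trades n field inversions for a single inversion plus O(n) multiplications. A minimal, self-contained sketch over a toy Mersenne-prime field (the real code presumably uses the project's own field type; all inputs are assumed nonzero):

    // Toy field: p = 2^61 - 1 is used only for illustration.
    const P: u128 = (1u128 << 61) - 1;

    fn mul(a: u128, b: u128) -> u128 {
        (a % P) * (b % P) % P
    }

    fn inv(a: u128) -> u128 {
        // Fermat's little theorem: a^(p-2) is the inverse of a nonzero a.
        let mut base = a % P;
        let mut exp = P - 2;
        let mut acc: u128 = 1;
        while exp > 0 {
            if exp & 1 == 1 {
                acc = mul(acc, base);
            }
            base = mul(base, base);
            exp >>= 1;
        }
        acc
    }

    // Montgomery's trick: invert all elements with one field inversion.
    fn batch_invert(xs: &[u128]) -> Vec<u128> {
        // Running prefix products: prefix[i] = xs[0] * ... * xs[i].
        let mut prefix = Vec::with_capacity(xs.len());
        let mut acc: u128 = 1;
        for &x in xs {
            acc = mul(acc, x);
            prefix.push(acc);
        }
        // Invert the total product once, then walk backwards, peeling off
        // one inverse per element with two multiplications.
        let mut inv_acc = inv(acc);
        let mut out = vec![0u128; xs.len()];
        for i in (0..xs.len()).rev() {
            out[i] = if i == 0 { inv_acc } else { mul(inv_acc, prefix[i - 1]) };
            inv_acc = mul(inv_acc, xs[i]);
        }
        out
    }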
… batched share generation and RanDouSha; adjust test parameters for large-scale workloads