
EIP-2537 Breakout Call #1176

timbeiko opened this issue Oct 10, 2024 · 8 comments

@timbeiko commented Oct 10, 2024

Meeting Info

Agenda

@ralexstokes

AIUI there is a question of how to price the MSM precompiles, given that the MSM computation assumes parallel computation across multiple cores. This raises the question of how to do proper benchmarking, as we don't have a multicore machine model for EVM execution.

After some discussion with Barnabus, Kev, and Ansgar, I would propose we assume a machine with 4 cores to use for MSM computation and use this to determine correct gas pricing.
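For concreteness, here is a minimal sketch of how the 4-core assumption could feed into a price, using ECRECOVER (3000 gas) as the time-per-gas reference. All timings and the parallel-efficiency figure below are placeholders, not measurements, and would need to be replaced with real benchmark numbers:

```go
package main

import "fmt"

// Sketch: price an MSM precompile under an assumed 4-core machine,
// using ECRECOVER (3000 gas) as the time-per-gas reference.
// All timings below are placeholders; substitute real benchmark numbers.
func main() {
	const (
		ecrecoverGas    = 3000.0 // current ECRECOVER price
		ecrecoverTimeUs = 100.0  // placeholder: measured ECRECOVER time (µs)
		msmSingleCoreUs = 4000.0 // placeholder: single-threaded 256-point MSM time (µs)
		assumedCores    = 4.0    // machine model proposed above
		parallelEff     = 0.8    // assumed (not ideal) parallel efficiency of the MSM
	)

	// Effective wall time if the MSM is spread over the assumed cores.
	msmParallelUs := msmSingleCoreUs / (assumedCores * parallelEff)

	// Price proportionally to ECRECOVER's time per unit of gas.
	msmGas := msmParallelUs / ecrecoverTimeUs * ecrecoverGas
	fmt.Printf("suggested 256-point MSM gas: %.0f\n", msmGas)
}
```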

@chfast commented Oct 11, 2024

> After some discussion with Barnabus, Kev, and Ansgar, I would propose we assume a machine with 4 cores to use for MSM computation and use this to determine correct gas pricing.

We (evmone/Silkworm) will not use multi-threaded execution, at least not initially. We don't want to ship a thread pool with a precompile. We will benchmark what we have to see whether this causes any serious issues, so you can decide to do otherwise. However, Geth has been rather against multi-threaded execution too (@jwasinger).
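A rough sketch of such a comparison using gnark-crypto's MultiExp (the API names below are my best reading of that library and should be double-checked): NbTasks=1 approximates a single-threaded client, NbTasks=4 the proposed machine model.

```go
package main

import (
	"fmt"
	"math/big"
	"time"

	"github.com/consensys/gnark-crypto/ecc"
	bls12381 "github.com/consensys/gnark-crypto/ecc/bls12-381"
	"github.com/consensys/gnark-crypto/ecc/bls12-381/fr"
)

// Compare single-threaded vs 4-task G1 MSM timings on a 256-point input.
func main() {
	const n = 256

	_, _, g1Aff, _ := bls12381.Generators()

	points := make([]bls12381.G1Affine, n)
	scalars := make([]fr.Element, n)
	var s big.Int
	for i := range points {
		scalars[i].SetRandom()
		scalars[i].BigInt(&s)
		// Random-ish points in the correct subgroup (generator times a random scalar).
		points[i].ScalarMultiplication(&g1Aff, &s)
	}

	for _, tasks := range []int{1, 4} {
		var res bls12381.G1Jac
		start := time.Now()
		if _, err := res.MultiExp(points, scalars, ecc.MultiExpConfig{NbTasks: tasks}); err != nil {
			panic(err)
		}
		// Single-shot timing only; a real benchmark should average many runs.
		fmt.Printf("NbTasks=%d: %v\n", tasks, time.Since(start))
	}
}
```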

@jwasinger

I would like to propose that we consider reducing the price of the non-MSM precompiles as well. I am using Geth's ecrecover precompile performance as a baseline here to come up with a repricing that would be appropriate for us.

Based on my benchmarks of all the precompiles in Geth/Gnark on presumed worst-case inputs, I have come up with the following precompile-to-ecrecover performance ratios on the machines I benchmarked:

| Precompile        | MBP M2  | Xeon 8280 |
|-------------------|---------|-----------|
| g1add             | 2.74    | 2.03      |
| g1mul             | 1.512   | 1.43      |
| g2add             | 2.268   | 2.0313    |
| g2mul             | 2.29921 | 2.77      |
| mapfp             | 1.669   | 1.548     |
| mapfp2            | 4.2205  | 5.262     |
| pairing (8 pairs) | 1.732   | 2.3022    |

Based on this, I would suggest reducing the precompile prices by multiplying the current costs by the following factors:

g1add: 0.5
g1mul: 0.7
g2add: 0.5
g2mul: 0.5
mapfp: 0.55
mapfp2: 0.25

This is just food for thought, but if we are going to reprice MSM, we should consider repricing all the precompiles that have static cost models as well.
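A minimal sketch of the repricing arithmetic above: a precompile that takes r times as long as ECRECOVER on worst-case inputs gets roughly r × 3000 gas, assuming ECRECOVER itself is fairly priced. The ratios are the worse of the two machines from the table; the current-cost values are placeholders to be filled in from the EIP-2537 spec, not the actual spec numbers:

```go
package main

import "fmt"

// Sketch of the repricing logic: implied gas = worst ratio * ECRECOVER gas,
// reduction factor = implied gas / current gas.
func main() {
	const ecrecoverGas = 3000.0

	// Worse (larger) of the MBP M2 and Xeon 8280 ratios from the table above.
	worstRatio := map[string]float64{
		"g1add":  2.74,
		"g1mul":  1.512,
		"g2add":  2.268,
		"g2mul":  2.77,
		"mapfp":  1.669,
		"mapfp2": 5.262,
	}

	// Placeholders only; fill in the current EIP-2537 costs before using.
	currentGas := map[string]float64{
		"g1add": 0, "g1mul": 0, "g2add": 0,
		"g2mul": 0, "mapfp": 0, "mapfp2": 0,
	}

	for name, r := range worstRatio {
		implied := r * ecrecoverGas
		factor := 0.0
		if currentGas[name] > 0 {
			factor = implied / currentGas[name]
		}
		fmt.Printf("%-7s implied=%6.0f gas, factor vs current=%.2f\n", name, implied, factor)
	}
}
```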

@jwasinger

> However, Geth has been rather against multi-threaded execution too (@jwasinger).

We are against basing the cost model on multi-threaded execution, but Geth will use multiple threads when running these precompiles in production.

@chfast commented Oct 11, 2024

My list of questions for the call.

  1. What is the benchmark baseline? ECRECOVER?
  2. What is the range for MSM to analyze? We used 256 points. Can other implementations provide timings for such a range?
  3. What is the worst-case benchmark for MUL? (see the sketch after this list)
  4. What is the worst-case benchmark for MSM?
  5. What are the libraries used for implementations? Which clients use which library?
  6. What is the relative cost of the subgroup check in MUL?
  7. What is the relative performance improvement of endomorphism in MUL?
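For questions 3 and 6, here is a rough benchmark sketch (gnark-crypto API names per my reading of the library; note that an r-1 scalar is only a clear worst case for naive double-and-add, since GLV/windowed methods flatten the spread across scalars, which is part of what needs measuring):

```go
package main

import (
	"fmt"
	"math/big"
	"time"

	bls12381 "github.com/consensys/gnark-crypto/ecc/bls12-381"
	"github.com/consensys/gnark-crypto/ecc/bls12-381/fr"
)

// Times G1 scalar multiplication with a full-bit-length scalar (r-1)
// and the subgroup check in isolation.
func main() {
	const iters = 100

	_, _, g1Aff, _ := bls12381.Generators()

	// scalar = r - 1, where r is the BLS12-381 subgroup order.
	s := new(big.Int).Sub(fr.Modulus(), big.NewInt(1))

	var p bls12381.G1Affine
	start := time.Now()
	for i := 0; i < iters; i++ {
		p.ScalarMultiplication(&g1Aff, s)
	}
	fmt.Printf("G1 mul (r-1 scalar): %v/op\n", time.Since(start)/iters)

	start = time.Now()
	ok := true
	for i := 0; i < iters; i++ {
		ok = ok && p.IsInSubGroup()
	}
	fmt.Printf("G1 subgroup check:   %v/op (in subgroup: %v)\n", time.Since(start)/iters, ok)
}
```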

@chfast commented Oct 11, 2024

@mratsim commented Oct 14, 2024

@poojaranjan

Recording: https://youtu.be/zUIogzxkTpc
