WIP: Modularize benches #1321
Conversation
oxade commented Apr 12, 2022 (edited)
- Re-organize benchmarks
- Use common functions for benchmarks
- Change the way we measure latency: we now run a load generator and also send one-shot tracer requests to measure latency (see the sketch below)
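For context on the last point, here is a minimal sketch of the load-generator-plus-tracer idea. It is not the PR's actual code: `send_request`, the rates, and the sample counts are hypothetical stand-ins; only the shape of the measurement loop matters.

```rust
use std::time::{Duration, Instant};
use tokio::time::{interval, sleep};

// Hypothetical stand-in for submitting one transaction to the system under test.
async fn send_request() {
    sleep(Duration::from_millis(5)).await;
}

#[tokio::main]
async fn main() {
    // Load generator: fire requests at a fixed rate without awaiting them,
    // so the system under test stays busy for the whole run.
    let load = tokio::spawn(async {
        let mut ticker = interval(Duration::from_millis(1));
        for _ in 0..10_000 {
            ticker.tick().await;
            tokio::spawn(send_request());
        }
    });

    // One-shot tracers: occasionally send a single timed request; each sample
    // approximates the end-to-end latency a client would observe under load.
    let mut samples = Vec::new();
    for _ in 0..100 {
        let start = Instant::now();
        send_request().await;
        samples.push(start.elapsed());
        sleep(Duration::from_millis(50)).await;
    }

    load.abort();
    samples.sort();
    println!("p50 tracer latency: {:?}", samples[samples.len() / 2]);
}
```

The point of the tracers is that each one is an isolated, timed request, so its end-to-end latency reflects what a client would see while the load generator holds the system at the target throughput.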
Codecov Report
@@ Coverage Diff @@
## main #1321 +/- ##
==========================================
+ Coverage 81.71% 82.64% +0.93%
==========================================
Files 100 103 +3
Lines 20966 20743 -223
==========================================
+ Hits 17133 17144 +11
+ Misses 3833 3599 -234
use tracing::subscriber::set_global_default;
use tracing_subscriber::EnvFilter;

fn main() {
What are the input parameters that this function is expecting?
It uses CLI args. Am I supposed to specify something here?
Where are the CLI arguments defined?
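The definitions aren't visible in this thread. Purely for illustration, CLI options for a benchmark binary are often declared with clap's derive API; the struct and flag names below are hypothetical, not the ones in sui/src/benchmark.rs.

```rust
use clap::Parser;

/// Hypothetical options struct for a benchmark binary (clap 3 derive API).
#[derive(Parser, Debug)]
#[clap(name = "benchmark")]
struct BenchOpts {
    /// Number of transactions to submit.
    #[clap(long, default_value = "10000")]
    num_transactions: usize,

    /// Size of each submission batch.
    #[clap(long, default_value = "100")]
    batch_size: usize,
}

fn main() {
    // `parse()` reads the process arguments, so the benchmark is driven
    // entirely by its command line.
    let opts = BenchOpts::parse();
    println!("{:?}", opts);
}
```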
sui/src/benchmark.rs (Outdated)
let receiver = authority_client
    .handle_batch_streaming_as_stream(BatchInfoRequest {
        start,
        end: start + 10_000,
What is the 10_000?
This was added by @gdanezis; I simply ported it over.
In reality we want to benchmark this follower logic more thoroughly, but that's still in the works.
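One way to make a value like this self-documenting is to lift it into a named constant. The sketch below is illustrative only: the constant name is made up, and `BatchInfoRequest` is redeclared locally just so the example compiles on its own.

```rust
/// How far past `start` the follower stream should read.
/// (Illustrative name; the diff hard-codes 10_000 inline.)
const FOLLOWER_STREAM_SPAN: u64 = 10_000;

/// Local stand-in for the request type seen in the diff, so the sketch compiles.
struct BatchInfoRequest {
    start: u64,
    end: u64,
}

fn batch_info_request(start: u64) -> BatchInfoRequest {
    BatchInfoRequest {
        start,
        end: start + FOLLOWER_STREAM_SPAN,
    }
}

fn main() {
    let req = batch_info_request(0);
    println!("requesting batches {}..{}", req.start, req.end);
}
```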
sui/src/benchmark.rs
Outdated
let runtime = Builder::new_multi_thread()
    .enable_all()
    .thread_stack_size(32 * 1024 * 1024)
    .worker_threads(usize::min(num_cpus::get(), 24))
Depending on where this will be run, we might get more reproducible results if we use, say, half the CPUs.
(Also, the authority should probably use only half as well; it looks like it's using all cores right now.)
I tried with half and still got the same level of variance.
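For reference, a sketch of the half-the-cores suggestion, keeping the same tokio `Builder` and `num_cpus` calls that appear in the snippet above; the `expect` message and the empty benchmark body are placeholders.

```rust
use tokio::runtime::Builder;

fn main() {
    // Cap the client at half the logical cores (at least one) so it doesn't
    // contend with the authority for every CPU during the benchmark.
    let worker_threads = usize::max(1, num_cpus::get() / 2);

    let runtime = Builder::new_multi_thread()
        .enable_all()
        .thread_stack_size(32 * 1024 * 1024)
        .worker_threads(worker_threads)
        .build()
        .expect("failed to build runtime");

    runtime.block_on(async {
        // benchmark body goes here
    });
}
```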