Prescientfuzz testing #1982
Conversation
Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). View this failed invocation of the CLA check for more information. For the most up to date status, view the checks section at the bottom of the pull request.
Thanks for submitting a PR, @DanBlackwell! This makes our work a lot easier :) Once it is ready, we can use the command below.
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-05-10_prescientfuzz_init --fuzzers libafl aflplusplus prescientfuzz honggfuzz libfuzzer
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-05-10-prescientfuzz_init --fuzzers prescientfuzz
@DonggeLiu Has this failed to build? I can't see anything in that CI log.
Experiment
Yes, I failed to notice that the experiment name does not match this pattern:
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-05-14-prescientfuzz-init --fuzzers prescientfuzz |
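Comparing the names above, the underscore-separated name was rejected while the fully hyphenated, date-prefixed one went through. A hedged reconstruction of the accepted pattern (inferred from this thread, not the project's documented regex) might look like:

```python
import re

# Hypothetical reconstruction of the experiment-name rule, inferred from which
# names in this thread succeeded: a YYYY-MM-DD date prefix followed by
# hyphen-separated lowercase alphanumeric tokens (no underscores).
EXPERIMENT_NAME = re.compile(r"^\d{4}-\d{2}-\d{2}(-[a-z0-9]+)+$")

print(bool(EXPERIMENT_NAME.match("2024-05-14-prescientfuzz-init")))  # True
print(bool(EXPERIMENT_NAME.match("2024-05-10_prescientfuzz_init")))  # False
```

A quick pre-flight check like this before pinging a maintainer would have caught the underscore issue locally.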
Ok, still some dying from memory starvation. I think I have it fixed now; any chance you could rerun that exact setup for me, @DonggeLiu? Oh, is there any caching in the Docker setup? I've only updated the fuzzer source repo, so if Docker caches the build images it probably won't fetch the updated version.
I vaguely recall that this has caused problems before. I'm happy to re-run the experiment when you are ready; please feel free to ping me.
Ok, I have manually specified the commit hash, which should bust the cache. All ready to go!
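For context, pinning an exact commit in the build step is a common way to force Docker to refetch an updated repo, since a changed hash invalidates the layer cache from that instruction onward. A hypothetical sketch (the repo URL and hash are placeholders, not the actual PrescientFuzz build files):

```dockerfile
# Hypothetical sketch: pinning the fuzzer source to an exact commit.
# Changing FUZZER_COMMIT invalidates Docker's layer cache from this point on,
# so the clone is re-run instead of being served from a stale cached layer.
ARG FUZZER_COMMIT=0123456789abcdef
RUN git clone https://github.com/example/prescientfuzz /prescientfuzz && \
    cd /prescientfuzz && \
    git checkout "$FUZZER_COMMIT"
```

Cloning a moving branch without a pin, by contrast, leaves the layer cached until something earlier in the Dockerfile changes.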
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-05-16-prescientfuzz-init --fuzzers prescientfuzz |
Experiment
I forgot that it needs
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-05-17-prescientfuzz-init --fuzzers prescientfuzz |
Experiment
Hi @DonggeLiu; any chance you can restart it? Just patched another bug, sorry.
Sure! I've terminated all instances of the previous experiment and approved the CIs. |
The CI looks ok to me, and I ran one of the previously failing benchmarks through the debug-builder earlier. I'm hoping this run should have everything working finally; I appreciate your patience! (I'm trying to build a global CFG without the LTO pass, which has been tricky for me.)
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-05-17-prescientfuzz-ini --fuzzers prescientfuzz |
The Experiment
Hi @DonggeLiu, finally I have everything building and running; am I allowed to run, say, 5 instances to test different parameter setups? I'm thinking to add each setup as a different 'fuzzer' (in
Also, I wanted to generate that report just for PrescientFuzz vs LibAFL (as the graphs are hard to read with so many fuzzers); I tried doing the following but got an error:
I've tried searching, but I'm a bit stumped as to how it's possible for this to happen; although I'm not particularly experienced with pip/Python, so maybe matplotlib is just not installed properly?
Yep sure, this requires changing this value to 5.
Yep, this is the simplest way.
Hopefully running the following should get all 4 up together:
I'm guessing you might have to tweak something so that it doesn't merge with the other experiments and leave the graphs too messy? |
Yep, if you want to compare these 4 only (i.e., no other fuzzers in the report), please set this value accordingly. Do you still want to run 5 instances for each fuzzer/setup?
Thanks, @DanBlackwell!
Ah, this doesn't need rerunning; I thought I'd point it out in case it's useful for identifying the source of the flakiness (if indeed it is flakiness).
Yeah, so I want to generate a report manually, but merge in the data from other experiments. Downloading the
I recall I had to manually concat multiple
Good news: I've now figured out how to generate the combined reports that I wanted (I think I must have broken the CSV in some way before).
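For the record, merging several experiments' data for a combined report can be as simple as row-wise concatenation of CSVs that share a schema. A minimal sketch with made-up columns (not the real FuzzBench `data.csv` schema):

```python
import pandas as pd

# Hypothetical rows standing in for two experiments' data CSVs; the real
# FuzzBench data.csv columns will differ.
exp_a = pd.DataFrame({"experiment": ["2024-05-17-prescientfuzz-init"],
                      "fuzzer": ["prescientfuzz"],
                      "edges_covered": [1000]})
exp_b = pd.DataFrame({"experiment": ["2024-06-19-prescientfuzz"],
                      "fuzzer": ["libafl"],
                      "edges_covered": [1200]})

# Concatenating row-wise keeps every experiment's rows;
# ignore_index=True renumbers the combined index from 0.
combined = pd.concat([exp_a, exp_b], ignore_index=True)
print(len(combined))  # 2
```

A malformed intermediate CSV (e.g. a stray header row from a naive `cat` of two files) is exactly the kind of breakage that silently corrupts this merge, which may explain the earlier failure.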
Hi @DonggeLiu, could you run the following command for me:
I believe this will be the final setup and then I'll have all I need to write up the experimentation in full. |
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-06-19-prescientfuzz --fuzzers prescientfuzz_direct_neighbour_fast prescientfuzz_direct_neighbour_fast_w_backoff libafl_rand_scheduler
I'm really sorry, I managed to make a typo there which broke what I was trying to test. @DonggeLiu, any chance you could re-run it with the following?
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-07-03-prescientfuzz --fuzzers prescientfuzz_direct_neighbour_fast prescientfuzz_direct_neighbour_fast_w_backoff
Hi @DonggeLiu, could you please run the following for me?
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-07-30-prescientfuzz --fuzzers prescientfuzz
Experiment
Hi @DonggeLiu / @jonathanmetzman, I saw this comment suggesting that we may be able to start experiments ourselves: #2002 (comment). Is that just for the base fuzzer maintainers? I would like to run the following, as I've just found and fixed a pretty insidious bug:
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-08-01-prescientfuzz --fuzzers prescientfuzz
Experiment
Hi Dan,
Hi @DonggeLiu, thanks for the info! Tbh, I think this stuff is ready to merge once I figure out what is going on with libjpeg (#2010); that may be on my end, but I can't seem to reproduce the old results even when using the exact commits that ran well in the past :/ Also, any chance you can see what is up with the
Hi @DanBlackwell, instead of merging this PR, I can help you merge another tiny (or even no-op) PR so that you can run CIs automatically. We can close this PR once the experiments are done, and you can always reopen it and modify the code.
@DonggeLiu, it is up to you. This is similar to AFLFast vs AFL, in that all I've done is add a new scheduler to LibAFL. It's got a bit of a bump over libafl (see here), so my plan was to make this setup available to test against (i.e. merge this PR), but maybe that's just polluting the
If you think it's better to keep this separate, we can close this PR now. With respect to auto-running the CI, is that just the CI that does the build checks, or does it also run some experiments?
Thanks for the explanation and being considerate, @DanBlackwell. PR experiments reduce this maintenance burden so that we can close PRs once all experiments are done. |
Hi, I have a new fuzzer based on LibAFL that I would like to integrate. I'd like to be able to run an experiment comparing it with the other fuzzers, but the documented approach (adding to https://github.com/google/fuzzbench/blob/master/service/experiment-requests.yaml) doesn't seem to have been used lately; is there some automatic experiment that runs periodically?