Submission checker version 4.0 #1560
Conversation
MLCommons CLA bot: All contributors have signed the MLCommons CLA ✍️ ✅
@@ -1309,6 +1509,7 @@ def __init__(
         self.seeds = self.base["seeds"]
         self.test05_seeds = self.base["test05_seeds"]
         self.accuracy_target = self.base["accuracy-target"]
+        self.accuracy_upper_limit = self.base["accuracy-upper-limit"]
I think we should use self.base.get("accuracy-upper-limit"), as the key is not present for all the benchmarks.
This is done here: https://github.com/pgmpablo157321/inference/blob/f7c755a0e156f51c1684f984889b309b8afaffc2/tools/submission/submission_checker.py#L1618
The script passes the dictionary to the config and calls .get for each model.
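To illustrate the point under discussion, here is a minimal sketch (not the actual submission checker code; the config dictionary below is a made-up example) of why .get is safer than indexing when a key such as "accuracy-upper-limit" exists only for some benchmarks:

```python
# Hypothetical per-model config: has an accuracy target but no upper limit.
base = {"accuracy-target": 0.99}

# Plain indexing raises KeyError for benchmarks without the key:
try:
    limit = base["accuracy-upper-limit"]
except KeyError:
    limit = None

# .get returns None (or a chosen default) instead of raising:
limit = base.get("accuracy-upper-limit")
print(limit)  # None for benchmarks that define no upper limit
```

With .get, benchmarks that lack the key simply see the attribute set to None, so the checker can branch on it later instead of crashing at config-load time.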
@pgmpablo157321 can you also change the loadgen version similar to:
    "3d-unet-99.9",
    "gptj-99",
    "gptj-99.9",
    "llama-v2-70b-99",
I believe the model name should be llama2-70b, according to the official website: https://huggingface.co/docs/transformers/main/model_doc/llama2
Same for all occurrences.
I copied it from here: https://github.com/pgmpablo157321/inference/blob/f7c755a0e156f51c1684f984889b309b8afaffc2/language/llama2-70b/mlperf.conf#L59C1-L59C1
Should we change all the references in that folder as well?
I think we should change all to llama2. The llama-v2 name will cause confusion.
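A bulk rename like the one proposed could be done with a small script. This is an illustrative sketch only, not part of the PR; the function name, the file-extension filter, and the default strings are assumptions:

```python
from pathlib import Path

def rename_references(root: str,
                      old: str = "llama-v2-70b",
                      new: str = "llama2-70b") -> int:
    """Replace every occurrence of `old` with `new` in text files under
    `root`; return the number of files changed. (Hypothetical helper.)"""
    changed = 0
    for path in Path(root).rglob("*"):
        # Only touch file types likely to reference the model name (assumption).
        if not path.is_file() or path.suffix not in {".py", ".conf", ".md"}:
            continue
        text = path.read_text()
        if old in text:
            path.write_text(text.replace(old, new))
            changed += 1
    return changed
```

Running it against a checkout of the language/llama2-70b folder would update the mlperf.conf reference linked above along with any other occurrences, keeping the checker and the benchmark configs consistent.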
Force-pushed from f7c755a to 14efc69
Force-pushed from 03a7a0f to 9d3d964
Thank you Pablo for the changes.
Thank you Pablo! Can we merge this ASAP so logs can be collected 🙏
This PR contains the necessary changes for v4.0 submissions, with the exception of some pending changes for Llama2.