Extract eval code from GPTQ for more general usage #275
Conversation
🔗 Helpful Links: 🧪 see artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/275. Note: links to docs will display an error until the docs builds have been completed. ✅ No failures as of commit 3410422 with merge base 7511b1d. This comment was automatically generated by Dr. CI and updates every 15 minutes.
```
@@ -186,9 +186,14 @@ def test_8da4w_quantizer(self):
        assert isinstance(m.linear2, Int8DynActInt4WeightLinear)
        m(*example_inputs)

    # TODO: save model weights as artifacts and re-enable in CI
```
Until this is running in CI, mind if we have an `eval.py` we can run in a new `scripts/` folder?
Ok, will do in the next PR. Currently the models still live under `test/`, so we'll probably have to move those out as well.
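A hypothetical skeleton of what such a `scripts/eval.py` entry point could look like (the flag names and defaults below are illustrative assumptions, not torchao's actual interface):

```python
import argparse

def build_parser():
    # Hypothetical CLI for a scripts/eval.py entry point; flag names
    # are illustrative assumptions, not torchao's actual interface.
    parser = argparse.ArgumentParser(
        description="Evaluate a (quantized) model with lm_eval"
    )
    parser.add_argument("--checkpoint", required=True,
                        help="path to the model checkpoint to evaluate")
    parser.add_argument("--tasks", nargs="+", default=["wikitext"],
                        help="lm_eval task names to run")
    parser.add_argument("--quantizer", default=None,
                        help="optional quantizer to apply before eval")
    return parser

if __name__ == "__main__":
    args = build_parser().parse_args()
    print(args)
```

Keeping the parser construction in a function makes the script importable from tests without triggering argument parsing.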
torchao/quantization/_eval.py (Outdated)
```
@@ -0,0 +1,228 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
```
Can we put this in a higher-level namespace? Something like `torchao._eval`, `torchao.model._eval`, or `torchao.util`?
```
    "Int4WeightOnlyGPTQQuantizer",
    "Int4WeightOnlyQuantizer",
] + add_ons

if lm_eval_available:
```
awesome <3 thank you for making this change
Summary: This commit extracts all the eval code from GPTQ.py. This is the first step towards having a general eval framework in torchao. The eventual goal is to use lm_eval to produce reproducible benchmarks for the quantization APIs in torchao that we can showcase on the main README. This will have the added benefit of allowing us to add (possibly nightly) regression test suites for important models.

Test Plan: `python test/quantization/test_quant_api.py -k test_8da4w_gptq_quantizer`

```
2024-05-24:14:50:32,647 INFO [task.py:395] Building contexts for wikitext on rank 0...
100%|██████████| 1/1 [00:00<00:00, 1132.98it/s]
2024-05-24:14:50:32,648 INFO [evaluator.py:362] Running loglikelihood_rolling requests
100%|██████████| 1/1 [00:51<00:00, 51.39s/it]
wikitext: {'word_perplexity,none': 7.877762491958485, 'word_perplexity_stderr,none': 'N/A', 'byte_perplexity,none': 1.488984329919892, 'byte_perplexity_stderr,none': 'N/A', 'bits_per_byte,none': 0.5743285710685551, 'bits_per_byte_stderr,none': 'N/A', 'alias': 'wikitext'}
.
----------------------------------------------------------------------
Ran 1 test in 858.105s

OK
```
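For context on the numbers in the wikitext log above: lm_eval's rolling-loglikelihood metrics are internally consistent, with bits_per_byte equal to log2 of byte_perplexity (and indeed log2(1.488984...) ≈ 0.574329). A minimal restatement of those formulas, as a simplified sketch rather than lm_eval's actual implementation:

```python
import math

def byte_perplexity(total_loglikelihood, num_bytes):
    # exp of the negative average log-likelihood per byte
    # (simplified sketch of lm_eval's byte_perplexity metric)
    return math.exp(-total_loglikelihood / num_bytes)

def bits_per_byte(total_loglikelihood, num_bytes):
    # the same quantity expressed in base-2 bits per byte,
    # i.e. log2(byte_perplexity)
    return -total_loglikelihood / (num_bytes * math.log(2))

def word_perplexity(total_loglikelihood, num_words):
    # exp of the negative average log-likelihood per word
    return math.exp(-total_loglikelihood / num_words)
```

These identities make the logged metrics easy to sanity-check against each other when comparing quantized and baseline runs.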
Summary: Fix broken import after pytorch/ao#275 Reviewed By: jerryzh168 Differential Revision: D57888168
```
import torch

from .utils import _lm_eval_available, _MultiInput
```
These two are in a different directory, i.e. the package name should be `.quantization.utils`.
thanks for catching that, will submit a fix
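The fix in question is presumably a one-line import correction along these lines (a sketch assuming `_eval.py` sits at the torchao package root; the actual change landed separately in D57888168):

```diff
-from .utils import _lm_eval_available, _MultiInput
+from .quantization.utils import _lm_eval_available, _MultiInput
```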
Summary: Pull Request resolved: #3760 Fix broken import after pytorch/ao#275 Reviewed By: jerryzh168 Differential Revision: D57888168 fbshipit-source-id: 51a63131ae14e362991ef962df325ec24f958e2d