10 changes: 5 additions & 5 deletions docs/alg_202508.md
@@ -4,11 +4,11 @@ in [modeling_llama.py](https://github.com/huggingface/transformers/blob/main/src
to stabilize accuracy during evaluation. All other settings follow the default configurations of AutoRound and lm-eval.

| Qwen3-8B W2G64 | Avg. | arc_challenge | hellaswag | gsm8k | lambada_openai | mmlu | mmlupro | truthfulqa_mc1 | winogrande |
-|-------------------|--------|---------------|-----------|--------|----------------|--------|---------|----------------|------------|
-| AutoRound | 0.4373 | 0.4019 | 0.4437 | 0.4215 | 0.4826 | 0.5474 | 0.263 | 0.3072 | 0.6314 |
+|:-------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
+| AutoRound | 0.4373 | 0.4019 | 0.4437 | 0.4215 | 0.4826 | 0.5474 | 0.2630 | 0.3072 | 0.6314 |
| AutoRound+alg_ext | 0.4787 | 0.4275 | 0.4516 | 0.5944 | 0.5181 | 0.5773 | 0.2807 | 0.3305 | 0.6496 |

| Llama3.1-8B W2G64 | Avg. | arc_challenge | hellaswag | gsm8k | lambada_openai | mmlu | mmlupro | truthfulqa_mc1 | winogrande |
-|-------------------|--------|---------------|-----------|--------|----------------|--------|---------|----------------|------------|
-| AutoRound | 0.382 | 0.3635 | 0.4562 | 0.1622 | 0.5069 | 0.4411 | 0.1661 | 0.3207 | 0.6393 |
-| AutoRound+alg_ext | 0.4166 | 0.3712 | 0.4729 | 0.2039 | 0.5946 | 0.4981 | 0.2163 | 0.3011 | 0.6748 |
+|:-------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
+| AutoRound | 0.3820 | 0.3635 | 0.4562 | 0.1622 | 0.5069 | 0.4411 | 0.1661 | 0.3207 | 0.6393 |
+| AutoRound+alg_ext | 0.4166 | 0.3712 | 0.4729 | 0.2039 | 0.5946 | 0.4981 | 0.2163 | 0.3011 | 0.6748 |
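
The settings note kept in this hunk says everything else follows the default configurations of AutoRound and lm-eval. A minimal sketch of that W2G64 flow is below; the model id, output path, task names, and the use of the Python lm-eval API are assumptions for illustration rather than the exact script behind these numbers, the alg_ext variant is enabled through a separate AutoRound option not shown here, and loading the saved checkpoint through the plain `hf` backend assumes the AutoRound runtime is installed.

```python
# Minimal sketch, not the exact script behind the table: quantize Qwen3-8B
# to W2G64 with AutoRound defaults, then score the checkpoint with lm-eval.
# Paths, dtype handling, and the task list are illustrative assumptions;
# registered task names can differ across lm-eval versions (e.g. mmlu_pro).
from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRound
import lm_eval

model_name = "Qwen/Qwen3-8B"
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)

# W2G64: 2-bit weights with group size 64; everything else stays at defaults.
autoround = AutoRound(model, tokenizer, bits=2, group_size=64)
autoround.quantize()
autoround.save_quantized("./Qwen3-8B-W2G64", format="auto_round")

# Evaluate with lm-eval defaults on the tasks listed in the table header.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=./Qwen3-8B-W2G64",
    tasks=["arc_challenge", "hellaswag", "gsm8k", "lambada_openai",
           "mmlu", "mmlu_pro", "truthfulqa_mc1", "winogrande"],
)
print(results["results"])
```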
10 changes: 5 additions & 5 deletions docs/auto_scheme_acc.md
@@ -13,7 +13,7 @@ For mxfp experiment, we use fake model while for weight only model we use real m
### Table 1 MXFP4/8 mixed accuracy.

| Average bits | Llama3.1-8B-I | Qwen2.5-7B-I | Qwen3-8B | Qwen3-32B |
-|------------------|----------------|----------------|----------------|----------------|
+|:------------------|:----------------:|:----------------:|:----------------:|:----------------:|
| **BF16** | 0.7076 (100%) | 0.7075 (100%) | 0.6764 (100%) | 0.7321 (100%) |
| **Pure 4-bit** | 0.6626 (93.6%) | 0.6550 (92.6%) | 0.6316 (93.4%) | 0.6901 (94.3%) |
| **Ours 4.5-bit** | 0.6808 (96.2%) | 0.6776 (95.8%) | 0.6550 (96.8%) | 0.7176 (98.0%) |
@@ -27,15 +27,15 @@ performance advantages.
### Table 2 Comparison with other recipes at an average of 5 bits of mxfp datatype

| Avg. bits = 5 | Llama3.1-8B-I | Qwen2.5-7B-I | Qwen3-8B |
-|-----------------------|-------------------:|-------------------:|-------------------:|
+|:------------------|:----------------:|:----------------:|:----------------:|
| **Tail layers 8-bit** | 0.6671 (94.3%) | 0.6616 (93.5%) | 0.6410 (94.8%) |
| **Head layers 8-bit** | 0.6657 (94.1%) | 0.6686 (94.5%) | 0.6356 (94.0%) |
| **Ours** | **0.6857 (96.9%)** | **0.6823 (96.4%)** | **0.6594 (97.5%)** |

### Table 3 Comparison with other recipes at an average of 4.5 bits of mxfp datatype

| Avg. bits = 4.5 | Llama3.1-8B-I | Qwen2.5-7B-I | Qwen3-8B |
-|-----------------------|-------------------:|-------------------:|-------------------:|
+|:------------------|:----------------:|:----------------:|:----------------:|
| **Tail layers 8-bit** | 0.6614 (93.5%) | 0.6535 (92.4%) | 0.6373 (94.2%) |
| **Head layers 8-bit** | 0.6568 (92.8%) | 0.6642 (93.9%) | 0.6305 (93.2%) |
| **Ours** | **0.6808 (96.2%)** | **0.6776 (95.5%)** | **0.6550 (95.8%)** |
@@ -44,7 +44,7 @@ performance advantages.
### Table4 Comparison with other recipes at an average of 3 bits of W2G128 and W4G128

| Avg. bits = 4.5 | Llama3.1-8B-I | Qwen2.5-7B-I | Qwen3-8B |
-|-----------------------|--------------:|-------------:|---------:|
+|:------------------|:----------------:|:----------------:|:----------------:|
| **Tail layers 4-bit** | 0.6058 | 0.3798 | 0.4536 |
| **Head layers 4-bit** | 0.3198 | 0.3270 | 0.3196 |
-| **Ours** | 0.6148 | 0.4058 | 0.4862 |
+| **Ours** | 0.6148 | 0.4058 | 0.4862 |
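
A note on reading the "Avg. bits" settings in docs/auto_scheme_acc.md: treating the figure as a parameter-weighted mean of per-layer weight bit-widths, and ignoring per-group scale overhead, a 3:1 split between 4-bit and 8-bit parameters gives 5 bits and a 7:1 split gives 4.5. The sketch below only shows that bookkeeping under those assumptions; the layer sizes are invented, and it says nothing about how the mixed recipe actually chooses which layers get 8 bits.

```python
# Illustration only: what a parameter-weighted "average bits" figure means.
# Layer sizes are invented; this is not the algorithm that picks the recipe.
def average_bits(assignment):
    """assignment: iterable of (num_params, bits) pairs, one per layer group."""
    total = sum(n for n, _ in assignment)
    return sum(n * b for n, b in assignment) / total

# Three quarters of the weights at MXFP4 and one quarter at MXFP8
# averages out to the "Avg. bits = 5" setting (scale overhead ignored).
example = [(3_000_000_000, 4), (1_000_000_000, 8)]
print(average_bits(example))  # 5.0
```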
16 changes: 8 additions & 8 deletions docs/mxnv_acc.md
@@ -3,13 +3,13 @@ Average accuracy of hellaswag,lambada_openai,mmlu,piqa,winogrande.
We evaluated using a fake model since we currently have no access to devices for running the real models. However, we have verified that in most cases the fake model closely matches the real model.

| mxfp4 g32 | llama3.1-8B-Instruct | Qwen2-7.5-Instruct | Phi4 | Qwen3-32B |
-|-------------------|----------------------|--------------------|---------|-----------|
-| RTN | 0.62124 | 0.65502 | 0.71674 | 0.69006 |
-| AutoRound | 0.66862 | 0.67588 | 0.72472 | 0.72106 |
-| AutoRound+alg_ext | 0.6732 | 0.68094 | 0.72252 | 0.72012 |
+|:-------------------|:----------------------:|:--------------------:|:---------:|:-----------:|
+| RTN | 0.6212 | 0.6550 | 0.7167 | 0.6901 |
+| AutoRound | 0.6686 | 0.6758 | 0.7247 | 0.7211 |
+| AutoRound+alg_ext | 0.6732 | 0.6809 | 0.7225 | 0.7201 |

| nvfp4 g16 | llama3.1-8B-Instruct | Qwen2-7.5-Instruct | Phi4 | Qwen3-32B |
-|-------------------|----------------------|--------------------|---------|-----------|
-| RTN | 0.68756 | 0.6906 | 0.72962 | 0.71636 |
-| AutoRound | 0.69184 | 0.69728 | 0.73058 | 0.73062 |
-| AutoRound+alg_ext | 0.69648 | 0.6989 | 0.7318 | 0.72948 |
+|:-------------------|:----------------------:|:--------------------:|:---------:|:-----------:|
+| RTN | 0.6876 | 0.6906 | 0.7296 | 0.7164 |
+| AutoRound | 0.6918 | 0.6973 | 0.7306 | 0.7306 |
+| AutoRound+alg_ext | 0.6965 | 0.6989 | 0.7318 | 0.7295 |
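
The note kept at the top of docs/mxnv_acc.md explains that these scores come from a fake, i.e. quantize-dequantize, model rather than a real low-bit runtime. The sketch below shows roughly what that means for an mxfp4-style g32 format: each 32-element block shares a power-of-two scale and elements snap to the nearest FP4 E2M1 value. The scale choice and rounding rule here are simplifications assumed for illustration, not the OCP MX specification or AutoRound's exact kernel.

```python
# Simplified fake (quantize-dequantize) pass for an mxfp4-style g32 format:
# each 32-element block shares a power-of-two scale and elements snap to the
# nearest FP4 E2M1 value. The scale rule below is an assumption chosen so the
# block maximum stays representable; it is not the exact OCP MX procedure.
import numpy as np

E2M1 = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])  # magnitudes of signed FP4 E2M1

def fake_quant_mxfp4(weights: np.ndarray, group_size: int = 32) -> np.ndarray:
    flat = weights.reshape(-1, group_size).astype(np.float64)
    out = np.empty_like(flat)
    for i, block in enumerate(flat):
        amax = np.abs(block).max()
        if amax == 0.0:
            out[i] = 0.0
            continue
        # Shared power-of-two scale so the largest magnitude maps inside [0, 6].
        scale = 2.0 ** np.ceil(np.log2(amax / E2M1[-1]))
        scaled = block / scale
        # Snap each element to the nearest representable magnitude, keep the sign.
        idx = np.abs(np.abs(scaled)[:, None] - E2M1[None, :]).argmin(axis=1)
        out[i] = np.sign(scaled) * E2M1[idx] * scale
    return out.reshape(weights.shape).astype(weights.dtype)

w = np.random.randn(4, 64).astype(np.float32)
print(np.abs(w - fake_quant_mxfp4(w)).mean())  # mean error introduced by the fake pass
```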