int8wo Embedding Quant #1167


Merged
merged 1 commit on Oct 25, 2024
Conversation

@HDCharles (Contributor) commented Oct 25, 2024

Summary: Added int8 embedding quantization to torchao; it speeds up inference on our llama benchmark from 107.8 to 108.5 tok/s on an A100.

The expected API is:

quantize_(model, int8_weight_only(group_size=64), filter_fn=lambda x, *args: isinstance(x, torch.nn.Embedding))
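
A minimal end-to-end sketch of that call (the toy model below is illustrative and assumes a torchao build that includes this change; it is not taken from the PR):

import torch
from torchao.quantization import quantize_, int8_weight_only

# toy model with an embedding and a linear head; the filter_fn below restricts
# the int8 weight-only config to the nn.Embedding module only
model = torch.nn.Sequential(
    torch.nn.Embedding(1024, 128),
    torch.nn.Linear(128, 128),
)

quantize_(
    model,
    int8_weight_only(group_size=64),
    filter_fn=lambda x, *args: isinstance(x, torch.nn.Embedding),
)

idx = torch.randint(0, 1024, (2, 16))
out = model(idx)   # the embedding lookup now runs on the int8 weight
print(out.shape)   # torch.Size([2, 16, 128])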

Test Plan:

python generate.py --checkpoint_path $CHECKPOINT_PATH/$MODEL_REPO/model.pth --quantization embed-int8wo --compile
python generate.py --checkpoint_path $CHECKPOINT_PATH/$MODEL_REPO/model.pth --compile
python test_integration.py -k "test_weight_only_groupwise_embedding_quant"

Reviewers:

Subscribers:

Tasks:

Tags:

pytorch-bot bot commented Oct 25, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/1167

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit e251e67 with merge base 4b563f2:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

facebook-github-bot added the CLA Signed label (this label is managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed) on Oct 25, 2024
HDCharles changed the title from [draft] Embedding Quant to int8wo Embedding Quant on Oct 25, 2024
# F.embedding override: args[0] is the index tensor, args[1] the quantized embedding weight
idx = args[0]
# unpack the quantized weight into raw int8 data plus per-group scale/zero_point
int_data, scale, zero_point = args[1].tensor_impl.get_plain()
# only the plain embedding path is supported; reject padding/norm/sparse options
assert kwargs["padding_idx"] is None and kwargs["max_norm"] is None and not kwargs["scale_grad_by_freq"] and not kwargs["sparse"] and kwargs["norm_type"] == 2.0
# gather the rows (and their quantization params) for the requested indices
sliced_data, sliced_scale, sliced_zero_point = int_data[idx], scale[idx], zero_point[idx]
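
For context on what those sliced pieces carry, here is a hedged, torch-only illustration of the groupwise affine dequantization they enable (toy shapes, group_size=64, and zero-valued zero points are assumptions for illustration; this is not the actual torchao kernel):

import torch

group_size = 64
num_idx, dim = 5, 128                                   # toy sizes
sliced_data = torch.randint(-128, 128, (num_idx, dim), dtype=torch.int8)
sliced_scale = torch.rand(num_idx, dim // group_size)
sliced_zero_point = torch.zeros(num_idx, dim // group_size)

# groupwise affine dequant: (q - zero_point) * scale, one scale/zero_point per group of 64 columns
q = sliced_data.reshape(num_idx, dim // group_size, group_size).to(torch.float32)
deq = (q - sliced_zero_point.unsqueeze(-1)) * sliced_scale.unsqueeze(-1)
out = deq.reshape(num_idx, dim)                         # float rows returned for the requested idx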
Contributor

are there any restrictions on idx for this to be valid?

Contributor Author

not as far as our tests show
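
A quick standalone check along those lines, as a hedged sketch (this is not the actual test_weight_only_groupwise_embedding_quant test, and the tolerance is illustrative):

import torch
from torchao.quantization import quantize_, int8_weight_only

emb = torch.nn.Embedding(512, 128)
ref_weight = emb.weight.detach().clone()

quantize_(
    emb,
    int8_weight_only(group_size=64),
    filter_fn=lambda x, *args: isinstance(x, torch.nn.Embedding),
)

# arbitrary indices: repeated, unordered, multi-dimensional
idx = torch.randint(0, 512, (3, 7, 11))
out = emb(idx)
expected = torch.nn.functional.embedding(idx, ref_weight)
assert torch.allclose(out, expected, atol=1e-1)   # loose tolerance for int8 rounding error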

HDCharles merged commit e85c1a3 into main on Oct 25, 2024
17 checks passed
Labels
CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed.
4 participants