Following the request in pytorch/pytorch#153019, we enable AWQ-uint4 for Intel GPU in pytorch/ao now that RTN is ready. The current AWQ-uint4 implementation supports both ZeroPointDomain.FLOAT and ZeroPointDomain.INT.
How to run the AWQ quantization flow:
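A minimal sketch of the calibrate-then-quantize flow, assuming torchao's prototype AWQ API (`insert_awq_observer_`, `awq_uint4`, `AWQObservedLinear` in `torchao.prototype.awq`); exact names, signatures, and the `group_size`/calibration settings shown here are assumptions and may differ across torchao versions. It requires a GPU and access to the Hugging Face model, so it is illustrative rather than copy-paste runnable:

```python
# Sketch: AWQ-uint4 quantization of a Hugging Face model with torchao.
# The prototype API names below (insert_awq_observer_, awq_uint4,
# AWQObservedLinear) are assumptions and may change between releases.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from torchao.quantization import quantize_
from torchao.prototype.awq import insert_awq_observer_, awq_uint4, AWQObservedLinear

model_id = "meta-llama/Llama-3.1-8B-Instruct"
device = "xpu"  # Intel GPU; use "cuda" on NVIDIA

model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16
).to(device)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Insert observers so AWQ can collect activation statistics for its
# per-channel scale search (example hyperparameters).
insert_awq_observer_(
    model,
    n_validation_examples=10,
    validation_sequence_len=512,
    quant_dtype=torch.uint4,
    group_size=64,
)

# Run calibration data through the model to populate the observers.
calibration_texts = ["The quick brown fox jumps over the lazy dog."]
for text in calibration_texts:
    inputs = tokenizer(text, return_tensors="pt").to(device)
    with torch.no_grad():
        model(**inputs)

# Swap the observed linear layers for AWQ-quantized uint4 weights.
is_observed_linear = lambda m, fqn: isinstance(m, AWQObservedLinear)
quantize_(model, awq_uint4(quant_dtype=torch.uint4, group_size=64), is_observed_linear)
```

The zero-point domain (float vs. int) is selected by the quantization config; the perplexity results below compare both domains.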
#Results of meta-llama/Llama-3.1-8B-Instruct for int domain on Intel GPU:
{'perplexity': {'perplexity': 10.099576950073242, 'prediction_time': 0.20489671968780787}}
#Results of meta-llama/Llama-3.1-8B-Instruct for float domain on Intel GPU:
{'perplexity': {'perplexity': 10.166366577148438, 'prediction_time': 0.12182355982012454}}
#Results of meta-llama/Llama-3.1-8B-Instruct for float domain on NVIDIA A100 GPU:
{'perplexity': {'perplexity': 10.160041809082031, 'prediction_time': 0.4466673863672577}}