This repository has been archived by the owner on Oct 11, 2024. It is now read-only.

[MISC] Remove FP8 warning (vllm-project#5472)
Co-authored-by: Philipp Moritz <pcmoritz@gmail.com>
2 people authored and robertgshaw2-neuralmagic committed Jun 16, 2024
1 parent 90c237d commit 2752570
Showing 1 changed file with 1 addition and 1 deletion.

vllm/config.py
@@ -244,7 +244,7 @@ def _verify_quantization(self) -> None:
                     f"{self.quantization} quantization is currently not "
                     f"supported in ROCm.")
             if (self.quantization
-                    not in ["marlin", "gptq_marlin_24", "gptq_marlin"]):
+                    not in ("fp8", "marlin", "gptq_marlin_24", "gptq_marlin")):
                 logger.warning(
                     "%s quantization is not fully "
                     "optimized yet. The speed can be slower than "
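The change adds "fp8" to the set of quantization methods exempt from the "not fully optimized" warning, so loading an FP8-quantized model no longer logs it. Below is a minimal, self-contained sketch of the check's behavior after this commit; the helper name, tuple name, and logger setup here are simplified assumptions for illustration, not the actual vllm/config.py code:

import logging

logging.basicConfig()
logger = logging.getLogger("vllm.config")

# Methods exempt from the warning; "fp8" is newly included by this commit.
OPTIMIZED_METHODS = ("fp8", "marlin", "gptq_marlin_24", "gptq_marlin")

def check_quantization(quantization: str) -> None:
    # Simplified stand-in for the check in ModelConfig._verify_quantization.
    if quantization not in OPTIMIZED_METHODS:
        logger.warning(
            "%s quantization is not fully optimized yet. The speed "
            "can be slower than non-quantized models.", quantization)

check_quantization("fp8")  # silent after this commit
check_quantization("awq")  # still logs the warning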
