[Kernel] Expand FP8 support to Ampere GPUs using FP8 Marlin #5975
Conversation
This is an awesome feature!
Overall LGTM. Thanks!
@mgoin awesome feature! I suppose the perf benchmark was run with CUDA graph enabled? Out of curiosity, did you run it without CUDA graph? As this kernel has been integrated in TGI as well, it appears that having CUDA graph enabled is rather critical to getting speedups in decoding (which I can't really explain to myself, but haven't profiled). In prefill, as CUDA graphs are never used for long enough sequence lengths, I do get a slight slowdown. I did not benchmark on vLLM, but I suppose the trend is similar. It probably depends on the GPU/TP config/model as well.
Glad you're enjoying it @fxmarty, thanks for sharing your analysis. My end-to-end benchmarks were all done with CUDA graphs enabled, as this is the default in vLLM. Note that a slight slowdown at prefill (M > 256) is expected; we trade this off for the improvements at decode. I'm curious, have you seen the same difference for Marlin int8 or int4? Aside from this, I think there could be additional tuning for A100 problem shapes.
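For anyone who wants to reproduce this comparison, a minimal sketch of toggling CUDA graphs in vLLM is below. It relies on `enforce_eager=True` disabling CUDA graph capture; the model name and prompt are placeholders, and this is not the benchmark script used for the numbers in this PR.

```python
import time
from vllm import LLM, SamplingParams

def run_decode_benchmark(enforce_eager: bool) -> float:
    # enforce_eager=True disables CUDA graph capture, so decode runs in eager mode.
    # Each call builds a fresh engine; run the two configs in separate processes
    # if GPU memory is tight.
    llm = LLM(
        model="meta-llama/Meta-Llama-3-8B-Instruct",  # placeholder model
        quantization="fp8",
        enforce_eager=enforce_eager,
    )
    params = SamplingParams(max_tokens=256, temperature=0.0)
    start = time.perf_counter()
    llm.generate(["Explain FP8 quantization in one paragraph."], params)
    return time.perf_counter() - start

print("with CUDA graphs:   ", run_decode_benchmark(enforce_eager=False))
print("without CUDA graphs:", run_decode_benchmark(enforce_eager=True))
```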
Hello, I noticed that you used the dequant_8bit function to dequantize FP8 data to FP16 data, but I'm not clear on the underlying principle. Could you please also provide the code for quantizing FP16 to FP8? Thanks.
Referenced code: vllm/csrc/quantization/fp8/fp8_marlin.cu, line 164 (commit 2b0879b)
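On the principle behind `dequant_8bit`: roughly speaking, the fused kernel converts packed FP8 weights to FP16 inside the GEMM using fast bit-level manipulation rather than a plain dtype cast, with the scales applied afterwards. For the reverse direction you asked about, here is a minimal per-tensor FP16-to-FP8 (E4M3) quantization sketch in plain PyTorch, plus the matching dequantization. This is a common recipe for illustration only, not the code used in vLLM's checkpoints or kernels.

```python
import torch

def quantize_fp16_to_fp8(w: torch.Tensor):
    """Per-tensor symmetric quantization of an FP16/BF16 weight to FP8 E4M3."""
    fp8_max = torch.finfo(torch.float8_e4m3fn).max  # 448 for E4M3
    scale = w.abs().max().clamp(min=1e-12) / fp8_max
    # Scale into the representable FP8 range, then cast (rounds to nearest).
    w_fp8 = (w / scale).clamp(-fp8_max, fp8_max).to(torch.float8_e4m3fn)
    return w_fp8, scale

def dequantize_fp8_to_fp16(w_fp8: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    """Reverse mapping: cast back to FP16 and re-apply the scale."""
    return w_fp8.to(torch.float16) * scale.to(torch.float16)

w = torch.randn(4096, 4096, dtype=torch.float16)
w_fp8, scale = quantize_fp16_to_fp8(w)
w_rec = dequantize_fp8_to_fp16(w_fp8, scale)
print("max abs error:", (w - w_rec).abs().max().item())
```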
This work expands FP8 support in vLLM from GPUs with native FP8 hardware (Hopper and Ada Lovelace) to GPUs without it (currently Ampere) by introducing FP8 Marlin, a Marlin-based GEMM kernel with fast FP8-to-BF16/FP16 dequantization fused into the matrix multiply.
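As a quick illustration of the user-facing side (the model name is a placeholder and exact flags may differ by vLLM version), loading a regular FP16 checkpoint with on-the-fly FP8 weight quantization on an Ampere GPU might look like this:

```python
from vllm import LLM, SamplingParams

# quantization="fp8" quantizes the FP16/BF16 weights to FP8 at load time;
# on Ampere GPUs the linear layers are routed through the FP8 Marlin kernel.
llm = LLM(
    model="meta-llama/Meta-Llama-3-8B-Instruct",  # placeholder model
    quantization="fp8",
)
outputs = llm.generate(
    ["What are the benefits of FP8 weight quantization?"],
    SamplingParams(max_tokens=128),
)
print(outputs[0].outputs[0].text)
```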
Key features:
- Enable with `quantization="fp8"` at runtime, or use pre-quantized FP8 checkpoints

Implementation details:
End-to-end performance and accuracy results:
Individual layer sweeps:
As shown in the graphs, FP8 Marlin can provide significant speedups with minimal accuracy impact. Performance gains are higher on GPUs with less memory bandwidth (A10, RTX 3090) and for larger models.
Notes:
Testing:
This enhancement enables more users to benefit from FP8 quantization without requiring native FP8 hardware support, improving vLLM's performance and efficiency across a broader range of setups!