
Fix noisy warning for uncalibrated q_scale/p_scale #17414


Merged

merged 1 commit into vllm-project:main on May 8, 2025

Conversation

@mgoin mgoin (Member) commented Apr 29, 2025

When loading any quantized model that supports a quantized kv cache, you see this warning even if the quantized kv cache isn't enabled (introduced by #15734):

WARNING 04-29 20:32:22 [kv_cache.py:128] Using Q scale 1.0 and prob scale 1.0 with fp8 attention. This may cause accuracy issues. Please make sure Q/prob scaling factors are available in the fp8 checkpoint.

This change moves the warning so it is only emitted when the kv cache is quantized. We could further restrict it to ROCm platforms, since that is the only consumer of these scales at the moment.
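
To illustrate the intended behavior, here is a minimal sketch of that gating. The attribute names (kv_cache_dtype, q_scale, prob_scale) follow vLLM conventions but are illustrative, not the exact patch:

```python
import logging

logger = logging.getLogger(__name__)


def maybe_warn_uncalibrated_scales(layer) -> None:
    """Hedged sketch: warn about default scales only when the KV cache
    is actually quantized, so unrelated quantized models stay quiet."""
    # "auto" means the KV cache is not quantized, so the q/prob scales
    # are never consumed and the warning would only be noise.
    if getattr(layer, "kv_cache_dtype", "auto") == "auto":
        return
    q_scale = float(layer.q_scale)
    prob_scale = float(layer.prob_scale)
    # A scale of 1.0 is the uninitialized default, i.e. no calibrated
    # scaling factors were found in the checkpoint.
    if q_scale == 1.0 or prob_scale == 1.0:
        # vLLM uses a warn-once logger; a plain warning() keeps this runnable.
        logger.warning(
            "Using uncalibrated q_scale %s and/or prob_scale %s with fp8 "
            "attention. This may cause accuracy issues. Make sure q/prob "
            "scaling factors are available in the fp8 checkpoint.",
            q_scale, prob_scale)
```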

Signed-off-by: mgoin <mgoin64@gmail.com>

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, they only run the fastcheck CI, which runs a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to be added to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

f"Using Q scale {q_scale} and prob scale {prob_scale} "
"with fp8 attention. This may cause accuracy issues. "
"Please make sure Q/prob scaling factors are "
f"Using uncalibrated q_scale {q_scale} and/or prob_scale "
Contributor

Why "uncalibrated"? I only see the warning once when I run it, doesn't seem to be very noisy. Depending on which lands first (#17331) can check the VLLM_ROCM_USE_FP8_SCALES flag too, since the warning won't be necessary if VLLM_ROCM_USE_FP8_SCALES=0.

Member Author

I even see this when running INT4 models; it is triggered for most quantization methods:

INFO 05-07 03:01:24 [gpu_model_runner.py:1360] Starting to load model RedHatAI/Qwen3-30B-A3B-quantized.w4a16...

INFO 05-07 03:01:34 [loader.py:459] Loading weights took 9.47 seconds
WARNING 05-07 03:01:34 [kv_cache.py:128] Using Q scale 1.0 and prob scale 1.0 with fp8 attention. This may cause accuracy issues. Please make sure Q/prob scaling factors are available in the fp8 checkpoint.

Member Author

The scales are uncalibrated because they are still 1.0, the default value.

@mgoin mgoin added the ready label (ONLY add when PR is ready to merge/full CI is needed) May 7, 2025
@tlrmchlsmth tlrmchlsmth (Collaborator) left a comment

Thanks, this has been annoying me too

@tlrmchlsmth tlrmchlsmth merged commit 4f605a6 into vllm-project:main May 8, 2025
69 checks passed
princepride pushed a commit to princepride/vllm that referenced this pull request May 10, 2025
Signed-off-by: mgoin <mgoin64@gmail.com>
Signed-off-by: 汪志鹏 <wangzhipeng628@gmail.com>
RichardoMrMu pushed a commit to RichardoMrMu/vllm that referenced this pull request May 12, 2025
Signed-off-by: mgoin <mgoin64@gmail.com>
Signed-off-by: Mu Huai <tianbowen.tbw@antgroup.com>
mawong-amd pushed a commit to ROCm/vllm that referenced this pull request May 14, 2025
zzzyq pushed a commit to zzzyq/vllm that referenced this pull request May 24, 2025
Signed-off-by: mgoin <mgoin64@gmail.com>
Signed-off-by: Yuqi Zhang <yuqizhang@google.com>
minpeter pushed a commit to minpeter/vllm that referenced this pull request Jun 24, 2025
Signed-off-by: mgoin <mgoin64@gmail.com>
Signed-off-by: minpeter <kali2005611@gmail.com>
Labels
ready ONLY add when PR is ready to merge/full CI is needed

3 participants