
[Bugfix][Harmless] Fix hardcoded float16 dtype for model_is_embedding #7566

Merged: 1 commit from fix-model-is-embedding-dtype into main on Aug 16, 2024

Conversation

@mgoin (Member) commented on Aug 15, 2024

FIX #7561

On main, you would see a concerning but harmless warning about casting to float16 for models whose native dtype is bfloat16:

vllm serve meta-llama/Meta-Llama-3.1-8B-Instruct
...
WARNING 08-15 21:06:16 config.py:1514] Casting torch.bfloat16 to torch.float16.
INFO 08-15 21:06:16 api_server.py:111] Multiprocessing frontend to use ipc:///tmp/5c13e2e7-a88a-4ffa-bb14-492eaa9851f8 for RPC Path.
INFO 08-15 21:06:16 api_server.py:122] Started engine process with PID 265910

This PR removes the warning by making the ModelConfig constructed in model_is_embedding use dtype="auto":

vllm serve meta-llama/Meta-Llama-3.1-8B-Instruct
...
INFO 08-15 21:05:58 api_server.py:111] Multiprocessing frontend to use ipc:///tmp/79226d04-56b8-4839-a94e-8da715e35d1c for RPC Path.
INFO 08-15 21:05:58 api_server.py:122] Started engine process with PID 265174

This is purely cosmetic, since this config isn't actually used for loading the model; it only checks whether the model is an embedding model.
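
For context, a minimal sketch of what the change amounts to (the file location, exact argument list, and the embedding_mode attribute are assumed from the description, not copied from the diff): the helper only builds a throwaway ModelConfig in order to check whether the target model is an embedding model, so passing dtype="auto" avoids the hardcoded float16 cast without changing how the model is eventually loaded.

```python
# Hedged sketch, not the verbatim diff. The helper builds a throwaway
# ModelConfig just to read its embedding flag; using dtype="auto" keeps the
# model's native dtype (e.g. bfloat16) and silences the spurious cast warning.
from vllm.config import ModelConfig


def model_is_embedding(model_name: str, trust_remote_code: bool) -> bool:
    return ModelConfig(
        model=model_name,
        tokenizer=model_name,
        tokenizer_mode="auto",
        trust_remote_code=trust_remote_code,
        seed=0,
        dtype="auto",  # was hardcoded to "float16" before this PR
    ).embedding_mode
```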

👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only fastcheck CI runs, which consists of a small, essential subset of CI tests to quickly catch errors. You can run additional CI tests on top of the default ones by unblocking the steps in your fastcheck build in the Buildkite UI.

Once the PR is approved and ready to go, please make sure to run full CI as it is required to merge (or just use auto-merge).

To run full CI, you can do one of these:

  • Comment /ready on the PR
  • Add ready label to the PR
  • Enable auto-merge.

🚀

@mgoin added the "ready" label (ONLY add when PR is ready to merge/full CI is needed) on Aug 15, 2024
@DarkLight1337 (Member) left a comment

Thanks for the QoL improvement!

@youkaichao youkaichao merged commit 9c8e2d1 into main Aug 16, 2024
61 of 63 checks passed
@youkaichao youkaichao deleted the fix-model-is-embedding-dtype branch August 16, 2024 01:26
fialhocoelho pushed a commit to opendatahub-io/vllm that referenced this pull request Aug 22, 2024
Alvant pushed a commit to compressa-ai/vllm that referenced this pull request Oct 26, 2024
Labels: ready (ONLY add when PR is ready to merge/full CI is needed)
Development

Successfully merging this pull request may close these issues:

[Bug]: Llama3.1 casting torch.bfloat16 to torch.float16
3 participants