
[Fix] Fix typo in resolve_hf_chat_template #18259


Merged
merged 1 commit into vllm-project:main on May 16, 2025

Conversation

fxmarty-amd
Contributor

This makes lm_eval --model vllm fail.

Signed-off-by: Felix Marty <felmarty@amd.com>

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only fastcheck CI runs, covering a small and essential subset of CI tests to quickly catch errors. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to be added to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@mergify mergify bot added the frontend label May 16, 2025
@fxmarty-amd
Contributor Author

fxmarty-amd commented May 16, 2025

#18259 (comment) also results in [rank0]: TypeError: resolve_hf_chat_template() missing 1 required keyword-only argument: 'model_config' in lm-eval-harness, due to the new keyword argument.

https://github.com/EleutherAI/lm-evaluation-harness/blob/0126f6d15e6d5f9b93f244b8ece5a44bdbac9c2c/lm_eval/models/vllm_causallms.py#L143
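
For illustration, a minimal sketch of the breakage. The signature below is a hypothetical stand-in for the real vllm.entrypoints.chat_utils.resolve_hf_chat_template; only the required keyword-only model_config parameter matches what #18098 introduced:

```python
# Hypothetical stand-in for the new signature from #18098; parameter names
# other than model_config are illustrative, not vLLM's actual ones.
def resolve_hf_chat_template(tokenizer, chat_template, tools, *, model_config):
    return chat_template

# lm-eval-harness still uses the old calling convention:
resolve_hf_chat_template(object(), chat_template=None, tools=None)
# -> TypeError: resolve_hf_chat_template() missing 1 required
#    keyword-only argument: 'model_config'
```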

@mgoin mgoin left a comment
Member

Thanks for identifying this; the bug was introduced in #18098

@mgoin mgoin added the ready ONLY add when PR is ready to merge/full CI is needed label May 16, 2025
@mgoin mgoin added this to the v0.9.0 milestone May 16, 2025
@mgoin mgoin added the bug Something isn't working label May 16, 2025
@DarkLight1337 DarkLight1337 enabled auto-merge (squash) May 16, 2025 12:52
@fxmarty-amd
Contributor Author

@mgoin Thanks! I'll try to submit a fix in lm-eval-harness for the missing model_config.

I also noticed that the sglang backend in lm-eval-harness does not use this resolve_hf_chat_template, so I'm not sure whether prompts end up the same when evaluating with vllm vs sglang.

Compare:
https://github.com/EleutherAI/lm-evaluation-harness/blob/0126f6d15e6d5f9b93f244b8ece5a44bdbac9c2c/lm_eval/models/vllm_causallms.py#L214
and
https://github.com/EleutherAI/lm-evaluation-harness/blob/0126f6d15e6d5f9b93f244b8ece5a44bdbac9c2c/lm_eval/models/sglang_causallms.py#L408-L413
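
For context, here is a minimal sketch of the tokenizer-only path the sglang backend relies on, using the real transformers apply_chat_template API (the model name is just an example):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
prompt = tok.apply_chat_template(
    [{"role": "user", "content": "Hello!"}],
    tokenize=False,
    add_generation_prompt=True,
)
print(prompt)
# A template that lives only in chat_template.json may not be picked up
# here, which is the gap resolve_hf_chat_template covers on the vllm side.
```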

@mgoin
Member

mgoin commented May 16, 2025

Yes @fxmarty-amd, this is intentional for the vllm backend.

@anmarques introduced resolve_hf_chat_template into the vllm backend to allow us to do text evals on models that keep the chat template in chat_template.json instead of tokenizer_config.json. This is basically copy-pasted from the way we handle it in vllm.
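
A rough sketch of that fallback behavior (assumptions mine; not vLLM's actual implementation):

```python
import json
from pathlib import Path

def resolve_chat_template_sketch(tokenizer, model_path: str):
    # Prefer the template loaded from tokenizer_config.json, which
    # transformers exposes as tokenizer.chat_template.
    template = getattr(tokenizer, "chat_template", None)
    if template:
        return template
    # Otherwise fall back to the standalone chat_template.json that
    # some models ship alongside their weights.
    path = Path(model_path) / "chat_template.json"
    if path.is_file():
        return json.loads(path.read_text())["chat_template"]
    return None
```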

@DarkLight1337 DarkLight1337 merged commit a5f8c11 into vllm-project:main May 16, 2025
74 checks passed
zzzyq pushed a commit to zzzyq/vllm that referenced this pull request May 24, 2025
Signed-off-by: Felix Marty <felmarty@amd.com>
Signed-off-by: Yuqi Zhang <yuqizhang@google.com>
minpeter pushed a commit to minpeter/vllm that referenced this pull request Jun 24, 2025
Signed-off-by: Felix Marty <felmarty@amd.com>
Signed-off-by: minpeter <kali2005611@gmail.com>
Labels
bug (Something isn't working) · frontend · ready (ONLY add when PR is ready to merge/full CI is needed)
3 participants