Your current environment
The output of `python collect_env.py`
🐛 Describe the bug
I'm using v0.7.3 with QwQ 32B.
It looks like vLLM applies `stop` sequences to both `reasoning_content` and `content`.
If a stop sequence matches inside `reasoning_content`, generation is truncated there and no `content` is returned at all.
This is unexpected: stop sequences should only be applied to `content`.
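A minimal sketch of the suspected behavior (this is illustrative pseudologic, not vLLM's actual code; the tag names and helper functions are assumptions based on QwQ-style `<think>...</think>` output):

```python
def apply_stop(text: str, stop: list[str]) -> str:
    # Suspected current behavior: truncate at the first stop match
    # anywhere in the generated text, including the reasoning span.
    for s in stop:
        idx = text.find(s)
        if idx != -1:
            text = text[:idx]
    return text

def split_reasoning(text: str) -> tuple[str, str]:
    """Split generated text into (reasoning_content, content),
    assuming QwQ-style <think>...</think> tags (hypothetical parser)."""
    start, end = "<think>", "</think>"
    if not text.startswith(start):
        return "", text
    # If </think> never appears (e.g. because a stop sequence cut the
    # output short), everything is treated as reasoning and content is empty.
    reasoning, _, rest = text[len(start):].partition(end)
    return reasoning, rest

generated = "<think>Step 1.\nObservation: done</think>The answer is 42."
truncated = apply_stop(generated, stop=["Observation:"])
reasoning, content = split_reasoning(truncated)
# The stop string matched inside the reasoning span, so </think> is cut
# off and content ends up empty.
print(repr(reasoning), repr(content))
```

Expected behavior would be to run the stop-sequence check only on the text after `</think>`, so reasoning can mention the stop string without suppressing the final answer.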
Before submitting a new issue...