[Frontend][Bug Fix] Update llama4 pythonic jinja template and llama4_pythonic parser #17917


Merged: 11 commits merged into vllm-project:main on May 22, 2025

Conversation

@wukaixingxp (Contributor) commented May 9, 2025

Changes the llama4 pythonic template and adds a small fix for the edge case where the llama4 model may output <|python_start|> unexpectedly.
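For illustration, here is a minimal, self-contained sketch of the parser-side edge case (a sketch only, not vLLM's actual llama4_pythonic implementation): stripping an unexpected `<|python_start|>`/`<|python_end|>` wrapper before the pythonic tool-call list is parsed.

```python
import ast

# Minimal sketch only; not vLLM's actual llama4_pythonic parser.
PYTHON_START = "<|python_start|>"
PYTHON_END = "<|python_end|>"

def extract_tool_calls(model_output: str):
    """Strip an unexpected <|python_start|>/<|python_end|> wrapper, then parse
    pythonic tool calls of the form [func(a=1, b="x"), ...]."""
    text = model_output.strip()
    if text.startswith(PYTHON_START):
        text = text[len(PYTHON_START):]
    if text.endswith(PYTHON_END):
        text = text[:-len(PYTHON_END)]
    try:
        tree = ast.parse(text.strip(), mode="eval")
    except SyntaxError:
        return None  # not a pythonic tool-call list (e.g. a plain text reply)
    if not isinstance(tree.body, ast.List):
        return None
    calls = []
    for node in tree.body.elts:
        if not (isinstance(node, ast.Call) and isinstance(node.func, ast.Name)):
            return None
        args = {kw.arg: ast.literal_eval(kw.value) for kw in node.keywords}
        calls.append({"name": node.func.id, "arguments": args})
    return calls

print(extract_tool_calls('<|python_start|>[get_weather(city="SF", metric="fahrenheit")]<|python_end|>'))
# [{'name': 'get_weather', 'arguments': {'city': 'SF', 'metric': 'fahrenheit'}}]
```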
BFCL test results:

| Name | Reported | Base_vllm | Pythonic_vllm |
|---|---|---|---|
| Model | Llama-4-Scout-17B-16E-Instruct (FC) | Llama-4-Scout-17B-16E-Instruct (FC) | Llama-4-Scout-17B-16E-Instruct (FC) |
| Overall Acc | 45.41% | 47.78% | 55.97% |
| Non-Live AST Acc | 83.48% | 83.67% | 80.04% |
| Non-Live Simple AST | 79.42% | 79.17% | 78.67% |
| Non-Live Multiple AST | 95% | 94.00% | 92.00% |
| Non-Live Parallel AST | 81.5% | 80.50% | 77.00% |
| Non-Live Parallel Multiple AST | 78% | 81.00% | 72.50% |
| Live Acc | 57.97% | 58.69% | 74.06% |
| Live Simple AST | 77.91% | 82.17% | 80.62% |
| Live Multiple AST | 74.36% | 74.07% | 73.22% |
| Live Parallel AST | 68.75% | 75.00% | 75.00% |
| Live Parallel Multiple AST | 62.5% | 70.83% | 66.67% |
| Multi Turn Acc | 1.88% | 6.38% | 13.00% |
| Multi Turn Base | 2% | 8.50% | 15.50% |
| Multi Turn Miss Func | 2.5% | 5.00% | 14.00% |
| Multi Turn Miss Param | 1% | 4.00% | 11.00% |
| Multi Turn Long Context | 2% | 8.00% | 11.50% |
| Relevance Detection | 100% | 94.44% | 77.78% |
| Irrelevance Detection | 39.66% | 44.38% | 78.70% |

NOTE: Since BFCL has a default tool-call system prompt, we need to manually modify the pythonic and JSON system prompts from here.
For the jinja template, please see this example. Given this test data:

{
    "bos_token": "<|begin_of_text|>",
    "add_generation_prompt": true,
    "custom_tools": [
        {
            "name": "get_weather",
            "description": "Get weather info for places",
            "parameters": {
                "type": "dict",
                "required": ["city"],
                "properties": {
                    "city": {
                        "type": "string",
                        "description": "The name of the city to get the weather for"
                    },
                    "metric": {
                        "type": "string",
                        "description": "The metric for weather. Options are: celsius, fahrenheit",
                        "default": "celsius"
                    }
                }
            }
        }
    ],
    "messages": [
        # {"role": "system", "content": "you are helpful assistant"},
        {"role": "user", "content": "Who are you?"},
        {"role": "assistant", "content": "I am a Llama 4 model", "tool_calls": []},
        {"role": "user", "content": "What is the weather in SF and Seattle?"},
        {"role": "assistant", "content": "[get_weather(city=\"San Francisco\"), get_weather(city=\"Seattle\")]", "tool_calls": []},
        {
            "role": "ipython",
            "content": "[        {            \"response\": \"Sunny 75\"        },        {            \"response\": \"Rainy 65\"        },    ]"
        },
        {"role": "assistant", "content": "SF is Sunny and Seattle is Rainy"},
        {"role": "user", "content": "What is the weather in NYC and SF"},
        {
            "role": "assistant",
            "content": "",
            "tool_calls": [
                {
                    "name": "get_weather",
                    "arguments": {"city": "NYC", "metric": "fahrenheit"}
                },
                {
                    "name": "get_weather",
                    "arguments": {"city": "SF", "metric": "fahrenheit"}
                }
            ]
        }
    ]
}

The jinja template will render the output like this:

<|begin_of_text|><|header_start|>system<|header_end|>

You are a helpful assistant and an expert in function composition. You can answer general questions using your internal knowledge OR invoke functions when necessary. Follow these strict guidelines:

1. FUNCTION CALLS:
- ONLY use functions that are EXPLICITLY listed in the function list below
- If NO functions are listed (empty function list []), respond ONLY with internal knowledge or "I don't have access to [Unavailable service] information"
- If a function is not in the list, respond ONLY with internal knowledge or "I don't have access to [Unavailable service] information"
- If ALL required parameters are present AND the query EXACTLY matches a listed function's purpose: output ONLY the function call(s)
- Use exact format: [func_name1(param1=value1, param2=value2), func_name2(...)]
Examples:
CORRECT: [get_weather(location="Vancouver"), calculate_route(start="Boston", end="New York")] <- Only if get_weather and calculate_route are in function list
INCORRECT: get_weather(location="New York")
INCORRECT: Let me check the weather: [get_weather(location="New York")]
INCORRECT: [get_events(location="Singapore")] <- If function not in list

2. RESPONSE RULES:
- For pure function requests matching a listed function: ONLY output the function call(s)
- For knowledge questions: ONLY output text
- For missing parameters: ONLY request the specific missing parameters
- For unavailable services (not in function list): output ONLY with internal knowledge or "I don't have access to [Unavailable service] information". Do NOT execute a function call.
- If the query asks for information beyond what a listed function provides: output ONLY with internal knowledge about your limitations
- NEVER combine text and function calls in the same response
- NEVER suggest alternative functions when the requested service is unavailable
- NEVER create or invent new functions not listed below

3. STRICT BOUNDARIES:
- ONLY use functions from the list below - no exceptions
- NEVER use a function as an alternative to unavailable information
- NEVER call functions not present in the function list
- NEVER add explanatory text to function calls
- NEVER respond with empty brackets
- Use proper Python/JSON syntax for function calls
- Check the function list carefully before responding

4. TOOL RESPONSE HANDLING:
- When receiving tool responses: provide concise, natural language responses
- Don't repeat tool response verbatim
- Don't add supplementary information

Here is a list of functions in JSON format that you can invoke:
[
    {
        "description": "Get weather info for places",
        "name": "get_weather",
        "parameters": {
            "properties": {
                "city": {
                    "description": "The name of the city to get the weather for",
                    "type": "string"
                },
                "metric": {
                    "default": "celsius",
                    "description": "The metric for weather. Options are: celsius, fahrenheit",
                    "type": "string"
                }
            },
            "required": [
                "city"
            ],
            "type": "dict"
        }
    }
]
<|eot|><|header_start|>user<|header_end|>

Who are you?<|eot|><|header_start|>assistant<|header_end|>

I am a Llama 4 model<|eot|><|header_start|>user<|header_end|>

What is the weather in SF and Seattle?<|eot|><|header_start|>assistant<|header_end|>

[get_weather(city="San Francisco"), get_weather(city="Seattle")]<|eot|><|header_start|>ipython<|header_end|>

"[        {            \"response\": \"Sunny 75\"        },        {            \"response\": \"Rainy 65\"        },    ]"<|eot|><|header_start|>assistant<|header_end|>

SF is Sunny and Seattle is Rainy<|eot|><|header_start|>user<|header_end|>

What is the weather in NYC and SF<|eot|><|header_start|>assistant<|header_end|>

[get_weather(city="NYC", metric="fahrenheit"), get_weather(city="SF", metric="fahrenheit")]<|eot|><|header_start|>assistant<|header_end|>

github-actions bot commented May 9, 2025

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, they only run fastcheck CI, which runs a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@mergify mergify bot added documentation Improvements or additions to documentation frontend tool-calling labels May 9, 2025
@houseroad houseroad requested a review from yeqcharlotte May 9, 2025 18:55
@yeqcharlotte (Collaborator) left a comment

Great, thank you! Please fix the long lines.

Also, could you get the results from llama-stack eval? Please also double-check the unit tests on the vLLM side: `pytest -s -vv tests/tool_use --models llama4 --extended`

bbrowning added a commit to bbrowning/llama-stack that referenced this pull request May 14, 2025
This fixes an issue in how we used the tool_call_buf from streaming
tool calls in the remote-vllm provider where it would end up
concatenating parameters from multiple different tool call results
instead of aggregating the results from each tool call separately.

It also fixes an issue found while digging into that where we were
accidentally mixing the json string form of tool call parameters with
the string representation of the python form, which meant we'd end up
with single quotes in what should be double-quoted json strings.

The following tests are now passing 100% for the remote-vllm provider,
where some of the test_text_inference were failing before this change:

```
VLLM_URL="http://localhost:8000/v1" INFERENCE_MODEL="RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic" LLAMA_STACK_CONFIG=remote-vllm python -m pytest -v tests/integration/inference/test_text_inference.py --text-model "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"

VLLM_URL="http://localhost:8000/v1" INFERENCE_MODEL="RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic" LLAMA_STACK_CONFIG=remote-vllm python -m pytest -v tests/integration/inference/test_vision_inference.py --vision-model "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"

```

Many of the agent tests are passing, although some are failing due to
bugs in vLLM's pythonic tool parser for Llama models. See the PR at
vllm-project/vllm#17917 and a gist at
https://gist.github.com/bbrowning/b5007709015cb2aabd85e0bd08e6d60f for
changes needed there, which will have to get made upstream in vLLM.

Agent tests:

```
VLLM_URL="http://localhost:8000/v1" INFERENCE_MODEL="RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic" LLAMA_STACK_CONFIG=remote-vllm python -m pytest -v tests/integration/agents/test_agents.py --text-model "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
```

Signed-off-by: Ben Browning <bbrownin@redhat.com>
bbrowning added a commit to bbrowning/llama-stack that referenced this pull request May 14, 2025
@yeqcharlotte (Collaborator)

Thanks for sharing the new template that reaches BFCL parity! Mind also updating the test summary to make it readable?

bbrowning added a commit to bbrowning/llama-stack that referenced this pull request May 15, 2025
@FWao commented May 15, 2025

The new Jinja template behaves differently from the old one when a user provides a system prompt.

Problem:
In the new template, if the user sets a system message, the list of tools is added—but the instructions for using Python-style tool calls are missing. Because of this, the model makes JSON-style tool calls instead.

Expected (Old Template):
In the old template, tool instructions were always included, even if the user provided a custom system prompt. The model worked as expected.

Current (New Template):
Now, the user has to manually add tool call instructions to the system prompt. This was not needed before.

@wukaixingxp (Contributor Author)

The new Jinja template behaves differently from the old one when a user provides a system prompt.

Problem: In the new template, if the user sets a system message, the list of tools is added—but the instructions for using Python-style tool calls are missing. Because of this, the model makes JSON-style tool calls instead.

Expected (Old Template): In the old template, tool instructions were always included, even if the user provided a custom system prompt. The model worked as expected.

Current (New Template): Now, the user has to manually add tool call instructions to the system prompt. This was not needed before.

Thank you so much for this feedback. I will write more tests to make sure it works for all other cases.

raghotham pushed a commit to meta-llama/llama-stack that referenced this pull request May 15, 2025
# What does this PR do?

This fixes an issue in how we used the tool_call_buf from streaming tool
calls in the remote-vllm provider where it would end up concatenating
parameters from multiple different tool call results instead of
aggregating the results from each tool call separately.

It also fixes an issue found while digging into that where we were
accidentally mixing the json string form of tool call parameters with
the string representation of the python form, which meant we'd end up
with single quotes in what should be double-quoted json strings.

Closes #1120

## Test Plan

The following tests are now passing 100% for the remote-vllm provider,
where some of the test_text_inference were failing before this change:

```
VLLM_URL="http://localhost:8000/v1" INFERENCE_MODEL="RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic" LLAMA_STACK_CONFIG=remote-vllm python -m pytest -v tests/integration/inference/test_text_inference.py --text-model "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"

VLLM_URL="http://localhost:8000/v1" INFERENCE_MODEL="RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic" LLAMA_STACK_CONFIG=remote-vllm python -m pytest -v tests/integration/inference/test_vision_inference.py --vision-model "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"

```

All but one of the agent tests are passing (including the multi-tool
one). See the PR at vllm-project/vllm#17917 and
a gist at
https://gist.github.com/bbrowning/4734240ce96b4264340caa9584e47c9e for
changes needed there, which will have to get made upstream in vLLM.

Agent tests:

```
VLLM_URL="http://localhost:8000/v1" INFERENCE_MODEL="RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic" LLAMA_STACK_CONFIG=remote-vllm python -m pytest -v tests/integration/agents/test_agents.py --text-model "RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic"
```

---------

Signed-off-by: Ben Browning <bbrownin@redhat.com>
@wukaixingxp (Contributor Author) commented May 15, 2025

Hi! Thanks for your feedback. The minimal tool-definition string and tool-output expectation are now appended to the user-provided system prompt. For tool calling, the idea is: basic users are recommended not to set any system prompt, so that our default comprehensive system prompt is used, whereas advanced users who want their own customized system prompt get the minimal tool-definition string and tool-output expectation appended at the end (see the sketch after this list). Please check the following test examples:
- No user-provided system prompt, use default
- User-provided system prompt, append tool definition at the end
- User-provided system prompt, but no tools provided
- No system prompt and no tools
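Roughly, the intended branching looks like the plain-Python sketch below (illustrative only, not the jinja itself; the names are made up):

```python
import json

# Illustrative sketch of the branching described above, not the actual template.
def build_system_prompt(user_system_prompt, tools, default_tool_prompt):
    if tools:
        tool_block = ("\n\nHere is a list of functions in JSON format "
                      "that you can invoke:\n" + json.dumps(tools, indent=4))
        if user_system_prompt:
            # Advanced user: keep their prompt, append only the minimal tool
            # definition and tool-output expectation.
            return user_system_prompt + tool_block
        # Basic user: fall back to the comprehensive default tool-calling prompt.
        return default_tool_prompt + tool_block
    # No tools: use whatever system prompt (possibly none) the user provided.
    return user_system_prompt or ""
```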

The new Jinja template behaves differently from the old one when a user provides a system prompt.

Problem: In the new template, if the user sets a system message, the list of tools is added—but the instructions for using Python-style tool calls are missing. Because of this, the model makes JSON-style tool calls instead.

Expected (Old Template): In the old template, tool instructions were always included, even if the user provided a custom system prompt. The model worked as expected.

Current (New Template): Now, the user has to manually add tool call instructions to the system prompt. This was not needed before.

@bbrowning (Contributor)

Attempting to run these changes locally with the Berkeley function calling leaderboard and my own vLLM, it appears to only be using the completions endpoint (instead of chat completions) when testing the Llama 4 Scout model. For completeness, here's how I'm running bfcl against my local vLLM serving Llama 4 Scout:

bfcl generate --model meta-llama/Llama-4-Scout-17B-16E-Instruct-FC --skip-server-setup

The configuration for meta-llama/Llama-4-Scout-17B-16E-Instruct-FC in bfcl sends all requests to the completions endpoint (instead of chat/completions), so the jinja template and tool call parser are not used at all in that flow. Were there changes made to update bfcl to use chat/completions when testing Llama 4 Scout?

@wukaixingxp (Contributor Author) commented May 16, 2025

Attempting to run these changes locally with the Berkeley function calling leaderboard and my own vLLM, it appears to only be using the completions endpoint (instead of chat completions) when testing the Llama 4 Scout model. For completeness, here's how I'm running bfcl against my local vLLM serving Llama 4 Scout:

bfcl generate --model meta-llama/Llama-4-Scout-17B-16E-Instruct-FC --skip-server-setup

The configuration for meta-llama/Llama-4-Scout-17B-16E-Instruct-FC in bfcl sends all requests to the completions endpoint (instead of chat/completions), so the jinja template and tool call parser are not used at all in that flow. Were there changes made to update bfcl to use chat/completions when testing Llama 4 Scout?

Change this default system prompt to

You are a helpful assistant and an expert in function composition. You can answer general questions using your internal knowledge OR invoke functions when necessary. Follow these strict guidelines:

1. FUNCTION CALLS:
- ONLY use functions that are EXPLICITLY listed in the function list below
- If NO functions are listed (empty function list []), respond ONLY with internal knowledge or "I don't have access to [Unavailable service] information"
- If a function is not in the list, respond ONLY with internal knowledge or "I don't have access to [Unavailable service] information"
- If ALL required parameters are present AND the query EXACTLY matches a listed function's purpose: output ONLY the function call(s)
- Use exact format: [func_name1(param1=value1, param2=value2), func_name2(...)]
Examples:
CORRECT: [get_weather(location="Vancouver"), calculate_route(start="Boston", end="New York")] <- Only if get_weather and calculate_route are in function list
INCORRECT: get_weather(location="New York")
INCORRECT: Let me check the weather: [get_weather(location="New York")]
INCORRECT: [get_events(location="Singapore")] <- If function not in list

2. RESPONSE RULES:
- For pure function requests matching a listed function: ONLY output the function call(s)
- For knowledge questions: ONLY output text
- For missing parameters: ONLY request the specific missing parameters
- For unavailable services (not in function list): output ONLY with internal knowledge or "I don't have access to [Unavailable service] information". Do NOT execute a function call.
- If the query asks for information beyond what a listed function provides: output ONLY with internal knowledge about your limitations
- NEVER combine text and function calls in the same response
- NEVER suggest alternative functions when the requested service is unavailable
- NEVER create or invent new functions not listed below

3. STRICT BOUNDARIES:
- ONLY use functions from the list below - no exceptions
- NEVER use a function as an alternative to unavailable information
- NEVER call functions not present in the function list
- NEVER add explanatory text to function calls
- NEVER respond with empty brackets
- Use proper Python/JSON syntax for function calls
- Check the function list carefully before responding

4. TOOL RESPONSE HANDLING:
- When receiving tool responses: provide concise, natural language responses
- Don't repeat tool response verbatim
- Don't add supplementary information

@bbrowning (Contributor)

Ahh, I see. I was trying to run the function calling leaderboard via vLLM in a way that exercised the actual vLLM jinja template and tool parser. But I see that what you're doing is not that: you're copying the same prompt into the bfcl code while still using the completions endpoint for testing. That is a reasonable way to test the prompt, although I don't think it ends up actually exercising this tool call parser change at all?
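For contrast, a request along these lines goes through /v1/chat/completions and therefore does exercise the chat template and the llama4_pythonic parser. This is a sketch with an illustrative tool schema and port; the server is assumed to be started with --enable-auto-tool-choice and --tool-call-parser llama4_pythonic as shown later in this thread.

```python
# Sketch of a chat-completions request that exercises the template + tool parser.
# Tool schema and port are illustrative.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get weather info for places",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="meta-llama/Llama-4-Scout-17B-16E-Instruct",
    messages=[{"role": "user", "content": "What is the weather in SF?"}],
    tools=tools,
    tool_choice="auto",
)
print(resp.choices[0].message.tool_calls)
```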

@wukaixingxp (Contributor Author)

Yeah, I think BFCL puts the function definitions into the system prompt itself, bypassing our jinja template/parser. I mentioned BFCL just to show that this jinja template can give better results. I will run `pytest -s -vv tests/tool_use --models llama4 --extended` and llama-stack-evals to test the jinja template + parser in my PRs.

@wukaixingxp (Contributor Author) commented May 16, 2025

Ahh, I see. I was trying to run the function calling leaderboard via vLLM in a way that exercised the actual vLLM jinja template and tool parser. But I see that what you're doing is not that: you're copying the same prompt into the bfcl code while still using the completions endpoint for testing. That is a reasonable way to test the prompt, although I don't think it ends up actually exercising this tool call parser change at all?

BTW, llama-stack-evals also supports BFCL now; I think it just takes an OpenAI-compatible server and relies on the jinja template/parser from vLLM.

@wukaixingxp (Contributor Author) commented May 16, 2025

Test with llama-stack-evals:

1. Create ./llama4_pythonic.jinja based on the jinja template.
2. Install the current vLLM commit with `uv pip install -U vllm --extra-index-url https://wheels.vllm.ai/nightly` and modify pythonic_tool_parser.py in site-packages: /home/.conda/envs/vllm/lib/python3.10/site-packages/vllm/entrypoints/openai/tool_parsers/pythonic_tool_parser.py
3. Start the vLLM server: `VLLM_DISABLE_COMPILE_CACHE=1 vllm serve meta-llama/Llama-4-Scout-17B-16E-Instruct -tp 4 --seed 0 --max-model-len=100000 --host 0.0.0.0 --port 8001 --enable-auto-tool-choice --tool-call-parser llama4_pythonic --chat-template ./llama4_pythonic.jinja --limit-mm-per-prompt image=5 --max-num-seqs 20`
4. Install llama-stack-evals: `git clone https://github.com/fairinternal/llama-stack-evals.git; cd llama-stack-evals; pip install -e .`
5. Run llama-stack-evals: `llama-stack-evals run-tests --model meta-llama/Llama-4-Scout-17B-16E-Instruct --provider vllm`
6. Results: only the two error-code tests failed; all other tests passed:
llama-stack-evals run-tests --model meta-llama/Llama-4-Scout-17B-16E-Instruct --provider vllm

=== Running tests for provider: vllm, model: meta-llama/Llama-4-Scout-17B-16E-Instruct ===
Running command: /home/kaiwu/.conda/envs/evals/bin/python3.12 -m pytest llama_stack_evals/functional_tests/openai_api/test_chat_completion.py --model=meta-llama/Llama-4-Scout-17B-16E-Instruct -v --provider=vllm
====================================== test session starts ======================================
platform linux -- Python 3.12.9, pytest-8.3.5, pluggy-1.6.0 -- /home/kaiwu/.conda/envs/evals/bin/python3.12
cachedir: .pytest_cache
metadata: {'Python': '3.12.9', 'Platform': 'Linux-6.4.3-0_fbk20_zion_2830_g3e5ab162667d-x86_64-with-glibc2.34', 'Packages': {'pytest': '8.3.5', 'pluggy': '1.6.0'}, 'Plugins': {'anyio': '4.9.0', 'metadata': '3.1.1', 'json-report': '1.5.0'}}
rootdir: /home/kaiwu/work/llama-stack-evals
configfile: pyproject.toml
plugins: anyio-4.9.0, metadata-3.1.1, json-report-1.5.0
collected 34 items                                                                              

llama_stack_evals/functional_tests/openai_api/test_chat_completion.py::test_chat_non_streaming_basic[meta-llama/Llama-4-Scout-17B-16E-Instruct-earth] PASSED [  2%]
llama_stack_evals/functional_tests/openai_api/test_chat_completion.py::test_chat_non_streaming_basic[meta-llama/Llama-4-Scout-17B-16E-Instruct-saturn] PASSED [  5%]
llama_stack_evals/functional_tests/openai_api/test_chat_completion.py::test_chat_streaming_basic[meta-llama/Llama-4-Scout-17B-16E-Instruct-earth] PASSED [  8%]
llama_stack_evals/functional_tests/openai_api/test_chat_completion.py::test_chat_streaming_basic[meta-llama/Llama-4-Scout-17B-16E-Instruct-saturn] PASSED [ 11%]
llama_stack_evals/functional_tests/openai_api/test_chat_completion.py::test_chat_non_streaming_image[meta-llama/Llama-4-Scout-17B-16E-Instruct-case0] PASSED [ 14%]
llama_stack_evals/functional_tests/openai_api/test_chat_completion.py::test_chat_streaming_image[meta-llama/Llama-4-Scout-17B-16E-Instruct-case0] PASSED [ 17%]
llama_stack_evals/functional_tests/openai_api/test_chat_completion.py::test_chat_non_streaming_structured_output[meta-llama/Llama-4-Scout-17B-16E-Instruct-extract] PASSED [ 20%]
llama_stack_evals/functional_tests/openai_api/test_chat_completion.py::test_chat_streaming_structured_output[meta-llama/Llama-4-Scout-17B-16E-Instruct-extract] PASSED [ 23%]
llama_stack_evals/functional_tests/openai_api/test_chat_completion.py::test_chat_non_streaming_tool_calling[meta-llama/Llama-4-Scout-17B-16E-Instruct-basic] PASSED [ 26%]
llama_stack_evals/functional_tests/openai_api/test_chat_completion.py::test_chat_non_streaming_tool_calling[meta-llama/Llama-4-Scout-17B-16E-Instruct-user_provided_system_prompt] XFAIL [ 29%]
llama_stack_evals/functional_tests/openai_api/test_chat_completion.py::test_chat_non_streaming_tool_calling[meta-llama/Llama-4-Scout-17B-16E-Instruct-array_param] XFAIL [ 32%]
llama_stack_evals/functional_tests/openai_api/test_chat_completion.py::test_chat_streaming_tool_calling[meta-llama/Llama-4-Scout-17B-16E-Instruct-basic] PASSED [ 35%]
llama_stack_evals/functional_tests/openai_api/test_chat_completion.py::test_chat_streaming_tool_calling[meta-llama/Llama-4-Scout-17B-16E-Instruct-user_provided_system_prompt] XFAIL [ 38%]
llama_stack_evals/functional_tests/openai_api/test_chat_completion.py::test_chat_streaming_tool_calling[meta-llama/Llama-4-Scout-17B-16E-Instruct-array_param] XFAIL [ 41%]
llama_stack_evals/functional_tests/openai_api/test_chat_completion.py::test_chat_non_streaming_tool_choice_required[meta-llama/Llama-4-Scout-17B-16E-Instruct-basic] XPASS [ 44%]
llama_stack_evals/functional_tests/openai_api/test_chat_completion.py::test_chat_streaming_tool_choice_required[meta-llama/Llama-4-Scout-17B-16E-Instruct-basic] XPASS [ 47%]
llama_stack_evals/functional_tests/openai_api/test_chat_completion.py::test_chat_non_streaming_tool_choice_none[meta-llama/Llama-4-Scout-17B-16E-Instruct-basic] XFAIL [ 50%]
llama_stack_evals/functional_tests/openai_api/test_chat_completion.py::test_chat_streaming_tool_choice_none[meta-llama/Llama-4-Scout-17B-16E-Instruct-basic] XPASS [ 52%]
llama_stack_evals/functional_tests/openai_api/test_chat_completion.py::test_chat_non_streaming_multi_turn_tool_calling[meta-llama/Llama-4-Scout-17B-16E-Instruct-text_then_tool] PASSED [ 55%]
llama_stack_evals/functional_tests/openai_api/test_chat_completion.py::test_chat_non_streaming_multi_turn_tool_calling[meta-llama/Llama-4-Scout-17B-16E-Instruct-tool_then_text] PASSED [ 58%]
llama_stack_evals/functional_tests/openai_api/test_chat_completion.py::test_chat_streaming_multi_turn_tool_calling[meta-llama/Llama-4-Scout-17B-16E-Instruct-text_then_tool] XPASS [ 61%]
llama_stack_evals/functional_tests/openai_api/test_chat_completion.py::test_chat_streaming_multi_turn_tool_calling[meta-llama/Llama-4-Scout-17B-16E-Instruct-tool_then_text] XPASS [ 64%]
llama_stack_evals/functional_tests/openai_api/test_chat_completion.py::test_chat_multi_turn_multiple_images[meta-llama/Llama-4-Scout-17B-16E-Instruct-stream=False] PASSED [ 67%]
llama_stack_evals/functional_tests/openai_api/test_chat_completion.py::test_chat_multi_turn_multiple_images[meta-llama/Llama-4-Scout-17B-16E-Instruct-stream=True] PASSED [ 70%]
llama_stack_evals/functional_tests/openai_api/test_chat_completion.py::test_chat_non_streaming_error_handling[meta-llama/Llama-4-Scout-17B-16E-Instruct-messages_missing] PASSED [ 73%]
llama_stack_evals/functional_tests/openai_api/test_chat_completion.py::test_chat_non_streaming_error_handling[meta-llama/Llama-4-Scout-17B-16E-Instruct-messages_role_invalid] PASSED [ 76%]
llama_stack_evals/functional_tests/openai_api/test_chat_completion.py::test_chat_non_streaming_error_handling[meta-llama/Llama-4-Scout-17B-16E-Instruct-tool_choice_invalid] PASSED [ 79%]
llama_stack_evals/functional_tests/openai_api/test_chat_completion.py::test_chat_non_streaming_error_handling[meta-llama/Llama-4-Scout-17B-16E-Instruct-tool_choice_no_tools] PASSED [ 82%]
llama_stack_evals/functional_tests/openai_api/test_chat_completion.py::test_chat_non_streaming_error_handling[meta-llama/Llama-4-Scout-17B-16E-Instruct-tools_type_invalid] FAILED [ 85%]
llama_stack_evals/functional_tests/openai_api/test_chat_completion.py::test_chat_streaming_error_handling[meta-llama/Llama-4-Scout-17B-16E-Instruct-messages_missing] PASSED [ 88%]
llama_stack_evals/functional_tests/openai_api/test_chat_completion.py::test_chat_streaming_error_handling[meta-llama/Llama-4-Scout-17B-16E-Instruct-messages_role_invalid] PASSED [ 91%]
llama_stack_evals/functional_tests/openai_api/test_chat_completion.py::test_chat_streaming_error_handling[meta-llama/Llama-4-Scout-17B-16E-Instruct-tool_choice_invalid] PASSED [ 94%]
llama_stack_evals/functional_tests/openai_api/test_chat_completion.py::test_chat_streaming_error_handling[meta-llama/Llama-4-Scout-17B-16E-Instruct-tool_choice_no_tools] PASSED [ 97%]
llama_stack_evals/functional_tests/openai_api/test_chat_completion.py::test_chat_streaming_error_handling[meta-llama/Llama-4-Scout-17B-16E-Instruct-tools_type_invalid] FAILED [100%]

=========================================== FAILURES ============================================
_ test_chat_non_streaming_error_handling[meta-llama/Llama-4-Scout-17B-16E-Instruct-tools_type_invalid] _

request = <FixtureRequest for <Function test_chat_non_streaming_error_handling[meta-llama/Llama-4-Scout-17B-16E-Instruct-tools_type_invalid]>>
openai_client = <openai.OpenAI object at 0x7fef7f10daf0>
model = 'meta-llama/Llama-4-Scout-17B-16E-Instruct', provider = 'vllm'
verification_config = {'cerebras': ProviderConfig(provider='cerebras', base_url='https://api.cerebras.ai/v1', api_key_var='CEREBRAS_API_KEY'...', 'Llama-4-Maverick-17B-128E-Instruct-FP8': 'Llama-4-Maverick-Instruct'}, test_exclusions={}, self_hosted=False), ...}
case = {'case_id': 'tools_type_invalid', 'input': {'messages': [{'content': 'Which planet do humans live on?', 'role': 'user'}], 'tools': [{'type': 'invalid'}]}, 'output': {'error': {'status_code': 400}}}

    @pytest.mark.parametrize(
        "case",
        chat_completion_test_cases["test_chat_input_validation"]["test_params"]["case"],
        ids=case_id_generator,
    )
    def test_chat_non_streaming_error_handling(request, openai_client, model, provider, verification_config, case):
        test_name_base = get_base_test_name(request)
        if should_skip_test(verification_config, provider, model, test_name_base):
            pytest.skip(f"Skipping {test_name_base} for model {model} on provider {provider} based on config.")
    
        with pytest.raises(APIError) as e:
            openai_client.chat.completions.create(
                model=model,
                messages=case["input"]["messages"],
                stream=False,
                tool_choice=case["input"]["tool_choice"] if "tool_choice" in case["input"] else None,
                tools=case["input"]["tools"] if "tools" in case["input"] else None,
            )
>       assert case["output"]["error"]["status_code"] == e.value.status_code
E       AssertionError: assert 400 == 500
E        +  where 500 = InternalServerError('Error code: 500').status_code
E        +    where InternalServerError('Error code: 500') = <ExceptionInfo InternalServerError('Error code: 500') tblen=5>.value

llama_stack_evals/functional_tests/openai_api/test_chat_completion.py:686: AssertionError
_ test_chat_streaming_error_handling[meta-llama/Llama-4-Scout-17B-16E-Instruct-tools_type_invalid] _

request = <FixtureRequest for <Function test_chat_streaming_error_handling[meta-llama/Llama-4-Scout-17B-16E-Instruct-tools_type_invalid]>>
openai_client = <openai.OpenAI object at 0x7fef7ee5eed0>
model = 'meta-llama/Llama-4-Scout-17B-16E-Instruct', provider = 'vllm'
verification_config = {'cerebras': ProviderConfig(provider='cerebras', base_url='https://api.cerebras.ai/v1', api_key_var='CEREBRAS_API_KEY'...', 'Llama-4-Maverick-17B-128E-Instruct-FP8': 'Llama-4-Maverick-Instruct'}, test_exclusions={}, self_hosted=False), ...}
case = {'case_id': 'tools_type_invalid', 'input': {'messages': [{'content': 'Which planet do humans live on?', 'role': 'user'}], 'tools': [{'type': 'invalid'}]}, 'output': {'error': {'status_code': 400}}}

    @pytest.mark.parametrize(
        "case",
        chat_completion_test_cases["test_chat_input_validation"]["test_params"]["case"],
        ids=case_id_generator,
    )
    def test_chat_streaming_error_handling(request, openai_client, model, provider, verification_config, case):
        test_name_base = get_base_test_name(request)
        if should_skip_test(verification_config, provider, model, test_name_base):
            pytest.skip(f"Skipping {test_name_base} for model {model} on provider {provider} based on config.")
    
        with pytest.raises(APIError) as e:
            response = openai_client.chat.completions.create(
                model=model,
                messages=case["input"]["messages"],
                stream=True,
                tool_choice=case["input"]["tool_choice"] if "tool_choice" in case["input"] else None,
                tools=case["input"]["tools"] if "tools" in case["input"] else None,
            )
            for _chunk in response:
                pass
>       assert str(case["output"]["error"]["status_code"]) in e.value.message
E       AssertionError: assert '400' in 'Error code: 500'
E        +  where '400' = str(400)
E        +  and   'Error code: 500' = InternalServerError('Error code: 500').message
E        +    where InternalServerError('Error code: 500') = <ExceptionInfo InternalServerError('Error code: 500') tblen=5>.value

llama_stack_evals/functional_tests/openai_api/test_chat_completion.py:709: AssertionError
==================================== short test summary info ====================================
FAILED llama_stack_evals/functional_tests/openai_api/test_chat_completion.py::test_chat_non_streaming_error_handling[meta-llama/Llama-4-Scout-17B-16E-Instruct-tools_type_invalid] - AssertionError: assert 400 == 500
FAILED llama_stack_evals/functional_tests/openai_api/test_chat_completion.py::test_chat_streaming_error_handling[meta-llama/Llama-4-Scout-17B-16E-Instruct-tools_type_invalid] - AssertionError: assert '400' in 'Error code: 500'
====================== 2 failed, 22 passed, 5 xfailed, 5 xpassed in 40.78s ======================

@wukaixingxp wukaixingxp marked this pull request as ready for review May 17, 2025 01:25
@wukaixingxp wukaixingxp requested a review from yeqcharlotte May 17, 2025 01:25
@wukaixingxp wukaixingxp changed the title WIP: fix_llama4_tool_call [Frontend] Update llama4 pythonic jinja template and llama4_pythonic parser May 17, 2025
@wukaixingxp (Contributor Author)

Tested using `pytest -s -vv tests/tool_use --models llama4 --extended` as suggested by @yeqcharlotte; only two errors, but they come from the model, not from our code:

  1. Calls get_weather instead of get_current_weather:
>       assert tool_calls[0].function.name == WEATHER_TOOL["function"]["name"]
E       AssertionError: assert 'get_weather' == 'get_current_weather'
E         
E         - get_current_weather
E         + get_weather

tests/tool_use/test_tool_calls.py:41: AssertionError
  2. Calls get_weather when there is no tool:
>       assert choice.finish_reason != "tool_calls"  # "stop" or "length"
E       assert 'tool_calls' != 'tool_calls'
E        +  where 'tool_calls' = Choice(finish_reason='tool_calls', index=0, logprobs=None, message=ChatCompletionMessage(content=None, refusal=None, role='assistant', annotations=None, audio=None, function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='chatcmpl-tool-09cfbb311c784573ab7cc83d14be7eb8', function=Function(arguments='{"city": "Dallas", "state": "TX", "unit": "fahrenheit"}', name='get_current_weather'), type='function')], reasoning_content=None), stop_reason=None).finish_reason

tests/tool_use/test_tool_calls.py:154: AssertionError

@wukaixingxp wukaixingxp requested a review from bbrowning May 20, 2025 22:26
Signed-off-by: Kai Wu <kaiwu@meta.com>
@wukaixingxp wukaixingxp force-pushed the fix_llama4_tool_call branch from 5be9c0e to 0471aed Compare May 20, 2025 23:06
@yeqcharlotte (Collaborator) left a comment

Please fix the formatting of examples/tool_chat_template_llama4_pythonic.jinja; it's using different indentation everywhere.

Signed-off-by: Kai Wu <kaiwu@meta.com>
@DarkLight1337 (Member)

cc @aarnphm @houseroad

Signed-off-by: Kai Wu <kaiwu@meta.com>
@aarnphm (Collaborator) left a comment

Can you also fix the pre-commit problem here?

A few questions, but in general this looks good. Thanks for adding this.

Signed-off-by: Kai Wu <kaiwu@meta.com>
Signed-off-by: Kai Wu <kaiwu@meta.com>
Signed-off-by: Kai Wu <kaiwu@meta.com>
@wukaixingxp wukaixingxp requested a review from yeqcharlotte May 21, 2025 17:02
Signed-off-by: Kai Wu <kaiwu@meta.com>
@wukaixingxp (Contributor Author)

Can you also fix the pre-commit problem here?

A few questions, but in general this looks good. Thanks for adding this.

Yes, I just fixed it. Now waiting for the final PR run to complete.

@yeqcharlotte (Collaborator) left a comment

thanks for adding the doc and unit tests!

@DarkLight1337 (Member) commented May 21, 2025

We have temporarily paused all non-essential PRs to fix the CI. Please merge from main after #18418 is resolved

@houseroad houseroad changed the title [Frontend] Update llama4 pythonic jinja template and llama4_pythonic parser [Frontend][Bug Fix] Update llama4 pythonic jinja template and llama4_pythonic parser May 21, 2025
@wukaixingxp (Contributor Author)

We have temporarily paused all non-essential PRs to fix the CI. Please merge from main after #18418 is resolved

@DarkLight1337 Can we merge this PR now, given that the issue has been fixed? CC: @yeqcharlotte @houseroad

@houseroad (Collaborator)

Can we rebase to main?

{%- endif %}
{%- if not tools_in_user_message is defined %}
{%- set tools_in_user_message = false %}
{%- set tool_definition = tool_definition ~ (tools | tojson(indent=4)) %}
@bbrowning (Contributor) left a review comment

Testing this chat template locally, there is a logic bug here that results in the tool_definition never making its way into the actual prompt. The tools from the ChatCompletion request come in as a tools variable, but we only set the tool_definition value if there's a custom_tools value passed in.

The line {%- set tool_definition = tool_definition ~ (tools | tojson(indent=4)) %} needs to move outside of this if statement block, and should happen after we check and set tools to none if not defined. Here's how the first few lines should look:

{{- bos_token }}
{%- if custom_tools is defined and custom_tools%}
    {%- set tools = custom_tools %}
{%- endif %}
{%- if not tools is defined %}
    {%- set tools = none %}
{%- endif %}
{%- set tool_definition = tool_definition ~ (tools | tojson(indent=4)) %}

Without this change, the actual function definitions never get inserted into the model's prompt, which results in it failing the majority of the bfclv3-api tests from the llama-stack-evals repo.

@wukaixingxp (Contributor Author)

Thanks for testing. You are right, fixing this now!

Signed-off-by: Kai Wu <kaiwu@meta.com>
@wukaixingxp (Contributor Author)

Thanks to @bbrowning's help, I just fixed a bug in the template; now tested with llama-stack-evals:

================================================================ FAILURES =================================================================
__________________ test_chat_non_streaming_error_handling[meta-llama/Llama-4-Scout-17B-16E-Instruct-tools_type_invalid] ___________________

request = <FixtureRequest for <Function test_chat_non_streaming_error_handling[meta-llama/Llama-4-Scout-17B-16E-Instruct-tools_type_invalid]>>
openai_client = <openai.OpenAI object at 0x7fe732426f90>, model = 'meta-llama/Llama-4-Scout-17B-16E-Instruct', provider = 'vllm'
verification_config = {'cerebras': ProviderConfig(provider='cerebras', base_url='https://api.cerebras.ai/v1', api_key_var='CEREBRAS_API_KEY'...verick-17B-128E-Instruct-FP8', canonical_id='Llama-4-Maverick-Instruct')], test_exclusions={}, self_hosted=False), ...}
case = {'case_id': 'tools_type_invalid', 'input': {'messages': [{'content': 'Which planet do humans live on?', 'role': 'user'}], 'tools': [{'type': 'invalid'}]}, 'output': {'error': {'status_code': 400}}}

    @pytest.mark.parametrize(
        "case",
        chat_completion_test_cases["test_chat_input_validation"]["test_params"]["case"],
        ids=case_id_generator,
    )
    def test_chat_non_streaming_error_handling(request, openai_client, model, provider, verification_config, case):
        test_name_base = get_base_test_name(request)
        if should_skip_test(verification_config, provider, model, test_name_base):
            pytest.skip(f"Skipping {test_name_base} for model {model} on provider {provider} based on config.")
    
        with pytest.raises(APIError) as e:
            openai_client.chat.completions.create(
                model=model,
                messages=case["input"]["messages"],
                stream=False,
                tool_choice=case["input"]["tool_choice"] if "tool_choice" in case["input"] else None,
                tools=case["input"]["tools"] if "tools" in case["input"] else None,
            )
>       assert case["output"]["error"]["status_code"] == e.value.status_code
E       AssertionError: assert 400 == 500
E        +  where 500 = InternalServerError('Error code: 500').status_code
E        +    where InternalServerError('Error code: 500') = <ExceptionInfo InternalServerError('Error code: 500') tblen=5>.value

llama_stack_evals/functional_tests/openai_api/test_chat_completion.py:686: AssertionError
____________________ test_chat_streaming_error_handling[meta-llama/Llama-4-Scout-17B-16E-Instruct-tools_type_invalid] _____________________

request = <FixtureRequest for <Function test_chat_streaming_error_handling[meta-llama/Llama-4-Scout-17B-16E-Instruct-tools_type_invalid]>>
openai_client = <openai.OpenAI object at 0x7fe73258d6d0>, model = 'meta-llama/Llama-4-Scout-17B-16E-Instruct', provider = 'vllm'
verification_config = {'cerebras': ProviderConfig(provider='cerebras', base_url='https://api.cerebras.ai/v1', api_key_var='CEREBRAS_API_KEY'...verick-17B-128E-Instruct-FP8', canonical_id='Llama-4-Maverick-Instruct')], test_exclusions={}, self_hosted=False), ...}
case = {'case_id': 'tools_type_invalid', 'input': {'messages': [{'content': 'Which planet do humans live on?', 'role': 'user'}], 'tools': [{'type': 'invalid'}]}, 'output': {'error': {'status_code': 400}}}

    @pytest.mark.parametrize(
        "case",
        chat_completion_test_cases["test_chat_input_validation"]["test_params"]["case"],
        ids=case_id_generator,
    )
    def test_chat_streaming_error_handling(request, openai_client, model, provider, verification_config, case):
        test_name_base = get_base_test_name(request)
        if should_skip_test(verification_config, provider, model, test_name_base):
            pytest.skip(f"Skipping {test_name_base} for model {model} on provider {provider} based on config.")
    
        with pytest.raises(APIError) as e:
            response = openai_client.chat.completions.create(
                model=model,
                messages=case["input"]["messages"],
                stream=True,
                tool_choice=case["input"]["tool_choice"] if "tool_choice" in case["input"] else None,
                tools=case["input"]["tools"] if "tools" in case["input"] else None,
            )
            for _chunk in response:
                pass
>       assert str(case["output"]["error"]["status_code"]) in e.value.message
E       AssertionError: assert '400' in 'Error code: 500'
E        +  where '400' = str(400)
E        +  and   'Error code: 500' = InternalServerError('Error code: 500').message
E        +    where InternalServerError('Error code: 500') = <ExceptionInfo InternalServerError('Error code: 500') tblen=5>.value

llama_stack_evals/functional_tests/openai_api/test_chat_completion.py:709: AssertionError
========================================================= short test summary info =========================================================
FAILED llama_stack_evals/functional_tests/openai_api/test_chat_completion.py::test_chat_non_streaming_error_handling[meta-llama/Llama-4-Scout-17B-16E-Instruct-tools_type_invalid] - AssertionError: assert 400 == 500
FAILED llama_stack_evals/functional_tests/openai_api/test_chat_completion.py::test_chat_streaming_error_handling[meta-llama/Llama-4-Scout-17B-16E-Instruct-tools_type_invalid] - AssertionError: assert '400' in 'Error code: 500'
=========================================== 2 failed, 22 passed, 5 xfailed, 5 xpassed in 30.94s ===========================================
Tests failed for provider=vllm, model=meta-llama/Llama-4-Scout-17B-16E-Instruct with exit code 1

@houseroad houseroad merged commit c91fe7b into vllm-project:main May 22, 2025
64 checks passed
zzzyq pushed a commit to zzzyq/vllm that referenced this pull request May 24, 2025
…pythonic parser (vllm-project#17917)

Signed-off-by: Kai Wu <kaiwu@meta.com>
Signed-off-by: Yuqi Zhang <yuqizhang@google.com>
gshtras added a commit to ROCm/vllm that referenced this pull request May 27, 2025
* Add files via upload: Add fused MoE kernel tuning configs (fp8_w8a8) for DeepSeek V3/R1 on a single-node 8x NVIDIA H20 96GB setup (vllm-project#18337)

* [Misc] Fix typo (vllm-project#18330)

* Neuron up mistral (vllm-project#18222)

Signed-off-by: Satyajith Chilappagari <satchill@amazon.com>

* fix CUDA_check redefinition in vllm-project#17918 (vllm-project#18287)

Signed-off-by: Lucia Fang <fanglu@fb.com>
Co-authored-by: Lucia (Lu) Fang <fanglu@meta.com>

* [neuron] fix authorization issue (vllm-project#18364)

Signed-off-by: Liangfu Chen <liangfc@amazon.com>

* [Misc] Allow `AutoWeightsLoader` to skip loading weights with specific substr in name (vllm-project#18358)

Signed-off-by: Isotr0py <2037008807@qq.com>

* [Core] [Bugfix]: tensor parallel with prompt embeds (vllm-project#18171)

Signed-off-by: Nan2018 <nan@protopia.ai>
Co-authored-by: Andrew Sansom <andrew@protopia.ai>

* [release] Change dockerhub username for TPU release (vllm-project#18389)

* [Bugfix] fix adding bias twice in ipex GPTQ quantization (vllm-project#18363)

Signed-off-by: rand-fly <randfly@outlook.com>

* [doc] update env variable export (vllm-project#18391)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Misc] Add LoRA code owner (vllm-project#18387)

Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>

* Update cpu.txt (vllm-project#18398)

Signed-off-by: 汪志鹏 <wangzhipeng628@gmail.com>

* [CI] Add mteb testing to test the accuracy of the embedding model (vllm-project#17175)

* [Bugfix] Fix MRoPE Errors in the Qwen-VL Model When Processing Pure Text (vllm-project#18407)

Co-authored-by: 松灵 <wpf272043@alibaba-inc.com>

* [Misc] refactor prompt embedding examples (vllm-project#18405)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Minor] Rename quantization nvfp4 to modelopt_fp4 (vllm-project#18356)

Signed-off-by: mgoin <mgoin64@gmail.com>

* [Model] use AutoWeightsLoader for bloom (vllm-project#18300)

Signed-off-by: calvin chen <120380290@qq.com>

* [Kernel] update comment for KV shape in unified triton attn (vllm-project#18099)

Signed-off-by: haochengxia <xhc_1007@163.com>

* fix:Build torch wheel inline rather than picking from nightly (vllm-project#18351)

Signed-off-by: Dilip Gowda Bhagavan <dilip.bhagavan@ibm.com>

* [TPU] Re-enable the Pallas MoE kernel (vllm-project#18025)

Signed-off-by: Michael Goin <mgoin64@gmail.com>

* [Bugfix] config.head_dim is now explicitly set to None (vllm-project#18432)

Signed-off-by: Gregory Shtrasberg <Gregory.Shtrasberg@amd.com>

* [Bug] Fix moe_sum signature (vllm-project#18440)

Signed-off-by: Bill Nell <bnell@redhat.com>

* Revert "[Bugfix] Fix MRoPE Errors in the Qwen-VL Model When Processing Pure Text (vllm-project#18407)" (vllm-project#18456)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Bugfix][Failing Test] Fix nixl connector test when promt size < block size (vllm-project#18429)

Signed-off-by: wwl2755 <wangwenlong2755@gmail.com>

* [Misc] MultiConnector._connectors type (vllm-project#18423)

Signed-off-by: nicklucche <nlucches@redhat.com>

* [Frontend] deprecate `--device` arg (vllm-project#18399)

Signed-off-by: Kebe <mail@kebe7jun.com>

* [V1] Fix general plugins not loaded in engine for multiproc (vllm-project#18326)

Signed-off-by: Yong Hoon Shin <yhshin@meta.com>

* [Misc] refactor disaggregated-prefill-v1 example (vllm-project#18474)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Bugfix][Failing Test] Fix test_events.py (vllm-project#18460)

Signed-off-by: rabi <ramishra@redhat.com>

* [MODEL] FalconH1 (vllm-project#18406)

Signed-off-by: dhia.rhaiem <dhia.rhaiem@tii.ae>
Co-authored-by: younesbelkada <younesbelkada@gmail.com>
Co-authored-by: Ilyas Chahed <ilyas.chahed@tii.ae>
Co-authored-by: Jingwei Zuo <jingwei.zuo@tii.ae>

* [Doc] fix arg docstring in linear layers (vllm-project#18410)

Signed-off-by: giantcroc <1204449533@qq.com>

* [Bugfix] Reduce moe_sum test size to avoid OOM (vllm-project#18484)

Signed-off-by: Bill Nell <bnell@redhat.com>

* [Build] fix Dockerfile shell (vllm-project#18402)

* [Misc] Update deprecation message for `--enable-reasoning` (vllm-project#18404)

* [ROCm][Kernel][V1] Enable AMD Radeon GPU Custom Paged Attention on v1 (vllm-project#17004)

Signed-off-by: Hosang Yoon <hosang.yoon@amd.com>

* Remove incorrect env value

* Revert "[v1] Support multiple KV cache groups in GPU model runner (vllm-project#17945) (vllm-project#18459)

Signed-off-by: Mark McLoughlin <markmc@redhat.com>

* [FEAT][ROCm] Upgrade AITER MLA v1 backend (vllm-project#18338)

Signed-off-by: vllmellm <vllm.ellm@embeddedllm.com>
Co-authored-by: Luka Govedič <ProExpertProg@users.noreply.github.com>

* [Bugfix] Consistent ascii handling in tool parsers (vllm-project#17704)

Signed-off-by: Sebastian Schönnenbeck <sebastian.schoennenbeck@comma-soft.com>

* [FalconH1] Fix output dtype in RMSNorm fallback path for Falcon-H1 (e.g. 0.5B) (vllm-project#18500)

Signed-off-by: dhia.rhaiem <dhia.rhaiem@tii.ae>
Co-authored-by: younesbelkada <younesbelkada@gmail.com>
Co-authored-by: Ilyas Chahed <ilyas.chahed@tii.ae>
Co-authored-by: Jingwei Zuo <jingwei.zuo@tii.ae>

* [MISC] update project urls in pyproject.toml (vllm-project#18519)

Signed-off-by: Andy Xie <andy.xning@gmail.com>

* [CI] Fix race condition with StatelessProcessGroup.barrier (vllm-project#18506)

Signed-off-by: Russell Bryant <rbryant@redhat.com>

* Intialize io_thread_pool attribute in the beginning. (vllm-project#18331)

Signed-off-by: rabi <ramishra@redhat.com>

* [Bugfix] Inconsistent token calculation compared to HF in llava family (vllm-project#18479)

Signed-off-by: jaycha <jaycha@ncsoft.com>

* [BugFix][DP] Send DP wave completion only from `dp_rank==0` (vllm-project#18502)

Signed-off-by: Nick Hill <nhill@redhat.com>
Co-authored-by: kourosh hakhamaneshi <kourosh@anyscale.com>

* [Bugfix][Model] Make Olmo2Model weight loading return loaded weights (vllm-project#18504)

Signed-off-by: Shane A <shanea@allenai.org>

* [Bugfix] Fix LoRA test (vllm-project#18518)

Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>

* [Doc] Fix invalid JSON in example args (vllm-project#18527)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Neuron] Update Dockerfile.neuron to use latest neuron release (2.23) (vllm-project#18512)

Signed-off-by: Satyajith Chilappagari <satchill@amazon.com>

* Update default neuron config for speculation (vllm-project#18274)

Signed-off-by: Elaine Zhao <elaineyz@amazon.com>
Co-authored-by: Shashwat Srijan <sssrijan@amazon.com>
Co-authored-by: Aakash Shetty <sheaak@amazon.com>

* Order sequence ids + config update to support specifying custom quantization layers (vllm-project#18279)

Signed-off-by: Elaine Zhao <elaineyz@amazon.com>
Co-authored-by: Tailin Pan <tailinpa@amazon.com>
Co-authored-by: Rishabh Rajesh <rishyraj@amazon.com>
Co-authored-by: Yishan McNabb <yishanm@amazon.com>
Co-authored-by: Patrick Lange <patlange@amazon.com>
Co-authored-by: Maxwell Goldberg <mgld@amazon.com>
Co-authored-by: Aakash Shetty <sheaak@amazon.com>

* [Bugfix] Fix MRoPE Errors in the Qwen-VL Model When Processing Pure Text (vllm-project#18526)

Co-authored-by: 松灵 <wpf272043@alibaba-inc.com>
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>
Co-authored-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Bugfix] Add kwargs to RequestOutput __init__ to be forward compatible (vllm-project#18513)

Signed-off-by: Linkun <github@lkchen.net>

* [CI/Build] Update bamba test model location (vllm-project#18544)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>

* [Doc] Support --stream arg in openai_completion_client.py script (vllm-project#18388)

Signed-off-by: googs1025 <googs1025@gmail.com>

* [Bugfix] Use random hidden states in dummy sampler run (vllm-project#18543)

Signed-off-by: Bowen Wang <abmfy@icloud.com>

* [Doc] Add stream flag for chat completion example (vllm-project#18524)

Signed-off-by: calvin chen <120380290@qq.com>

* [BugFix][CPU] Fix x86 SHM distributed module initialization (vllm-project#18536)

Signed-off-by: jiang.li <jiang1.li@intel.com>

* [Misc] improve Automatic Prefix Caching example (vllm-project#18554)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Misc] Call `ndarray.tobytes()` directly instead of `ndarray.data.tobytes()` (vllm-project#18347)

Signed-off-by: Lukas Geiger <lukas.geiger94@gmail.com>

* [Bugfix] make `test_openai_schema.py` pass (vllm-project#18224)

Signed-off-by: David Xia <david@davidxia.com>
Co-authored-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>

* [Platform] Move platform check to right place (vllm-project#18470)

Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>

* [Compile][Platform] Make PiecewiseBackend pluggable and extendable (vllm-project#18076)

Signed-off-by: Mengqing Cao <cmq0113@163.com>
Co-authored-by: youkaichao <youkaichao@gmail.com>

* [Build/CI] Fix CUDA 11.8 build (vllm-project#17679)

Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com>
Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com>
Signed-off-by: Tyler Michael Smith <tysmith@redhat.com>
Co-authored-by: Lucas Wilkinson <lwilkinson@neuralmagic.com>

* [Tool] Add NIXL installation script (vllm-project#18172)

Signed-off-by: Linkun <github@lkchen.net>

* [V1][Spec Decode][Bugfix] Load quantize weights for EAGLE (vllm-project#18290)

* [Frontend][Bug Fix] Update llama4 pythonic jinja template and llama4_pythonic parser (vllm-project#17917)

Signed-off-by: Kai Wu <kaiwu@meta.com>

* [Frontend] [Core] Add Tensorizer support for V1, LoRA adapter serialization and deserialization (vllm-project#17926)

Signed-off-by: Sanger Steel <sangersteel@gmail.com>

* [AMD] [P/D] Compute num gpus for ROCm correctly in run_accuracy_test.sh (vllm-project#18568)

Signed-off-by: Randall Smith <Randall.Smith@amd.com>

* Re-submit: Fix: Proper RGBA -> RGB conversion for PIL images. (vllm-project#18569)

Signed-off-by: Chenheli Hua <huachenheli@outlook.com>

* [V1][Spec Decoding] Use model_loader.get_model() to load models (vllm-project#18273)

Signed-off-by: Mark McLoughlin <markmc@redhat.com>

* Enable hybrid attention models for Transformers backend (vllm-project#18494)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>

* [Misc] refactor: simplify input validation and num_requests handling in _convert_v1_inputs (vllm-project#18482)

Signed-off-by: googs1025 <googs1025@gmail.com>

* [BugFix] Increase TP execute_model timeout (vllm-project#18558)

Signed-off-by: Nick Hill <nhill@redhat.com>

* [Bugfix] Set `KVTransferConfig.engine_id` in post_init (vllm-project#18576)

Signed-off-by: Linkun Chen <github@lkchen.net>

* [Spec Decode] Make EAGLE3 draft token ID mapping optional (vllm-project#18488)

Signed-off-by: Benjamin Chislett <benjamin.chislett@centml.ai>
Co-authored-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>

* [Neuron] Remove bypass on EAGLEConfig and add a test (vllm-project#18514)

Signed-off-by: Elaine Zhao <elaineyz@amazon.com>

* [Bugfix][Benchmarks] Fix a benchmark of deepspeed-mii backend to use api_key (vllm-project#17291)

Signed-off-by: Teruaki Ishizaki <teruaki.ishizaki@ntt.com>

* [Misc] Replace `cuda` hard code with `current_platform` (vllm-project#16983)

Signed-off-by: shen-shanshan <467638484@qq.com>

* [Hardware] correct method signatures for HPU,ROCm,XPU (vllm-project#18551)

Signed-off-by: Andy Xie <andy.xning@gmail.com>

* [V1] [Bugfix] eagle bugfix and enable correct lm_head for multimodal (vllm-project#18034)

Signed-off-by: Ronald Xu <ronaldxu@amazon.com>

* [Feature]Add async tensor parallelism using compilation pass (vllm-project#17882)

Signed-off-by: cascade812 <cascade812@outlook.com>

* [Doc] Update quickstart and install for cu128 using `--torch-backend=auto` (vllm-project#18505)

Signed-off-by: mgoin <mgoin64@gmail.com>

* [Feature][V1]: supports cached_tokens in response usage (vllm-project#18149)

Co-authored-by: simon-mo <xmo@berkeley.edu>

* [Bugfix] Add half type support in reshape_and_cache_cpu_impl on x86 cpu platform (vllm-project#18430)

Signed-off-by: Yuqi Zhang <yuqizhang@google.com>
Co-authored-by: Yuqi Zhang <yuqizhang@google.com>

* Migrate docs from Sphinx to MkDocs (vllm-project#18145)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>

* Revert "[V1] [Bugfix] eagle bugfix and enable correct lm_head for multimodal (vllm-project#18034)" (vllm-project#18600)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Bugfix][Model] Fix baichuan model loader for tp (vllm-project#18597)

Signed-off-by: Mengqing Cao <cmq0113@163.com>

* [V0][Bugfix] Fix parallel sampling performance regression when guided decoding is enabled (vllm-project#17731)

Signed-off-by: Madeesh Kannan <shadeMe@users.noreply.github.com>
Co-authored-by: Russell Bryant <rbryant@redhat.com>

* Add myself as docs code owner (vllm-project#18605)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>

* [Hardware][CPU] Update intel_extension_for_pytorch 2.7.0 and move to `requirements/cpu.txt`  (vllm-project#18542)

Signed-off-by: Kay Yan <kay.yan@daocloud.io>

* [CI] fix kv_cache_type argument (vllm-project#18594)

Signed-off-by: Andy Xie <andy.xning@gmail.com>

* [Doc] Fix indent of contributing to vllm (vllm-project#18611)

Signed-off-by: Zerohertz <ohg3417@gmail.com>

* Replace `{func}` with mkdocs style links (vllm-project#18610)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>

* [CI/Build] Fix V1 flag being set in entrypoints tests (vllm-project#18598)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* Fix examples with code blocks in docs (vllm-project#18609)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>

* [Bugfix] Fix transformers model impl ignored for mixtral quant (vllm-project#18602)

Signed-off-by: Tristan Leclercq <tristanleclercq@gmail.com>

* Include private attributes in API documentation (vllm-project#18614)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>

* [Misc] add Haystack integration (vllm-project#18601)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Bugfix][Build/CI] Fixup CUDA compiler version check for CUDA_SUPPORTED_ARCHS (vllm-project#18579)

* [Doc] Fix markdown list indentation for MkDocs rendering (vllm-project#18620)

Signed-off-by: Zerohertz <ohg3417@gmail.com>

* [Doc] Use a different color for the announcement (vllm-project#18616)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* Refactor pplx init logic to make it modular (prepare for deepep) (vllm-project#18200)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* Fix figures in design doc (vllm-project#18612)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>

* [Docs] Change mkdocs to not use directory urls (vllm-project#18622)

Signed-off-by: mgoin <mgoin64@gmail.com>

* [v1] Redo "Support multiple KV cache groups in GPU model runner (vllm-project#17945)" (vllm-project#18593)

Signed-off-by: Chen Zhang <zhangch99@outlook.com>

* [Doc] fix list formatting (vllm-project#18624)

Signed-off-by: David Xia <david@davidxia.com>

* [Doc] Fix top-level API links/docs (vllm-project#18621)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Doc] Avoid documenting dynamic / internal modules (vllm-project#18626)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Doc] Fix broken links and unlinked docs, add shortcuts to home sidebar (vllm-project#18627)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [V1] Support Deepseek MTP (vllm-project#18435)

Signed-off-by: Rui Qiao <ruisearch42@gmail.com>
Signed-off-by: YaoJiayi <120040070@link.cuhk.edu.cn>
Co-authored-by: Rui Qiao <ruisearch42@gmail.com>

* Use prebuilt FlashInfer x86_64 PyTorch 2.7 CUDA 12.8 wheel for CI (vllm-project#18537)

Signed-off-by: Huy Do <huydhn@gmail.com>

* [CI] Enable test_initialization to run on V1 (vllm-project#16736)

Signed-off-by: mgoin <mgoin64@gmail.com>

* [Doc] Update references to doc files (vllm-project#18637)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [ModelOpt] Introduce VLLM_MAX_TOKENS_PER_EXPERT_FP4_MOE env var to control blockscale tensor allocation (vllm-project#18160)

Signed-off-by: Pavani Majety <pmajety@nvidia.com>

* [Bugfix] Migrate to REGEX Library to prevent catastrophic backtracking (vllm-project#18454)

Signed-off-by: Crucifixion-Fxl <xmufxl@gmail.com>
Co-authored-by: Crucifixion-Fxl <xmufxl@gmail.com>

* [Bugfix][Nixl] Fix Preemption Bug (vllm-project#18631)

Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com>

* config.py: Clarify that only local GGUF checkpoints are supported. (vllm-project#18623)

Signed-off-by: Mathieu Bordere <mathieu@letmetweakit.com>

* FIX MOE issue in AutoRound format (vllm-project#18586)

Signed-off-by: wenhuach21 <wenhua.cheng@intel.com>

* [V1][Spec Decode] Small refactors to improve eagle bookkeeping performance (vllm-project#18424)

Signed-off-by: qizixi <qizixi@meta.com>

* [Frontend] improve vllm serve --help display (vllm-project#18643)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Model] Add support for Qwen2.5-Omni-7B-AWQ (Qwen2_5OmniForConditionalGeneration) (vllm-project#18647)

* [V1][Spec Decode] Support multi-layer eagle draft model (vllm-project#18030)

Signed-off-by: qizixi <qizixi@meta.com>

* [Doc] Update README links, mark external links (vllm-project#18635)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [MISC][pre-commit] Add pre-commit check for triton import (vllm-project#17716)

Signed-off-by: Mengqing Cao <cmq0113@163.com>

* [Doc] Fix indentation problems in V0 Paged Attention docs (vllm-project#18659)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Doc] Add community links (vllm-project#18657)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Model] use AutoWeightsLoader for gpt2 (vllm-project#18625)

Signed-off-by: zt2370 <ztang2370@gmail.com>

* [Doc] Reorganize user guide (vllm-project#18661)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [CI/Build] `chmod +x` to `cleanup_pr_body.sh` (vllm-project#18650)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [MISC] typo fix and clean import (vllm-project#18664)

Signed-off-by: Andy Xie <andy.xning@gmail.com>

* [BugFix] Fix import error for fused_moe (vllm-project#18642)

Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>

* [CI] enforce import regex instead of re (vllm-project#18665)

Signed-off-by: Aaron Pham <contact@aarnphm.xyz>

* fix(regression): clone from reference items (vllm-project#18662)

Signed-off-by: Aaron Pham <contact@aarnphm.xyz>

* [CI/Build] fix permission denied issue (vllm-project#18645)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [BugFix][Spec Decode] Improve Prefix Caching Logic in Speculative Decoding (vllm-project#18668)

Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>

* [V1] Fix _pickle.PicklingError: Can't pickle <class 'transformers_modules.deepseek-ai.DeepSeek-V2-Lite... (vllm-project#18640)

Signed-off-by: Seiji Eicher <seiji@anyscale.com>

* [MISC] correct signature for LoaderFunction (vllm-project#18670)

Signed-off-by: Andy Xie <andy.xning@gmail.com>

* [Misc] Replace `cuda` hard code with `current_platform` in Ray (vllm-project#14668)

Signed-off-by: noemotiovon <757486878@qq.com>

* [Misc][ModelScope] Change to use runtime VLLM_USE_MODELSCOPE (vllm-project#18655)

Signed-off-by: Mengqing Cao <cmq0113@163.com>
Signed-off-by: Isotr0py <2037008807@qq.com>
Co-authored-by: Isotr0py <2037008807@qq.com>

* [VLM] Initialize video input support for InternVL models (vllm-project#18499)

Signed-off-by: Isotr0py <2037008807@qq.com>
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>

* Speed up the `kernels/quantization/` tests (vllm-project#18669)

Signed-off-by: mgoin <mgoin64@gmail.com>

* [BUGFIX] catch subclass first for try...except (vllm-project#18672)

Signed-off-by: Andy Xie <andy.xning@gmail.com>

* [Misc] Reduce logs on startup (vllm-project#18649)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [doc] fix broken links (vllm-project#18671)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [doc] improve readability (vllm-project#18675)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Bugfix] Fix cpu usage and cache hit stats reporting on cpu environment (vllm-project#18674)

Signed-off-by: zzzyq <zhangyuqi94@gmail.com>
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>

* [CI/build] fix no regex (vllm-project#18676)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Misc] small improvement (vllm-project#18680)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Bugfix] Fix profiling dummy data for Pixtral (vllm-project#18677)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Core][Multimodal] Convert PIL Image to array without data copy when hashing (vllm-project#18682)

Signed-off-by: Lukas Geiger <lukas.geiger94@gmail.com>

* [CI/Build][Doc] Update `gte-Qwen2-1.5B-instruct` usage (vllm-project#18683)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
Signed-off-by: Isotr0py <2037008807@qq.com>
Co-authored-by: Isotr0py <2037008807@qq.com>

* [Misc] Fixed the abnormally high TTFT issue in the PD disaggregation example (vllm-project#18644)

Signed-off-by: zhaohaidao <zhaohaidao2008@hotmail.com>
Signed-off-by: zhaohaiyuan <zhaohaiyuan@xiaohongshu.com>
Co-authored-by: zhaohaiyuan <zhaohaiyuan@xiaohongshu.com>

* refactor: simplify request handler, use positive condition check for handler assignment (vllm-project#18690)

Signed-off-by: googs1025 <googs1025@gmail.com>

* [Bugfix] Fix the lm_head in gpt_bigcode in lora mode (vllm-project#6357)

Signed-off-by: Max de Bayser <mbayser@br.ibm.com>
Signed-off-by: Max de Bayser <maxdebayser@gmail.com>

* [CI] add missing argument (vllm-project#18694)

Signed-off-by: Andy Xie <andy.xning@gmail.com>

* [GH] Add issue template for reporting CI failures (vllm-project#18696)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Doc] Fix issue template format (vllm-project#18699)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Bugfix] Fix Mistral-format models with sliding window (vllm-project#18693)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [CI/Build] Replace `math.isclose` with `pytest.approx` (vllm-project#18703)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [CI] fix dump_input for str type (vllm-project#18697)

Signed-off-by: Andy Xie <andy.xning@gmail.com>

* [Model] Add support for YARN in NemotronNAS models (vllm-project#18427)

Signed-off-by: Nave Assaf <nassaf@nvidia.com>

* [CI/Build] Split pooling and generation extended language models tests in CI (vllm-project#18705)

Signed-off-by: Isotr0py <2037008807@qq.com>

* [Hardware][Intel-Gaudi] [CI/Build] Add tensor parallel size = 2 test to HPU CI (vllm-project#18709)

Signed-off-by: Lukasz Durejko <ldurejko@habana.ai>

* [Misc] add AutoGen integration (vllm-project#18712)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>

* [Bugfix]: handle hf-xet CAS error when loading Qwen3 weights in vLLM (vllm-project#18701)

* [Doc] Improve API docs (vllm-project#18713)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Doc] Move examples and further reorganize user guide (vllm-project#18666)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Bugfix] Fix Llama GGUF initialization (vllm-project#18717)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [V1][Sampler] Improve performance of FlashInfer sampling by sampling logits instead of probs (vllm-project#18608)

* Convert `examples` to `ruff-format` (vllm-project#18400)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>

* [Model][Gemma3] Simplify image input validation (vllm-project#18710)

Signed-off-by: Lukas Geiger <lukas.geiger94@gmail.com>

* [Misc] improve web section group title display (vllm-project#18684)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [V1][Quantization] Add CUDA graph compatible v1 GGUF support (vllm-project#18646)

Signed-off-by: Isotr0py <mozf@mail2.sysu.edu.cn>
Signed-off-by: Isotr0py <2037008807@qq.com>

* [Model][Gemma3] Cast image pixel values already on CPU (vllm-project#18732)

Signed-off-by: Lukas Geiger <lukas.geiger94@gmail.com>

* [FEAT] [ROCm] Upgrade AITER Fused MoE kernels. (vllm-project#18271)

Signed-off-by: vllmellm <vllm.ellm@embeddedllm.com>

* [Doc] Update OOT model docs (vllm-project#18742)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Doc] Update reproducibility doc and example (vllm-project#18741)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Misc] improve docs (vllm-project#18734)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* feat(rocm-support): support mamba2 on rocm (vllm-project#18565)

Signed-off-by: Islam Almersawi <islam.almersawi@openinnovation.ai>
Co-authored-by: Islam Almersawi <islam.almersawi@openinnovation.ai>

* [Hardware][Intel-Gaudi] [CI/Build] Fix multiple containers using the same name in run-hpu-test.sh (vllm-project#18752)

Signed-off-by: Lukasz Durejko <ldurejko@habana.ai>

* [Doc] cleanup deprecated flag for doc (vllm-project#18715)

Signed-off-by: calvin chen <120380290@qq.com>

* Minor fix about MooncakeStoreConnector (vllm-project#18721)

Signed-off-by: baoloongmao <baoloongmao@tencent.com>

* [Build] fix cpu build missing libtbbmalloc.so (vllm-project#18744)

Signed-off-by: Kebe <mail@kebe7jun.com>

* [BUG FIX] minicpm (vllm-project#18739)

Signed-off-by: huangyuxiang03 <huangyx0321@gmail.com>
Co-authored-by: huangyuxiang03 <huangyx0321@gmail.com>

* [Doc] Convert Sphinx directives (`{class}`, `{meth}`, `{attr}`, ...) to MkDocs format for better documentation linking (vllm-project#18663)

Signed-off-by: Zerohertz <ohg3417@gmail.com>

* [CI/Build] Remove imports of built-in `re` (vllm-project#18750)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [V1][Metrics] Add API for accessing in-memory Prometheus metrics (vllm-project#17010)

Signed-off-by: Mark McLoughlin <markmc@redhat.com>

* Disable prefix cache by default for benchmark (vllm-project#18639)

Signed-off-by: cascade812 <cascade812@outlook.com>

* optimize get_kv_cache_torch_dtype (vllm-project#18531)

Signed-off-by: idellzheng <idellzheng@tencent.com>

* [Core] Automatically cast multi-modal input dtype (vllm-project#18756)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Bugfix] Mistral tool calling when content is list (vllm-project#18729)

Signed-off-by: mgoin <mgoin64@gmail.com>

---------

Signed-off-by: Satyajith Chilappagari <satchill@amazon.com>
Signed-off-by: Lucia Fang <fanglu@fb.com>
Signed-off-by: Liangfu Chen <liangfc@amazon.com>
Signed-off-by: Isotr0py <2037008807@qq.com>
Signed-off-by: Nan2018 <nan@protopia.ai>
Signed-off-by: rand-fly <randfly@outlook.com>
Signed-off-by: reidliu41 <reid201711@gmail.com>
Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>
Signed-off-by: 汪志鹏 <wangzhipeng628@gmail.com>
Signed-off-by: mgoin <mgoin64@gmail.com>
Signed-off-by: calvin chen <120380290@qq.com>
Signed-off-by: haochengxia <xhc_1007@163.com>
Signed-off-by: Dilip Gowda Bhagavan <dilip.bhagavan@ibm.com>
Signed-off-by: Michael Goin <mgoin64@gmail.com>
Signed-off-by: Gregory Shtrasberg <Gregory.Shtrasberg@amd.com>
Signed-off-by: Bill Nell <bnell@redhat.com>
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
Signed-off-by: wwl2755 <wangwenlong2755@gmail.com>
Signed-off-by: nicklucche <nlucches@redhat.com>
Signed-off-by: Kebe <mail@kebe7jun.com>
Signed-off-by: Yong Hoon Shin <yhshin@meta.com>
Signed-off-by: rabi <ramishra@redhat.com>
Signed-off-by: dhia.rhaiem <dhia.rhaiem@tii.ae>
Signed-off-by: giantcroc <1204449533@qq.com>
Signed-off-by: Hosang Yoon <hosang.yoon@amd.com>
Signed-off-by: Mark McLoughlin <markmc@redhat.com>
Signed-off-by: vllmellm <vllm.ellm@embeddedllm.com>
Signed-off-by: Sebastian Schönnenbeck <sebastian.schoennenbeck@comma-soft.com>
Signed-off-by: Andy Xie <andy.xning@gmail.com>
Signed-off-by: Russell Bryant <rbryant@redhat.com>
Signed-off-by: jaycha <jaycha@ncsoft.com>
Signed-off-by: Nick Hill <nhill@redhat.com>
Signed-off-by: Shane A <shanea@allenai.org>
Signed-off-by: Elaine Zhao <elaineyz@amazon.com>
Signed-off-by: Linkun <github@lkchen.net>
Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
Signed-off-by: googs1025 <googs1025@gmail.com>
Signed-off-by: Bowen Wang <abmfy@icloud.com>
Signed-off-by: jiang.li <jiang1.li@intel.com>
Signed-off-by: Lukas Geiger <lukas.geiger94@gmail.com>
Signed-off-by: David Xia <david@davidxia.com>
Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
Signed-off-by: Mengqing Cao <cmq0113@163.com>
Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com>
Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com>
Signed-off-by: Tyler Michael Smith <tysmith@redhat.com>
Signed-off-by: Kai Wu <kaiwu@meta.com>
Signed-off-by: Sanger Steel <sangersteel@gmail.com>
Signed-off-by: Randall Smith <Randall.Smith@amd.com>
Signed-off-by: Chenheli Hua <huachenheli@outlook.com>
Signed-off-by: Linkun Chen <github@lkchen.net>
Signed-off-by: Benjamin Chislett <benjamin.chislett@centml.ai>
Signed-off-by: Teruaki Ishizaki <teruaki.ishizaki@ntt.com>
Signed-off-by: shen-shanshan <467638484@qq.com>
Signed-off-by: Ronald Xu <ronaldxu@amazon.com>
Signed-off-by: cascade812 <cascade812@outlook.com>
Signed-off-by: Yuqi Zhang <yuqizhang@google.com>
Signed-off-by: Madeesh Kannan <shadeMe@users.noreply.github.com>
Signed-off-by: Kay Yan <kay.yan@daocloud.io>
Signed-off-by: Zerohertz <ohg3417@gmail.com>
Signed-off-by: Tristan Leclercq <tristanleclercq@gmail.com>
Signed-off-by: youkaichao <youkaichao@gmail.com>
Signed-off-by: Chen Zhang <zhangch99@outlook.com>
Signed-off-by: Rui Qiao <ruisearch42@gmail.com>
Signed-off-by: YaoJiayi <120040070@link.cuhk.edu.cn>
Signed-off-by: Huy Do <huydhn@gmail.com>
Signed-off-by: Pavani Majety <pmajety@nvidia.com>
Signed-off-by: Crucifixion-Fxl <xmufxl@gmail.com>
Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com>
Signed-off-by: Mathieu Bordere <mathieu@letmetweakit.com>
Signed-off-by: wenhuach21 <wenhua.cheng@intel.com>
Signed-off-by: qizixi <qizixi@meta.com>
Signed-off-by: zt2370 <ztang2370@gmail.com>
Signed-off-by: Aaron Pham <contact@aarnphm.xyz>
Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
Signed-off-by: Seiji Eicher <seiji@anyscale.com>
Signed-off-by: noemotiovon <757486878@qq.com>
Signed-off-by: zzzyq <zhangyuqi94@gmail.com>
Signed-off-by: zhaohaidao <zhaohaidao2008@hotmail.com>
Signed-off-by: zhaohaiyuan <zhaohaiyuan@xiaohongshu.com>
Signed-off-by: Max de Bayser <mbayser@br.ibm.com>
Signed-off-by: Max de Bayser <maxdebayser@gmail.com>
Signed-off-by: Nave Assaf <nassaf@nvidia.com>
Signed-off-by: Lukasz Durejko <ldurejko@habana.ai>
Signed-off-by: Isotr0py <mozf@mail2.sysu.edu.cn>
Signed-off-by: Islam Almersawi <islam.almersawi@openinnovation.ai>
Signed-off-by: baoloongmao <baoloongmao@tencent.com>
Signed-off-by: huangyuxiang03 <huangyx0321@gmail.com>
Signed-off-by: idellzheng <idellzheng@tencent.com>
Co-authored-by: sunyicode0012 <116338547+sunyicode0012@users.noreply.github.com>
Co-authored-by: Gong Shufan <2624542821@qq.com>
Co-authored-by: Satyajith Chilappagari <satchill@amazon.com>
Co-authored-by: Lucia Fang <116399278+luccafong@users.noreply.github.com>
Co-authored-by: Lucia (Lu) Fang <fanglu@meta.com>
Co-authored-by: Liangfu Chen <liangfc@amazon.com>
Co-authored-by: Isotr0py <mozf@mail2.sysu.edu.cn>
Co-authored-by: Nan Qin <nan@protopia.ai>
Co-authored-by: Andrew Sansom <andrew@protopia.ai>
Co-authored-by: Kevin H. Luu <kevin@anyscale.com>
Co-authored-by: Random Fly <renfei8@live.cn>
Co-authored-by: Reid <61492567+reidliu41@users.noreply.github.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: Jee Jee Li <pandaleefree@gmail.com>
Co-authored-by: 汪志鹏 <wangzhipeng628@gmail.com>
Co-authored-by: wang.yuqi <noooop@126.com>
Co-authored-by: 燃 <wulipc@163.com>
Co-authored-by: 松灵 <wpf272043@alibaba-inc.com>
Co-authored-by: Michael Goin <mgoin64@gmail.com>
Co-authored-by: Calvin Chen <45745657+calvin0327@users.noreply.github.com>
Co-authored-by: Percy <xhc_1007@163.com>
Co-authored-by: Dilip Gowda Bhagavan <110233170+dilipgb@users.noreply.github.com>
Co-authored-by: bnellnm <49004751+bnellnm@users.noreply.github.com>
Co-authored-by: Cyrus Leung <tlleungac@connect.ust.hk>
Co-authored-by: wwl2755 <wangwenlong2755@gmail.com>
Co-authored-by: Nicolò Lucchesi <nlucches@redhat.com>
Co-authored-by: Kebe <mail@kebe7jun.com>
Co-authored-by: Yong Hoon Shin <48474650+sarckk@users.noreply.github.com>
Co-authored-by: Rabi Mishra <ramishra@redhat.com>
Co-authored-by: Dhia Eddine Rhaiem <163106757+dhiaEddineRhaiem@users.noreply.github.com>
Co-authored-by: younesbelkada <younesbelkada@gmail.com>
Co-authored-by: Ilyas Chahed <ilyas.chahed@tii.ae>
Co-authored-by: Jingwei Zuo <jingwei.zuo@tii.ae>
Co-authored-by: GiantCroc <1204449533@qq.com>
Co-authored-by: Hyogeun Oh (오효근) <ohg3417@gmail.com>
Co-authored-by: Hosang <156028780+hyoon1@users.noreply.github.com>
Co-authored-by: Mark McLoughlin <markmc@redhat.com>
Co-authored-by: vllmellm <vllm.ellm@embeddedllm.com>
Co-authored-by: Luka Govedič <ProExpertProg@users.noreply.github.com>
Co-authored-by: Sebastian Schoennenbeck <sebastian.schoennenbeck@comma-soft.com>
Co-authored-by: Ning Xie <andy.xning@gmail.com>
Co-authored-by: Russell Bryant <rbryant@redhat.com>
Co-authored-by: youngrok cha <line0930@gmail.com>
Co-authored-by: Nick Hill <nhill@redhat.com>
Co-authored-by: kourosh hakhamaneshi <kourosh@anyscale.com>
Co-authored-by: Shane A <shanea@allenai.org>
Co-authored-by: aws-elaineyz <elaineyz@amazon.com>
Co-authored-by: Shashwat Srijan <sssrijan@amazon.com>
Co-authored-by: Aakash Shetty <sheaak@amazon.com>
Co-authored-by: Tailin Pan <tailinpa@amazon.com>
Co-authored-by: Rishabh Rajesh <rishyraj@amazon.com>
Co-authored-by: Yishan McNabb <yishanm@amazon.com>
Co-authored-by: Patrick Lange <patlange@amazon.com>
Co-authored-by: Maxwell Goldberg <mgld@amazon.com>
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>
Co-authored-by: lkchen <github@lkchen.net>
Co-authored-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
Co-authored-by: CYJiang <86391540+googs1025@users.noreply.github.com>
Co-authored-by: Bowen Wang <abmfy@icloud.com>
Co-authored-by: Li, Jiang <jiang1.li@intel.com>
Co-authored-by: Lukas Geiger <lukas.geiger94@gmail.com>
Co-authored-by: David Xia <david@davidxia.com>
Co-authored-by: wangxiyuan <wangxiyuan1007@gmail.com>
Co-authored-by: Mengqing Cao <cmq0113@163.com>
Co-authored-by: youkaichao <youkaichao@gmail.com>
Co-authored-by: Tyler Michael Smith <tyler@neuralmagic.com>
Co-authored-by: Lucas Wilkinson <lwilkinson@neuralmagic.com>
Co-authored-by: Ekagra Ranjan <3116519+ekagra-ranjan@users.noreply.github.com>
Co-authored-by: Kai Wu <kaiwu@meta.com>
Co-authored-by: Sanger Steel <sangersteel@gmail.com>
Co-authored-by: rasmith <Randall.Smith@amd.com>
Co-authored-by: Chenheli Hua <huachenheli@outlook.com>
Co-authored-by: Benjamin Chislett <chislett.ben@gmail.com>
Co-authored-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
Co-authored-by: Teruaki Ishizaki <tell.ishi@gmail.com>
Co-authored-by: Shanshan Shen <467638484@qq.com>
Co-authored-by: RonaldBXu <72748153+RonaldBXu@users.noreply.github.com>
Co-authored-by: cascade <cascade812@outlook.com>
Co-authored-by: Chauncey <chaunceyjiang@gmail.com>
Co-authored-by: simon-mo <xmo@berkeley.edu>
Co-authored-by: Yuqi Zhang <zhangyuqi94@gmail.com>
Co-authored-by: Yuqi Zhang <yuqizhang@google.com>
Co-authored-by: Madeesh Kannan <shadeMe@users.noreply.github.com>
Co-authored-by: Kay Yan <kay.yan@daocloud.io>
Co-authored-by: Tristan Leclercq <49700633+tristanleclercq@users.noreply.github.com>
Co-authored-by: Simon Mo <simon.mo@hey.com>
Co-authored-by: Chen Zhang <zhangch99@outlook.com>
Co-authored-by: Jiayi Yao <82156730+YaoJiayi@users.noreply.github.com>
Co-authored-by: Rui Qiao <ruisearch42@gmail.com>
Co-authored-by: Huy Do <huydhn@gmail.com>
Co-authored-by: Pavani Majety <pmajety@nvidia.com>
Co-authored-by: Feng XiaoLong <79261065+Crucifixion-Fxl@users.noreply.github.com>
Co-authored-by: Crucifixion-Fxl <xmufxl@gmail.com>
Co-authored-by: Robert Shaw <114415538+robertgshaw2-redhat@users.noreply.github.com>
Co-authored-by: Mathieu Borderé <mathieu@bordere.org>
Co-authored-by: Wenhua Cheng <wenhua.cheng@intel.com>
Co-authored-by: qizixi <22851944+zixi-qi@users.noreply.github.com>
Co-authored-by: Yuanhao WU <Nalkey@users.noreply.github.com>
Co-authored-by: ztang2370 <ztang2370@gmail.com>
Co-authored-by: Aaron Pham <contact@aarnphm.xyz>
Co-authored-by: Seiji Eicher <58963096+eicherseiji@users.noreply.github.com>
Co-authored-by: Chenguang Li <757486878@qq.com>
Co-authored-by: Isotr0py <2037008807@qq.com>
Co-authored-by: AlexZhao <zhaohaidao2008@hotmail.com>
Co-authored-by: zhaohaiyuan <zhaohaiyuan@xiaohongshu.com>
Co-authored-by: Maximilien de Bayser <mbayser@br.ibm.com>
Co-authored-by: Naveassaf <55059536+Naveassaf@users.noreply.github.com>
Co-authored-by: Łukasz Durejko <lukasz.durejko@intel.com>
Co-authored-by: dylan <xuhao296@qq.com>
Co-authored-by: almersawi <43927639+almersawi@users.noreply.github.com>
Co-authored-by: Islam Almersawi <islam.almersawi@openinnovation.ai>
Co-authored-by: Łukasz Durejko <ldurejko@habana.ai>
Co-authored-by: maobaolong <baoloongmao@tencent.com>
Co-authored-by: Shawn Huang <57223022+huangyuxiang03@users.noreply.github.com>
Co-authored-by: huangyuxiang03 <huangyx0321@gmail.com>
Co-authored-by: chunxiaozheng <55471457+chunxiaozheng@users.noreply.github.com>