[Docs] Add dedicated tool calling page to docs #10554

Merged Dec 10, 2024 (9 commits)
Changes from 1 commit
mgoin authored Nov 26, 2024
commit 3f65b75de3b69cf771417c61f6897c5d5d6414a7
docs/source/models/tool_calling.md (7 changes: 4 additions & 3 deletions)
@@ -13,7 +13,7 @@ vllm serve meta-llama/Llama-3.1-8B-Instruct \
     --chat-template examples/tool_chat_template_llama3_json.jinja
 ```
 
-Next make a request to extract structured data using function calling:
+Next, make a request to the model that should result in it using the available tools:
 
 ```python
 from openai import OpenAI
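
The rest of the Python example is collapsed in this diff view. As a rough sketch of what such a request could look like (the `get_weather` tool definition, model name, and client setup below are assumptions for illustration, not the PR's actual collapsed code):

```python
from openai import OpenAI
import json

# Point the client at a local vLLM server; the API key is unused but required.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="dummy")

# Hypothetical tool definition, mirroring the get_weather function named later in the doc.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "City and state"},
            },
            "required": ["location"],
        },
    },
}]

response = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",
    messages=[{"role": "user", "content": "What's the weather in Dallas, TX?"}],
    tools=tools,
    tool_choice="auto",  # let the model decide whether to call a tool
)

# With tool_choice="auto", the model may return zero or more tool calls.
tool_calls = response.choices[0].message.tool_calls or []
for call in tool_calls:
    args = json.loads(call.function.arguments)  # arguments arrive as a JSON string
    print(call.function.name, args)
```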
@@ -67,7 +67,7 @@ This example demonstrates:
 - Making a request with `tool_choice="auto"`
 - Handling the structured response and executing the corresponding function
 
-You can also specify a particular function using named function calling by setting `tool_choice={"type": "function", "function": {"name": "get_weather"}}`.
+You can also specify a particular function using named function calling by setting `tool_choice={"type": "function", "function": {"name": "get_weather"}}`. Note that this will use the guided decoding backend, so the first time it is used there will be several seconds of latency (or more) while the FSM is compiled, before it is cached for subsequent requests.
 
 Remember that it's the caller's responsibility to:
 1. Define appropriate tools in the request
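
Continuing the sketch above (same `client` and `tools`; all names illustrative), forcing the named function might look like:

```python
# Force the model to call get_weather rather than letting it choose.
# The first such request compiles the guided-decoding FSM, so expect
# extra latency before the compiled FSM is cached.
response = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",
    messages=[{"role": "user", "content": "What's the weather in Dallas, TX?"}],
    tools=tools,
    tool_choice={"type": "function", "function": {"name": "get_weather"}},
)

# With a named tool_choice, the response is guaranteed to contain a
# parsable call to that function (though not necessarily a good one).
call = response.choices[0].message.tool_calls[0]
print(call.function.name, call.function.arguments)
```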
@@ -81,7 +81,8 @@ vLLM supports named function calling in the chat completion API by default. It d
 enabled by default, and will work with any supported model. You are guaranteed a validly-parsable function call - not a
 high-quality one.
 
-vLLM will use guided decoding to ensure the response matches the tool parameter object defined by the JSON schema in the `tools` parameter.
+vLLM will use guided decoding to ensure the response matches the tool parameter object defined by the JSON schema in the `tools` parameter.
+For best results, we recommend specifying the expected output format / schema in the prompt, so that the model's intended generation is aligned with the schema that the guided decoding backend forces it to generate.
 
 To use a named function, you need to define the functions in the `tools` parameter of the chat completion request, and
 specify the `name` of one of the tools in the `tool_choice` parameter of the chat completion request.
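
As a hedged illustration of the schema-in-prompt recommendation above (reusing the sketch's `client` and `tools`; the exact prompt wording is an assumption), one could restate the expected schema in the system prompt:

```python
# Illustrative only: restate the expected argument schema in the prompt so the
# model's natural generation matches what guided decoding will enforce.
system_prompt = (
    "You can call the get_weather function. Its arguments must be a JSON "
    'object matching this schema: {"location": "<city and state>"}.'
)

response = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "What's the weather in Dallas, TX?"},
    ],
    tools=tools,
    tool_choice={"type": "function", "function": {"name": "get_weather"}},
)
```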