Support Message API for chatbot and chatinterface (#8422)
* first commit
* Add code
* Tests + code
* lint
* Add code
* notebook
* add changeset
* type
* Add client test
* type
* Add code
* Chatbot type
* Add code
* test chatbot
* fix e2e test
* js tests
* Consolidate Error and Tool message. Allow Messages in postprocess
* Rename to messages
* fix tests
* notebook clean
* More tests and messages
* add changeset
* notebook
* client test
* Fix issues
* Chatbot docs
* add changeset
* Add image
* Add img tag
* Address comments
* Add code
* Revert chatinterface streaming change. Use title in metadata. Address pngwn comments
* Add code
* changelog highlight

---------

Co-authored-by: gradio-pr-bot <gradio-pr-bot@users.noreply.github.com>
1 parent 936c713 · commit 4221290
Showing 37 changed files with 1,868 additions and 668 deletions.
@@ -0,0 +1,93 @@
---
"@gradio/chatbot": minor
"@gradio/tootils": minor
"gradio": minor
"website": minor
---

highlight:

#### Support message format in chatbot 💬

`gr.Chatbot` and `gr.ChatInterface` now support the [Messages API](https://huggingface.co/docs/text-generation-inference/en/messages_api#messages-api), which is fully compatible with LLM API providers such as Hugging Face Text Generation Inference, OpenAI's chat completions API, and the Llama.cpp server.

Building Gradio applications around these LLM solutions is now even easier!
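
Because the history in the `messages` format is already a list of role/content dictionaries, it can be forwarded to an OpenAI-compatible endpoint without any conversion. A minimal sketch (the client setup and model name below are illustrative assumptions, not part of this release):

```python
from openai import OpenAI

client = OpenAI()  # assumes an OpenAI-compatible endpoint is configured (e.g. TGI's Messages API)

history = [
    {"role": "user", "content": "What is the capital of France?"},
]

# The chatbot history is already in the role/content shape these providers expect,
# so it can be passed straight to the chat completions endpoint.
response = client.chat.completions.create(model="gpt-4o-mini", messages=history)
print(response.choices[0].message.content)
```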

`gr.Chatbot` and `gr.ChatInterface` now have a `msg_format` parameter that accepts two values: `'tuples'` and `'messages'`. If set to `'tuples'`, the default chatbot data format is expected. If set to `'messages'`, a list of dictionaries with `content` and `role` keys is expected. See below:

```python
def chat_greeter(msg, history):
    history.append({"role": "assistant", "content": "Hello!"})
    return history
```
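
The same `msg_format` argument is exposed on `gr.ChatInterface`. A minimal, illustrative sketch (the echo function is hypothetical and simply returns a reply string, following the usual ChatInterface `fn(message, history)` contract):

```python
import gradio as gr

def echo(message, history):
    # With msg_format="messages", history arrives as a list of
    # {"role": ..., "content": ...} dictionaries.
    return f"You said: {message} (after {len(history)} earlier messages)"

demo = gr.ChatInterface(echo, msg_format="messages")

if __name__ == "__main__":
    demo.launch()
```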

Additionally, Gradio now exposes a `gr.ChatMessage` dataclass you can use for IDE type hints and autocompletion.

<img width="852" alt="image" src="https://github.com/freddyaboulton/freddyboulton/assets/41651716/d283e8f3-b194-466a-8194-c7e697dca9ad">
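
For reference, a message built with the dataclass might look like the following (the field values are illustrative; `role`, `content`, and the optional `metadata` field are the same ones used in the examples below):

```python
from gradio import ChatMessage

msg = ChatMessage(
    role="assistant",
    content="Hello! How can I help?",
    metadata={"title": "👋 Greeting"},  # optional; drives the expandable tool/thought display shown below
)
```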

#### Tool use in Chatbot 🛠️

The Gradio Chatbot can now natively display tool usage and the intermediate thoughts common in agent and chain-of-thought workflows!

If you are using the new "messages" format, simply add a `metadata` key set to a dictionary containing a `title` key and value. The assistant message will then be displayed in an expandable message box showing the result of a tool call or intermediate step.

```python
import gradio as gr
from gradio import ChatMessage
import time

def generate_response(history):
    history.append(ChatMessage(role="user", content="What is the weather in San Francisco right now?"))
    yield history
    time.sleep(0.25)

    history.append(ChatMessage(role="assistant",
                               content="In order to find the current weather in San Francisco, I will need to use my weather tool."))
    yield history
    time.sleep(0.25)

    history.append(ChatMessage(role="assistant",
                               content="API Error when connecting to weather service.",
                               metadata={"title": "💥 Error using tool 'Weather'"}))
    yield history
    time.sleep(0.25)

    history.append(ChatMessage(role="assistant",
                               content="I will try again"))
    yield history
    time.sleep(0.25)

    history.append(ChatMessage(role="assistant",
                               content="Weather 72 degrees Fahrenheit with 20% chance of rain.",
                               metadata={"title": "🛠️ Used tool 'Weather'"}))
    yield history
    time.sleep(0.25)

    history.append(ChatMessage(role="assistant",
                               content="Now that the API succeeded I can complete my task."))
    yield history
    time.sleep(0.25)

    history.append(ChatMessage(role="assistant",
                               content="It's a sunny day in San Francisco with a current temperature of 72 degrees Fahrenheit and a 20% chance of rain. Enjoy the weather!"))
    yield history


with gr.Blocks() as demo:
    chatbot = gr.Chatbot(msg_format="messages")
    button = gr.Button("Get San Francisco Weather")
    button.click(generate_response, chatbot, chatbot)

if __name__ == "__main__":
    demo.launch()
```

![tool-box-demo](https://github.com/freddyaboulton/freddyboulton/assets/41651716/cf73ecc9-90ac-42ce-bca5-768e0cc00a48)
demo/chatbot_core_components_simple/messages_testcase.py
101 changes: 101 additions & 0 deletions
@@ -0,0 +1,101 @@
import gradio as gr
import random

# Chatbot demo with multimodal input (text, markdown, LaTeX, code blocks, image, audio, & video). Plus shows support for streaming text.

color_map = {
    "harmful": "crimson",
    "neutral": "gray",
    "beneficial": "green",
}

def html_src(harm_level):
    return f"""
<div style="display: flex; gap: 5px;padding: 2px 4px;margin-top: -40px">
  <div style="background-color: {color_map[harm_level]}; padding: 2px; border-radius: 5px;">
  {harm_level}
  </div>
</div>
"""

def print_like_dislike(x: gr.LikeData):
    print(x.index, x.value, x.liked)

def add_message(history, message):
    # Uploaded files become user messages whose content is a {"path": ...} dict
    for x in message["files"]:
        history.append({"role": "user", "content": {"path": x}})
    if message["text"] is not None:
        history.append({"role": "user", "content": message["text"]})
    return history, gr.MultimodalTextbox(value=None, interactive=False)

def bot(history, response_type):
    # Return a different Gradio component as the assistant message depending on the selected type
    if response_type == "gallery":
        msg = {"role": "assistant", "content": gr.Gallery(
            ["https://raw.githubusercontent.com/gradio-app/gradio/main/test/test_files/bus.png",
             "https://raw.githubusercontent.com/gradio-app/gradio/main/test/test_files/bus.png"]
        )}
    elif response_type == "image":
        msg = {"role": "assistant",
               "content": gr.Image("https://raw.githubusercontent.com/gradio-app/gradio/main/test/test_files/bus.png")}
    elif response_type == "video":
        msg = {"role": "assistant",
               "content": gr.Video("https://github.com/gradio-app/gradio/raw/main/demo/video_component/files/world.mp4")}
    elif response_type == "audio":
        msg = {"role": "assistant",
               "content": gr.Audio("https://github.com/gradio-app/gradio/raw/main/test/test_files/audio_sample.wav")}
    elif response_type == "html":
        msg = {"role": "assistant",
               "content": gr.HTML(
                   html_src(random.choice(["harmful", "neutral", "beneficial"]))
               )}
    else:
        msg = {"role": "assistant", "content": "Cool!"}
    history.append(msg)
    return history

with gr.Blocks(fill_height=True) as demo:
    chatbot = gr.Chatbot(
        elem_id="chatbot",
        bubble_full_width=False,
        scale=1,
        msg_format="messages"
    )
    response_type = gr.Radio(
        [
            "image",
            "text",
            "gallery",
            "video",
            "audio",
            "html",
        ],
        value="text",
        label="Response Type",
    )

    chat_input = gr.MultimodalTextbox(
        interactive=True,
        placeholder="Enter message or upload file...",
        show_label=False,
    )

    chat_msg = chat_input.submit(
        add_message, [chatbot, chat_input], [chatbot, chat_input]
    )
    bot_msg = chat_msg.then(
        bot, [chatbot, response_type], chatbot, api_name="bot_response"
    )
    bot_msg.then(lambda: gr.MultimodalTextbox(interactive=True), None, [chat_input])

    chatbot.like(print_like_dislike, None, None)

demo.queue()
if __name__ == "__main__":
    demo.launch()