Improve gradio demo (lm-sys#2323)
merrymercy authored Aug 27, 2023
1 parent 346c173 commit da8d0cd
Showing 6 changed files with 9 additions and 8 deletions.
2 changes: 1 addition & 1 deletion docs/commands/webserver.md

````diff
@@ -27,7 +27,7 @@ cd fastchat_logs/server0
 export OPENAI_API_KEY=
 export ANTHROPIC_API_KEY=
-python3 -m fastchat.serve.gradio_web_server_multi --controller http://localhost:21001 --concurrency 10 --add-chatgpt --add-claude --add-palm --anony-only --elo ~/elo_results/elo_results_20230619.pkl --leaderboard-table-file ~/elo_results/leaderboard_table_20230619.csv
+python3 -m fastchat.serve.gradio_web_server_multi --controller http://localhost:21001 --concurrency 10 --add-chatgpt --add-claude --add-palm --anony-only --elo ~/elo_results/elo_results_20230802.pkl --leaderboard-table-file ~/elo_results/leaderboard_table_20230802.csv --register ~/elo_results/register_oai_models.json
 python3 backup_logs.py
 ```
````
2 changes: 1 addition & 1 deletion fastchat/model/model_adapter.py

```diff
@@ -1009,7 +1009,7 @@ def match(self, model_path: str):
 
     def get_default_conv_template(self, model_path: str) -> Conversation:
         model_path = model_path.lower()
-        if "13b" in model_path or "30b" in model_path:
+        if "13b" in model_path or "30b" in model_path or "70b" in model_path:
             return get_conv_template("vicuna_v1.1")
         else:
             # TODO: use the recommended template for 7B
```
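For context, the hunk above extends FastChat's substring match on the model path so that 70B checkpoints also get the `vicuna_v1.1` conversation template. A standalone sketch of that selection logic (the function name `pick_template` and the `"default"` fallback for the unhandled 7B case are illustrative placeholders, not FastChat's actual code):

```python
# Simplified sketch: the adapter lower-cases the model path and keys off
# size substrings, so this commit also routes 70B checkpoints to the
# vicuna_v1.1 conversation template.
def pick_template(model_path: str) -> str:
    model_path = model_path.lower()
    if "13b" in model_path or "30b" in model_path or "70b" in model_path:
        return "vicuna_v1.1"
    # The original leaves a TODO to use the recommended template for 7B;
    # "default" here is a placeholder for that branch.
    return "default"
```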
4 changes: 2 additions & 2 deletions fastchat/serve/api_provider.py

```diff
@@ -22,8 +22,8 @@ def openai_api_stream_iter(
 ):
     import openai
 
-    openai.api_base = api_base or openai.api_base
-    openai.api_key = api_key or openai.api_key
+    openai.api_base = api_base or "https://api.openai.com/v1"
+    openai.api_key = api_key or os.environ["OPENAI_API_KEY"]
 
     # Make requests
     gen_params = {
```
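The deleted lines were no-op self-assignments (`openai.api_base or openai.api_base` can never change the value); the replacement falls back to an explicit default endpoint and the environment. A minimal sketch of this argument-or-fallback idiom, assuming the same names as the diff (the wrapper function itself is hypothetical, and the real module must have `os` imported at module scope for the new line to work):

```python
import os

def resolve_openai_config(api_base=None, api_key=None):
    # Prefer the explicit argument; otherwise fall back to a hard-coded
    # default endpoint and the OPENAI_API_KEY environment variable.
    # (os.environ[...] raises KeyError when the variable is unset.)
    base = api_base or "https://api.openai.com/v1"
    key = api_key or os.environ["OPENAI_API_KEY"]
    return base, key
```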
4 changes: 2 additions & 2 deletions fastchat/serve/gradio_block_arena_anony.py

```diff
@@ -369,10 +369,10 @@ def bot_response_multi(
 
 def build_side_by_side_ui_anony(models):
     notice_markdown = """
-# ⚔️ Chatbot Arena ⚔️
+# ⚔️ Chatbot Arena ⚔️ : Benchmarking LLMs in the Wild
 ### Rules
 - Chat with two anonymous models side-by-side and vote for which one is better!
-- You can do multiple rounds of conversations before voting.
+- You can do multiple turns of conversations before voting.
 - The names of the models will be revealed after your vote. Conversations with identity keywords (e.g., ChatGPT, Bard, Vicuna) or any votes after the names are revealed will not count towards the leaderboard.
 - Click "Clear history" to start a new round.
 - | [Blog](https://lmsys.org/blog/2023-05-03-arena/) | [GitHub](https://github.com/lm-sys/FastChat) | [Paper](https://arxiv.org/abs/2306.05685) | [Twitter](https://twitter.com/lmsysorg) | [Discord](https://discord.gg/HSWAKCrnFx) |
```
4 changes: 2 additions & 2 deletions fastchat/serve/gradio_block_arena_named.py

```diff
@@ -298,11 +298,11 @@ def flash_buttons():
 
 def build_side_by_side_ui_named(models):
     notice_markdown = """
-# ⚔️ Chatbot Arena ⚔️
+# ⚔️ Chatbot Arena ⚔️ : Benchmarking LLMs in the Wild
 ### Rules
 - Chat with two models side-by-side and vote for which one is better!
 - You pick the models you want to chat with.
-- You can do multiple rounds of conversations before voting.
+- You can do multiple turns of conversations before voting.
 - Click "Clear history" to start a new round.
 - | [Blog](https://lmsys.org/blog/2023-05-03-arena/) | [GitHub](https://github.com/lm-sys/FastChat) | [Paper](https://arxiv.org/abs/2306.05685) | [Twitter](https://twitter.com/lmsysorg) | [Discord](https://discord.gg/HSWAKCrnFx) |
```
1 change: 1 addition & 0 deletions fastchat/serve/gradio_web_server.py

```diff
@@ -127,6 +127,7 @@ def get_model_list(
         models += ["claude-2", "claude-instant-1"]
     if add_palm:
         models += ["palm-2"]
+    models = list(set(models))
 
     priority = {k: f"___{i:02d}" for i, k in enumerate(model_info)}
     models.sort(key=lambda x: priority.get(x, x))
```
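The added `list(set(models))` drops duplicates (e.g. a model served by a worker and also enabled via an `--add-*` flag), and the existing sort immediately restores a deterministic order, so the arbitrary ordering of `set` does not leak into the UI. A self-contained sketch of the combined behavior, using a stand-in `model_info` registry (the real one lives elsewhere in FastChat):

```python
# Stand-in for FastChat's model_info registry; insertion order matters
# because earlier entries get lower priority keys and sort first.
model_info = {"gpt-4": {}, "claude-2": {}}

models = ["vicuna-13b", "claude-2", "gpt-4", "claude-2"]
models = list(set(models))  # the added line: drop duplicates (order arbitrary)

# "___00" < "___01" < any plain lowercase name, so registered models lead
# the list and unregistered ones fall back to alphabetical order.
priority = {k: f"___{i:02d}" for i, k in enumerate(model_info)}
models.sort(key=lambda x: priority.get(x, x))
print(models)  # ['gpt-4', 'claude-2', 'vicuna-13b']
```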
