fix default model
zchk0 committed Dec 29, 2024
1 parent a7a086c commit 5d7e3a8
Showing 3 changed files with 6 additions and 6 deletions.
4 changes: 2 additions & 2 deletions .env.example
@@ -24,7 +24,7 @@ ALLOWED_TELEGRAM_USER_IDS=USER_ID_1,USER_ID_2
 # ENABLE_TRANSCRIPTION=true
 # ENABLE_VISION=true
 # PROXY=http://localhost:8080
-# OPENAI_MODEL=gpt-3.5-turbo
+# OPENAI_MODEL=gpt-4o
 # OPENAI_BASE_URL=https://example.com/v1/
 # ASSISTANT_PROMPT="You are a helpful assistant."
 # SHOW_USAGE=false
@@ -54,5 +54,5 @@ ALLOWED_TELEGRAM_USER_IDS=USER_ID_1,USER_ID_2
 # TTS_PRICES=0.015,0.030
 # BOT_LANGUAGE=en
 # ENABLE_VISION_FOLLOW_UP_QUESTIONS="true"
-# VISION_MODEL="gpt-4-vision-preview"
+# VISION_MODEL="gpt-4o"
 # COINMARKETCAP_KEY="YOUR-API-KEY"
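Every entry in `.env.example` is commented out, so the bot falls back to its built-in defaults unless the variable is actually set. A minimal sketch of that lookup pattern, assuming `.env` is loaded with `python-dotenv` (the loading code is not part of this diff):

```python
import os

from dotenv import load_dotenv  # assumption: the bot loads .env via python-dotenv

load_dotenv()  # no-op if there is no .env file in the working directory

# Mirrors the defaults introduced by this commit.
model = os.environ.get('OPENAI_MODEL', 'gpt-4o')
vision_model = os.environ.get('VISION_MODEL', 'gpt-4o')
print(f'chat model: {model}, vision model: {vision_model}')
```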
6 changes: 3 additions & 3 deletions README.md
@@ -93,14 +93,14 @@ Check out the [Budget Manual](https://github.com/n3d1117/chatgpt-telegram-bot/di
 | `PROXY` | Proxy to be used for OpenAI and Telegram bot (e.g. `http://localhost:8080`) | - |
 | `OPENAI_PROXY` | Proxy to be used only for OpenAI (e.g. `http://localhost:8080`) | - |
 | `TELEGRAM_PROXY` | Proxy to be used only for Telegram bot (e.g. `http://localhost:8080`) | - |
-| `OPENAI_MODEL` | The OpenAI model to use for generating responses. You can find all available models [here](https://platform.openai.com/docs/models/) | `gpt-3.5-turbo` |
+| `OPENAI_MODEL` | The OpenAI model to use for generating responses. You can find all available models [here](https://platform.openai.com/docs/models/) | `gpt-4o` |
 | `OPENAI_BASE_URL` | Endpoint URL for unofficial OpenAI-compatible APIs (e.g., LocalAI or text-generation-webui) | Default OpenAI API URL |
 | `ASSISTANT_PROMPT` | A system message that sets the tone and controls the behavior of the assistant | `You are a helpful assistant.` |
 | `SHOW_USAGE` | Whether to show OpenAI token usage information after each response | `false` |
 | `STREAM` | Whether to stream responses. **Note**: incompatible, if enabled, with `N_CHOICES` higher than 1 | `true` |
 | `MAX_TOKENS` | Upper bound on how many tokens the ChatGPT API will return | `1200` for GPT-3, `2400` for GPT-4 |
-| `VISION_MAX_TOKENS` | Upper bound on how many tokens vision models will return | `300` for gpt-4-vision-preview |
-| `VISION_MODEL` | The Vision to Speech model to use. Allowed values: `gpt-4-vision-preview` | `gpt-4-vision-preview` |
+| `VISION_MAX_TOKENS` | Upper bound on how many tokens vision models will return | `300` for gpt-4o |
+| `VISION_MODEL` | The Vision to Speech model to use. Allowed values: `gpt-4o` | `gpt-4o` |
 | `ENABLE_VISION_FOLLOW_UP_QUESTIONS` | If true, once you send an image to the bot, it uses the configured VISION_MODEL until the conversation ends. Otherwise, it uses the OPENAI_MODEL to follow the conversation. Allowed values: `true` or `false` | `true` |
 | `MAX_HISTORY_SIZE` | Max number of messages to keep in memory, after which the conversation will be summarised to avoid excessive token usage | `15` |
 | `MAX_CONVERSATION_AGE_MINUTES` | Maximum number of minutes a conversation should live since the last message, after which the conversation will be reset | `180` |
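The `VISION_MODEL` and `ENABLE_VISION_FOLLOW_UP_QUESTIONS` rows above describe a model-selection rule rather than a single value. The function below is purely illustrative (its name and structure are not taken from the bot's code); it only restates the behaviour the table describes:

```python
import os


def pick_model(conversation_has_image: bool) -> str:
    """Illustrative only: choose a model per the README description.

    If the conversation already contains an image and vision follow-up
    questions are enabled, keep using VISION_MODEL; otherwise use
    OPENAI_MODEL for the rest of the conversation.
    """
    follow_up = os.environ.get('ENABLE_VISION_FOLLOW_UP_QUESTIONS', 'true').lower() == 'true'
    if conversation_has_image and follow_up:
        return os.environ.get('VISION_MODEL', 'gpt-4o')
    return os.environ.get('OPENAI_MODEL', 'gpt-4o')
```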
2 changes: 1 addition & 1 deletion bot/main.py
@@ -27,7 +27,7 @@ def main():
         exit(1)
 
     # Setup configurations
-    model = os.environ.get('OPENAI_MODEL', 'gpt-3.5-turbo')
+    model = os.environ.get('OPENAI_MODEL', 'gpt-4o')
     functions_available = are_functions_available(model=model)
     max_tokens_default = default_max_tokens(model=model)
     openai_config = {
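The hunk above also passes the resolved model to `default_max_tokens(model=model)`, whose body is outside this diff. As a rough illustration only, using the defaults quoted in the README table (`1200` for GPT-3-era models, `2400` for GPT-4-family models, so the new `gpt-4o` default lands in the second bucket):

```python
def default_max_tokens(model: str) -> int:
    # Illustrative reimplementation only; the real helper lives elsewhere
    # in the repository. Values follow the README's MAX_TOKENS defaults.
    return 2400 if model.startswith('gpt-4') else 1200


assert default_max_tokens('gpt-4o') == 2400
```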
