feat: create a simple chat interface for all supported providers #31
billybonks wants to merge 1 commit into main from
Conversation
This helps us easily test with different providers.
Reviewed Changes
Managerbot reviewed 5 out of 9 changed files in this pull request and generated 11 comments.
Files not reviewed (4)
- Gemfile
- Gemfile.lock
- sample/simple_chat/.env-sample
- sample/simple_chat/README.md
Comments suppressed due to low confidence (0)
begin
  response = LlmGateway::Client.chat(@model, @conversation_history, api_key: @api_key)
Consider handling API response structure variations more robustly. The current nested if-else structure to extract content is fragile and may break with API changes.
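A defensive extraction helper is one way to address this. A minimal sketch, assuming hypothetical response shapes (the field names here are illustrations, not the gateway's documented schema):

```ruby
# Hypothetical sketch: probe several possible response shapes with Hash#dig,
# which returns nil instead of raising when a level is missing.
# The key names ('content', 'choices', 'text') are assumptions.
def extract_content(response)
  fallback = "No response received from the model"
  return fallback if response.nil?

  response.dig('content', 0, 'text') ||
    response.dig('choices', 0, 'message', 'content') ||
    response['text'] ||
    fallback
end
```

Each new provider shape then becomes one more `dig` clause instead of another level of nested if-else.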
else
  "No response received from the model"
end
rescue => e
The generic rescue clause will catch all exceptions. Consider rescuing specific exceptions to avoid masking unexpected errors.
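For example, a parse step can rescue only the error class it can actually recover from, letting anything unexpected propagate. A sketch using the stdlib JSON error (the gateway's own error hierarchy is not shown in this diff, so it would be rescued analogously):

```ruby
require 'json'

# Rescue only the specific, recoverable error; NoMethodError, Interrupt,
# etc. still propagate instead of being swallowed by a bare `rescue`.
def safe_parse(raw)
  JSON.parse(raw)
rescue JSON::ParserError => e
  { 'error' => "Invalid response: #{e.message}" }
end
```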
def get_api_key_for_model(model)
  if model.include?('claude')
    ENV['ANTHROPIC_API_KEY']
  elsif model.include?('llama') || model.include?('meta-llama') || model.include?('openai/gpt-oss')
There's duplicate logic between this method and the load_configuration method in SimpleChatRunner. Consider extracting this to a shared utility.
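One possible shape for that shared utility (module name and the non-Anthropic variable names are assumptions; only the `claude` → `ANTHROPIC_API_KEY` mapping is visible in the diff):

```ruby
# Hypothetical shared helper so ChatClient and SimpleChatRunner resolve
# keys through one code path. Non-Anthropic env var names are placeholders.
module ApiKeyResolver
  def self.env_var_for(model)
    if model.include?('claude')
      'ANTHROPIC_API_KEY'
    elsif model.include?('llama') || model.include?('meta-llama') || model.include?('openai/gpt-oss')
      'LLAMA_PROVIDER_API_KEY' # assumption: placeholder name
    else
      'OPENAI_API_KEY'         # assumption: placeholder name
    end
  end

  def self.key_for(model)
    ENV[env_var_for(model)]
  end
end
```

Both call sites would then use `ApiKeyResolver.key_for(model)`.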
@@ -0,0 +1,56 @@
require 'dotenv/load'
require_relative 'simple_chat'
require 'debug'
The 'debug' gem is required but not used in the code. Consider removing it if not needed.
def load_configuration
  model = ENV['DEFAULT_MODEL'] || 'claude-opus-4-20250514'

  api_key = if model.include?('claude')
This API key detection logic is duplicated in chat_client.rb. Consider refactoring to avoid duplication.
  handle_down_key
when 10, 13 # Enter key
  handle_enter_key
when Curses::KEY_BACKSPACE, 127, 8
Backspace handling should account for different backspace key codes (8, 127, KEY_BACKSPACE) but the implementation only removes one character regardless of which key was pressed.
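A small predicate makes the normalization explicit. A sketch (in the app it would also include `Curses::KEY_BACKSPACE`, which varies by terminal and is omitted here to stay self-contained):

```ruby
# Treat the common backspace codes identically: 127 (DEL) and 8 (^H).
BACKSPACE_CODES = [127, 8].freeze

def backspace?(ch)
  BACKSPACE_CODES.include?(ch)
end
```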
@messages << { role: 'user', content: message }

begin
  response = @client.send_message(message)
Consider showing a loading indicator while waiting for the API response, as LLM responses can take several seconds to return.
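One way to do this without blocking the UI is to run the call on a thread and animate until it finishes. A minimal terminal sketch, independent of the Curses UI in this PR (the Curses version would redraw a status line instead of using `print`):

```ruby
# Run the blocking block on a worker thread; spin on Thread#join with a
# short timeout (it returns nil until the thread finishes) and animate.
def with_spinner
  result = nil
  worker = Thread.new { result = yield }
  frames = %w[| / - \\]
  i = 0
  until worker.join(0.1)
    print "\rWaiting #{frames[i % frames.size]}"
    i += 1
  end
  print "\r"
  result
end
```

Usage: `response = with_spinner { @client.send_message(message) }`.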
def draw_messages(y, x, height, width)
  return if @messages.empty?

  visible_messages = @messages.last([ height, @messages.length ].min)
This implementation only shows the most recent messages that fit on screen. Consider implementing proper scrolling to view message history.
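A scroll offset is enough for this: track how many messages the user has scrolled up from the bottom and slice a window relative to that. A sketch (the `scroll_offset` state and its key bindings are hypothetical additions):

```ruby
# scroll_offset = 0 means pinned to the newest messages; each "scroll up"
# increments it. Slice a height-sized window ending offset items from the end.
def visible_window(messages, height, scroll_offset)
  last  = [messages.length - scroll_offset, 0].max # exclusive end of window
  first = [last - height, 0].max
  messages[first...last] || []
end
```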
  prefix = "Error: "
end

content = (prefix + message[:content]).slice(0, width)
Long messages will be truncated without any indication. Consider implementing word wrapping or a scrolling mechanism for long messages.
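A greedy word-wrap helper would replace the `slice(0, width)` truncation, returning one string per screen row. A sketch:

```ruby
# Greedy word wrap: pack words onto the current line while they fit in
# `width` columns (including the joining space), else start a new line.
def wrap_text(text, width)
  lines = [String.new]
  text.split(/\s+/).each do |word|
    if lines.last.empty?
      lines.last << word
    elsif lines.last.length + 1 + word.length <= width
      lines.last << ' ' << word
    else
      lines << word.dup
    end
  end
  lines
end
```

`draw_messages` would then render each wrapped line on its own row (and count wrapped rows against `height`).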
Curses.setpos(y + 2 + i, x)
if i == @dropdown_index
  Curses.attron(Curses.color_pair(3)) # Highlight
end
The dropdown only shows the first 10 models regardless of which 10 are visible. Consider implementing proper scrolling in the dropdown menu when there are more items than can be displayed.
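The usual fix is to window the list around the selected index so the visible slice follows the cursor. A sketch (method name and the rendering integration are hypothetical):

```ruby
# Return the max_visible-item slice that keeps `selected` on screen,
# clamped to the list bounds so the window never runs past either end.
def dropdown_window(items, selected, max_visible = 10)
  start = [selected - max_visible + 1, 0].max
  start = [start, items.length - max_visible].min if items.length > max_visible
  items[start, max_visible] || []
end
```

The draw loop would iterate over this slice and highlight `selected - start` instead of `@dropdown_index` directly.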