feat: LiteLLM integration, tool system, and TUI UX improvements #20
Conversation
…g sanitization Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com> Signed-off-by: Zachary BENSALEM <zachary@qredence.ai>
Caution: Review failed (failed to post review comments).

Summary by CodeRabbit

📝 Walkthrough

This PR introduces LiteLLM provider support, a Coding Agent tools framework with file execution and permission controls, async file mention formatting, and restructured provider/LLM configuration. It updates CLI commands, settings UI, streaming logic, and adds comprehensive test coverage for new features. Multiple planning documents in …
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User as User
    participant CLI as CLI/index.tsx
    participant Parser as Tool Parser
    participant Permission as Permission Handler
    participant Executor as Tool Executor
    participant LLM as LLM Service
    participant FileSystem as File System
    User->>CLI: Submit message (tools enabled)
    CLI->>LLM: formatInput + buildToolSystemPrompt
    LLM->>CLI: streaming response with tool calls
    CLI->>Parser: parseToolCalls(response)
    Parser->>CLI: return ToolCall[]
    loop For each tool call
        CLI->>Permission: resolveToolPermission(call, settings)
        alt Permission = deny
            Permission->>CLI: reject tool call
        else Permission = ask
            Permission->>CLI: prompt user for confirmation
            User->>CLI: approve/deny
        else Permission = allow
            Permission->>CLI: auto-approve
        end
        alt Approved
            CLI->>Executor: executeToolCall(call, cwd, limits)
            Executor->>FileSystem: read_file/write_file/list_dir/run_command
            FileSystem->>Executor: result with output/error
            Executor->>CLI: return ToolResult
            CLI->>CLI: append tool result to history
        end
    end
    CLI->>LLM: continueWithToolResults(history)
    LLM->>CLI: final response
    CLI->>User: display message
```
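For orientation, here is a minimal TypeScript sketch of the same loop. The function names mirror the diagram labels, but the signatures and types are assumptions for illustration, not the actual exports of src/index.tsx or src/tools/.

```typescript
// Sketch only: names follow the diagram above; shapes are assumed, not the real module API.
type ToolCall = { tool: "read_file" | "write_file" | "list_dir" | "run_command"; args: Record<string, string> };
type ToolResult = { output?: string; error?: string };
type Permission = "allow" | "ask" | "deny";
type HistoryEntry = { role: "assistant" | "tool"; content: unknown };

interface ToolLoopDeps {
  parseToolCalls(response: string): ToolCall[];
  resolveToolPermission(call: ToolCall, settings: unknown): Permission;
  confirmWithUser(call: ToolCall): Promise<boolean>;
  executeToolCall(call: ToolCall, cwd: string, limits: { maxOutputBytes: number }): Promise<ToolResult>;
  continueWithToolResults(history: HistoryEntry[]): Promise<string>;
}

async function runToolLoop(
  response: string,
  settings: unknown,
  cwd: string,
  history: HistoryEntry[],
  deps: ToolLoopDeps,
): Promise<string> {
  for (const call of deps.parseToolCalls(response)) {
    const permission = deps.resolveToolPermission(call, settings);
    if (permission === "deny") continue;                                       // rejected outright
    if (permission === "ask" && !(await deps.confirmWithUser(call))) continue; // user declined
    const result = await deps.executeToolCall(call, cwd, { maxOutputBytes: 64_000 });
    history.push({ role: "tool", content: { call, result } });                 // append tool result to history
  }
  return deps.continueWithToolResults(history);                                // model produces the final reply
}
```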
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
Pre-merge checks: ❌ 1 warning, ✅ 2 passed.
Summary of Changes

Hello @Zochory, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request significantly upgrades the application's capabilities by integrating LiteLLM for more versatile LLM provider management and introducing a powerful, permission-controlled tool execution system for coding agents. These core enhancements are complemented by numerous user experience improvements across the TUI, making the interface more intuitive and informative, and laying the groundwork for more advanced agentic workflows.
Code Review
This is an impressive pull request that introduces significant new functionality and improves the user experience. The integration of LiteLLM as the default provider is a great move towards flexibility. The new tool execution system is comprehensive and well-designed, with important security considerations like permission controls and path restrictions. The TUI has been greatly enhanced with a much-improved settings menu, better suggestion lists, and more informative status updates. The code is well-refactored into new llm and tools modules, and the addition of tests for the new logic is commendable. I have one minor suggestion regarding some unreachable code.
src/commandHandlers.ts
Outdated
```typescript
const msg = `${promptMessage}\nCurrent: ${masked}\nUsage: /${settingKey} <value>`;
return { systemMessage: { id: generateUniqueId(), role: "system", content: msg, timestamp: new Date() } };
```
Pull request overview
This PR introduces LiteLLM as the default LLM provider, implements a comprehensive tool execution system for coding agents, and delivers significant UX improvements across the terminal UI. The changes enable qlaw-cli to work with multiple model providers through a unified interface, execute file system and shell operations with permission controls, and provide a more intuitive user experience.
Key Changes:
- LiteLLM Integration: New provider resolution system with auto-detection, LiteLLM-style model identifiers (e.g., `openai/gpt-4o-mini`), and environment variable configuration (see the sketch after this list)
- Tool System: Complete implementation of `read_file`, `list_dir`, `write_file`, and `run_command` tools with permission policies (allow/ask/deny) and safety checks for external directories and doom loops
- Enhanced UX: Improved settings menu with select prompts, better suggestion list with match highlighting, async `@file` mention handler with content inlining, and updated status line showing provider/tools status
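As a rough illustration of what a LiteLLM-style identifier encodes, the sketch below splits the provider prefix from the model name. The helper name and return shape are assumptions for illustration, not the actual src/llm/config.ts API.

```typescript
// Hypothetical helper: split a LiteLLM-style id such as "openai/gpt-4o-mini"
// into a provider prefix and a bare model name. Not the real src/llm/config.ts code.
function splitModelId(modelId: string): { provider?: string; model: string } {
  const slash = modelId.indexOf("/");
  if (slash === -1) return { model: modelId };  // no explicit provider prefix
  return {
    provider: modelId.slice(0, slash),          // "openai"
    model: modelId.slice(slash + 1),            // "gpt-4o-mini"
  };
}

console.log(splitModelId("openai/gpt-4o-mini")); // { provider: "openai", model: "gpt-4o-mini" }
console.log(splitModelId("gpt-4o-mini"));        // { model: "gpt-4o-mini" }
```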
Reviewed changes
Copilot reviewed 55 out of 56 changed files in this pull request and generated 21 comments.
| File | Description |
|---|---|
| `src/llm/config.ts` | New module for LLM provider resolution, authentication headers, and endpoint URL handling |
| `src/llm/models.ts` | Model list parsing supporting JSON arrays and comma/newline-separated formats |
| `src/llm/providerDefaults.ts` | Provider-specific default value application when switching providers |
| `src/llm/input.ts` | Extracted prompt building logic for the Responses API |
| `src/tools/index.ts` | Tool execution engine with file/directory/command operations and safety limits |
| `src/tools/permissions.ts` | Permission resolution system with external directory and doom loop detection |
| `src/tools/prompt.ts` | System prompt generation for tool-aware conversations |
| `src/mentionHandlers.ts` | Async @file mention handler that reads and inlines file contents |
| `src/index.tsx` | Tool queue management, execution loop, and integration with streaming responses |
| `src/services/streamingService.ts` | Updated to use new LLM config resolver and pass settings through |
| `src/hooks/useStreaming.ts` | Added onComplete callback for tool detection after streaming |
| `src/hooks/useSettings.ts` | Enhanced with provider/tools settings and select-style prompts |
| `src/hooks/useKeyboardShortcuts.ts` | Added left/right navigation for settings sections |
| `src/components/SuggestionList.tsx` | Match highlighting, action hints, and expanded detail panel |
| `src/components/StatusLine.tsx` | Shows provider, model, and tools status |
| `src/components/SettingsMenu.tsx` | Tabbed sections with detail panel and improved navigation |
| `src/components/PromptOverlay.tsx` | Added select prompt type with option list |
| `src/components/MessageList.tsx` | Multi-segment rendering for mixed text and code blocks |
| `src/components/InputArea.tsx` | Added isFocused prop for better focus management |
| `src/commands.ts` | Added /provider and /tools commands with metadata |
| `src/commandHandlers.ts` | Select-based value pickers, provider switching, and tool permission management |
| `tests/*.test.ts` | Comprehensive test coverage for tools, permissions, mentions, models, and LLM config |
| `docs/API-INTEGRATION.md` | LiteLLM proxy documentation and configuration examples |
| `docs/ARCHITECTURE.md` | Architecture overview including new LLM and tools modules |
| `README.md` | Updated with LiteLLM examples, provider commands, and tool documentation |
| `AGENTS.md` | Tool system documentation and usage examples |
| `package.json` | Dependency updates (@OpenTui 0.1.63, react 19.2.3, @types/bun 1.3.5) |
| `bridge/requirements.txt` | Updated Python dependencies to latest versions |
| `bridge/run.sh` | LiteLLM environment variable fallback support |
| `.env.example` | Added LiteLLM configuration variables |
src/commandHandlers.ts
Outdated
```typescript
const msg = `${promptMessage}\nCurrent: ${masked}\nUsage: /${settingKey} <value>`;
return { systemMessage: { id: generateUniqueId(), role: "system", content: msg, timestamp: new Date() } };
```
Copilot AI · Dec 28, 2025
The handleSettingCommand function has unreachable code. Lines 459-460 define a message and return statement that can never be executed because all code paths before this point either return early or open a prompt. This dead code should be removed to improve code maintainability.
```diff
-    const msg = `${promptMessage}\nCurrent: ${masked}\nUsage: /${settingKey} <value>`;
-    return { systemMessage: { id: generateUniqueId(), role: "system", content: msg, timestamp: new Date() } };
```
```typescript
export function isExternalPath(path: string, cwd: string): boolean {
  const resolved = resolve(cwd, path);
  const root = resolve(cwd);
  const rel = relative(root, resolved);
  if (!rel) return false;
  return rel.startsWith("..") || isAbsolute(rel);
}
```
Copilot AI · Dec 28, 2025
The isExternalPath function has inconsistent behavior. When relative(root, resolved) returns an empty string (meaning the paths are identical), the function returns false. However, the subsequent check rel.startsWith("..") could be true for legitimate relative paths like "../sibling", but then isAbsolute(rel) is checked which would be false for such paths. The logic seems correct but could be clearer. Consider adding a comment explaining the empty string case, or restructuring to: return !rel || rel.startsWith("..") || isAbsolute(rel);
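If the first option (a clarifying comment, keeping the current behavior where the working directory itself is not treated as external) were taken, it could look roughly like this sketch:

```typescript
import { isAbsolute, relative, resolve } from "node:path";

export function isExternalPath(path: string, cwd: string): boolean {
  const resolved = resolve(cwd, path);
  const root = resolve(cwd);
  const rel = relative(root, resolved);
  // An empty relative path means `path` resolves to the working directory
  // itself, which is treated as inside the workspace, not external.
  if (!rel) return false;
  // Anything that escapes the root ("../sibling") or lands on a different
  // drive/absolute location (relative() can return an absolute path on
  // Windows across drives) counts as external.
  return rel.startsWith("..") || isAbsolute(rel);
}
```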
```typescript
const lower = label.toLowerCase();
const q = match.toLowerCase();
const idx = lower.indexOf(q);
if (idx === -1) return [label, "", ""];
return [label.slice(0, idx), label.slice(idx, idx + match.length), label.slice(idx + match.length)];
```
Copilot AI · Dec 28, 2025
The splitMatch function performs case-insensitive matching by converting both strings to lowercase, but then slices the original label using indices from the lowercase version. This works fine for ASCII but could produce incorrect results for Unicode characters where the lowercase version has a different byte length than the original (e.g., Turkish İ/i). Consider using String.prototype.toLowerCase() consistently or using a case-insensitive search that preserves original indices.
```diff
-  const lower = label.toLowerCase();
-  const q = match.toLowerCase();
-  const idx = lower.indexOf(q);
-  if (idx === -1) return [label, "", ""];
-  return [label.slice(0, idx), label.slice(idx, idx + match.length), label.slice(idx + match.length)];
+  const q = match.toLowerCase();
+  const labelLen = label.length;
+  const matchLen = match.length;
+  let idx = -1;
+  for (let i = 0; i <= labelLen - matchLen; i++) {
+    const segment = label.slice(i, i + matchLen);
+    if (segment.toLowerCase() === q) {
+      idx = i;
+      break;
+    }
+  }
+  if (idx === -1) return [label, "", ""];
+  return [
+    label.slice(0, idx),
+    label.slice(idx, idx + matchLen),
+    label.slice(idx + matchLen),
+  ];
```
```typescript
applyValue: (value, ctx) => {
  if (value === AUTO_SELECT_VALUE) {
    ctx.setSettings((prev) => ({ ...prev, provider: undefined }));
    return;
  }
  ctx.setSettings((prev) => applyProviderDefaults(prev, value));
},
}
```
Copilot AI · Dec 28, 2025
The AUTO_SELECT_VALUE constant is defined but the auto-detection logic isn't fully implemented. When the user selects "Auto (detect)", the provider is set to undefined (line 501), but the resolveLlmConfig function's inference logic (line 50-58) will default to "litellm" rather than truly detecting based on endpoint patterns. This creates inconsistent behavior between "Auto" and explicitly selecting "litellm". Either fix the inference logic to properly detect, or update the UI description to clarify that "Auto" defaults to LiteLLM.
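For reference, endpoint-based detection along the lines the comment suggests could look like the sketch below; the endpoint patterns and provider names are assumptions, not the current resolveLlmConfig logic.

```typescript
// Hypothetical endpoint-based provider detection. Not the current resolveLlmConfig
// behavior, which the review notes defaults to "litellm" when no provider is set.
function inferProvider(endpoint?: string): string {
  if (!endpoint) return "litellm";                                // nothing to detect from
  if (endpoint.includes("openai.azure.com")) return "azure";
  if (endpoint.includes("api.openai.com")) return "openai";
  if (endpoint.includes("api.anthropic.com")) return "anthropic";
  return "litellm";                                               // proxies and unknown hosts
}
```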
```typescript
function guessLanguage(path: string): string {
  const ext = extname(path).toLowerCase().replace(".", "");
  if (!ext) return "text";
  if (ext === "ts") return "typescript";
  if (ext === "tsx") return "tsx";
  if (ext === "js") return "javascript";
  if (ext === "jsx") return "jsx";
  if (ext === "py") return "python";
  if (ext === "json") return "json";
  if (ext === "md") return "markdown";
  if (ext === "sh") return "bash";
  return ext;
}
```
Copilot AI · Dec 28, 2025
The guessLanguage function doesn't handle multi-part extensions (e.g., .test.ts, .config.js). It will return "ts" for both .ts and .test.ts, which is correct, but the function doesn't handle cases like .d.ts which might benefit from specific handling. Consider adding support for common multi-part extensions or documenting this limitation.
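If multi-part extensions were to get explicit handling, a thin wrapper ahead of the single-extension lookup would be enough; the sketch below is illustrative only and not part of the PR.

```typescript
import { extname } from "node:path";

// Illustrative only: handle a common multi-part extension before the
// single-extension lookup used by guessLanguage above.
function guessLanguageWithMultiPart(path: string): string {
  if (path.toLowerCase().endsWith(".d.ts")) return "typescript"; // declaration files
  const ext = extname(path).toLowerCase().replace(".", "");      // same fallback idea as guessLanguage
  return ext === "ts" ? "typescript" : ext || "text";
}
```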
```diff
 function handleSettingCommand(
-  settingKey: "model" | "endpoint" | "apiKey" | "afBridgeBaseUrl" | "afModel" | "foundryEndpoint",
+  settingKey: "model" | "provider" | "endpoint" | "apiKey" | "afBridgeBaseUrl" | "afModel" | "foundryEndpoint",
   promptMessage: string,
   confirmationFormatter: (value: string) => string,
   args: string | undefined,
   context: CommandContext,
-  shouldMaskValue: boolean = false
+  options?: {
+    shouldMaskValue?: boolean;
+    placeholder?: string;
+    applyValue?: (value: string, context: CommandContext) => void;
+    selectOptions?: PromptSelectOption[];
+  }
 ): CommandResult {
-  const { settings, setSettings } = context;
+  const { settings, setSettings, setMessages, setPrompt } = context;
   const val = args?.trim();
 
+  const applyValue =
+    options?.applyValue ||
+    ((value: string, ctx: CommandContext) => {
+      ctx.setSettings((prev) => ({ ...prev, [settingKey]: value }));
+    });
 
   if (val) {
-    setSettings((prev) => ({ ...prev, [settingKey]: val }));
+    applyValue(val, context);
     const systemMsg: Message = {
       id: generateUniqueId(),
       role: "system",
       content: confirmationFormatter(val),
       timestamp: new Date(),
     };
     return { systemMessage: systemMsg };
-  } else {
-    const currentValue = settings[settingKey] || "Not set";
-    const masked = shouldMaskValue && typeof currentValue === "string"
-      ? (currentValue ? "***" + currentValue.slice(-4) : "Not set")
-      : currentValue;
-    const msg = `${promptMessage}\nCurrent: ${masked}\nUsage: /${settingKey} <value>`;
-    return { systemMessage: { id: generateUniqueId(), role: "system", content: msg, timestamp: new Date() } };
   }
 
+  const storedValue = settings[settingKey];
+  const displayValue = storedValue || "Not set";
+  const masked =
+    options?.shouldMaskValue && typeof displayValue === "string"
+      ? displayValue
+        ? "***" + displayValue.slice(-4)
+        : "Not set"
+      : displayValue;
 
+  const openInputPrompt = () => {
+    if (!setPrompt) return;
+    const defaultValue =
+      options?.shouldMaskValue || typeof storedValue !== "string"
+        ? ""
+        : storedValue;
+    setPrompt({
+      type: "input",
+      message: promptMessage,
+      defaultValue,
+      placeholder: options?.placeholder || "Type and press Enter",
+      onConfirm: (value: string) => {
+        const trimmed = value.trim();
+        if (trimmed) {
+          applyValue(trimmed, context);
+          if (setMessages) {
+            const systemMsg: Message = {
+              id: generateUniqueId(),
+              role: "system",
+              content: confirmationFormatter(trimmed),
+              timestamp: new Date(),
+            };
+            setMessages((prev) => [...prev, systemMsg]);
+          }
+        }
+        setPrompt(null);
+      },
+      onCancel: () => setPrompt(null),
+    });
+  };
 
+  if (setPrompt && options?.selectOptions && options.selectOptions.length > 0) {
+    const selectedIndex = selectIndexForValue(options.selectOptions, storedValue);
+    setPrompt({
+      type: "select",
+      message: promptMessage,
+      options: options.selectOptions,
+      selectedIndex,
+      onSelect: (option) => {
+        if (option.value === CUSTOM_SELECT_VALUE) {
+          openInputPrompt();
+          return;
+        }
+        const resolvedValue = option.value ?? option.name;
+        if (resolvedValue) {
+          applyValue(resolvedValue, context);
+          if (setMessages) {
+            const systemMsg: Message = {
+              id: generateUniqueId(),
+              role: "system",
+              content: confirmationFormatter(resolvedValue),
+              timestamp: new Date(),
+            };
+            setMessages((prev) => [...prev, systemMsg]);
+          }
+        }
+        setPrompt(null);
+      },
+      onCancel: () => setPrompt(null),
+    });
+    return { shouldReturn: true };
+  }
 
+  openInputPrompt();
+  return { shouldReturn: true };
 
+  const msg = `${promptMessage}\nCurrent: ${masked}\nUsage: /${settingKey} <value>`;
+  return { systemMessage: { id: generateUniqueId(), role: "system", content: msg, timestamp: new Date() } };
 }
```
Copilot AI · Dec 28, 2025
The command handler pattern with select options creates deeply nested promise chains and closures. The handleSettingCommand function now has multiple levels of callbacks (openInputPrompt, setPrompt with onSelect, applyValue) which makes the control flow hard to follow and could lead to state management issues. Consider refactoring this into a more linear state machine or extracting the prompt handling into a separate hook that manages the prompt lifecycle.
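One shape such an extraction could take is a small hook that owns the prompt lifecycle and exposes a promise-based ask, letting handleSettingCommand read linearly. This is only a sketch with assumed names and types, not a proposal for the exact API.

```typescript
// Hypothetical hook owning the prompt lifecycle; names and types are assumptions.
import { useCallback, useRef, useState } from "react";

type PromptSpec =
  | { type: "input"; message: string; defaultValue?: string; placeholder?: string }
  | { type: "select"; message: string; options: { name: string; value?: string }[] };

export function usePromptFlow() {
  const [prompt, setPrompt] = useState<PromptSpec | null>(null);
  const resolver = useRef<((value: string | null) => void) | null>(null);

  // Ask once and await the answer instead of nesting onConfirm/onSelect callbacks.
  const ask = useCallback((spec: PromptSpec) => {
    setPrompt(spec);
    return new Promise<string | null>((resolve) => {
      resolver.current = resolve;
    });
  }, []);

  // Called by the overlay with the chosen value, or null on cancel.
  const settle = useCallback((value: string | null) => {
    resolver.current?.(value);
    resolver.current = null;
    setPrompt(null);
  }, []);

  return { prompt, ask, settle };
}
```

A caller could then write `const value = await ask({ type: "input", message: promptMessage })` and handle the result in straight-line code, with `settle` wired to the overlay's confirm and cancel events.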
```typescript
useEffect(() => {
  if (!toolQueue && toolQueueIndex > 0 && !prompt) {
    setToolQueueIndex(0);
    continueWithToolResults();
  }
}, [toolQueue, toolQueueIndex, prompt, continueWithToolResults]);
```
Copilot AI · Dec 28, 2025
The continueWithToolResults callback is included in the useEffect dependency array at line 291, but this callback itself depends on many other values (settings, mode, isProcessing, etc.) and is recreated on every render due to the dependencies in its useCallback. This could cause unnecessary effect re-runs. Consider either: (1) memoizing the callback with stable dependencies, (2) restructuring the logic to avoid this circular dependency, or (3) using a ref to store the callback if it doesn't need to trigger re-renders.
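The third option (storing the callback in a ref) is a common React pattern; the generic sketch below is not tied to the actual continueWithToolResults signature or the real state names.

```typescript
import { useEffect, useRef } from "react";

// "Latest callback" pattern: the effect reads the ref, so the recreated
// callback does not need to appear in the dependency array.
function useLatest<T>(value: T) {
  const ref = useRef(value);
  useEffect(() => {
    ref.current = value;
  });
  return ref;
}

// Usage sketch; toolQueue/toolQueueIndex/prompt stand in for the real state.
function useToolContinuation(
  toolQueue: unknown,
  toolQueueIndex: number,
  prompt: unknown,
  continueWithToolResults: () => void,
) {
  const continueRef = useLatest(continueWithToolResults);
  useEffect(() => {
    if (!toolQueue && toolQueueIndex > 0 && !prompt) {
      continueRef.current();
    }
  }, [toolQueue, toolQueueIndex, prompt]); // callback intentionally omitted; read via ref
}
```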
```typescript
function splitIntoSegments(content: string): Segment[] {
  const segments: Segment[] = [];
  const regex = /```(\w+)?(?:\n|\s)([\s\S]*?)```/g;
```
Copilot AI · Dec 28, 2025
The code block regex pattern /```(\w+)?(?:\n|\s)([\s\S]*?)```/g requires either a newline or space after the language identifier, which means code blocks formatted as ```json{"foo":"bar"}``` (without whitespace) won't be matched. While this might be intentional to enforce proper formatting, it could lead to unexpected behavior where valid markdown code blocks aren't rendered as code. Consider making the whitespace optional: /```(\w+)?\s*([\s\S]*?)```/g
```diff
-  const regex = /```(\w+)?(?:\n|\s)([\s\S]*?)```/g;
+  const regex = /```(\w+)?\s*([\s\S]*?)```/g;
```
src/commandHandlers.ts
Outdated
```diff
   }
 ): CommandResult {
-  const { settings, setSettings } = context;
+  const { settings, setSettings, setMessages, setPrompt } = context;
```
Copilot AI · Dec 28, 2025
Unused variable setSettings.
```diff
-  const { settings, setSettings, setMessages, setPrompt } = context;
+  const { settings, setMessages, setPrompt } = context;
```
```typescript
const masked =
  options?.shouldMaskValue && typeof displayValue === "string"
    ? displayValue
      ? "***" + displayValue.slice(-4)
      : "Not set"
    : displayValue;
```
Copilot AI · Dec 28, 2025
The initial value of masked is unused, since it is always overwritten.
Resolved conflicts in src/commandHandlers.ts by accepting incoming changes from main branch (PR #17). 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude <noreply@anthropic.com>
Description
This PR introduces LiteLLM as the default LLM provider, implements a comprehensive tool execution system for coding agents, and delivers significant UX improvements across the TUI.
Type of Change
Changes Made
- `read_file`, `list_dir`, `write_file`, `run_command` tools with permission controls
- `src/llm/` modules for provider resolution, model parsing, and defaults
- `@file` mention with file content inlining (see the sketch below)
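A rough picture of what the `@file` inlining could produce is sketched below; the exact output format of src/mentionHandlers.ts is not shown on this page, so the wrapper text here is an assumption.

```typescript
import { readFile } from "node:fs/promises";
import { join } from "node:path";

// Hypothetical expansion of an "@path/to/file" mention: read the file and
// inline its contents under a small header. The PR's handler reportedly wraps
// contents in a fenced code block with a guessed language; the plain header
// here is a stand-in so the sketch stays self-contained.
async function inlineFileMention(mention: string, cwd: string): Promise<string> {
  const relPath = mention.replace(/^@/, "");
  const content = await readFile(join(cwd, relPath), "utf8");
  return [`Contents of ${relPath}:`, content].join("\n");
}

// Usage: await inlineFileMention("@src/index.tsx", process.cwd());
```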
Testing

Test Configuration
Test Cases
- (`bun run typecheck`)
- (`bun run test`)

Checklist
Additional Notes
- `LITELLM_MODELS` environment variable
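A minimal sketch of parsing such a list, consistent with the "JSON array or comma/newline-separated" behavior described for src/llm/models.ts (the function name and exact semantics are assumed, not the module's real export):

```typescript
// Hypothetical parser for LITELLM_MODELS: accepts a JSON array or a
// comma/newline-separated string of model identifiers.
function parseModelList(raw: string | undefined): string[] {
  if (!raw || !raw.trim()) return [];
  const trimmed = raw.trim();
  if (trimmed.startsWith("[")) {
    try {
      const parsed = JSON.parse(trimmed);
      if (Array.isArray(parsed)) return parsed.map(String).filter(Boolean);
    } catch {
      // fall through to delimiter-based parsing below
    }
  }
  return trimmed
    .split(/[\n,]/)
    .map((entry) => entry.trim())
    .filter(Boolean);
}

// Both forms yield ["openai/gpt-4o-mini", "anthropic/claude-3-5-sonnet"]:
parseModelList('["openai/gpt-4o-mini", "anthropic/claude-3-5-sonnet"]');
parseModelList("openai/gpt-4o-mini, anthropic/claude-3-5-sonnet");
```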