Conversation

@Zochory (Member) commented Dec 28, 2025

Description

This PR introduces LiteLLM as the default LLM provider, implements a comprehensive tool execution system for coding agents, and delivers significant UX improvements across the TUI.

Type of Change

  • New feature (non-breaking change which adds functionality)
  • Code refactoring
  • Test addition or update

Changes Made

  • LiteLLM Integration: Default to LiteLLM provider with auto-detection from environment variables
  • Tool System: Implement read_file, list_dir, write_file, run_command tools with permission controls
  • LLM Configuration: New src/llm/ modules for provider resolution, model parsing, and defaults
  • Mention Handlers: Async @file mention with file content inlining
  • Settings UX: Enhanced settings menu with select prompts for value selection
  • TUI Improvements: Better suggestion list, input hints, status line updates
  • Testing: Add comprehensive test coverage for LLM config, tools, and mentions
  • Docs: Update README, AGENTS.md, and API integration docs

Testing

Test Configuration

  • OS: macOS 15.0
  • Terminal: iTerm2
  • Bun Version: 1.2.10

Test Cases

  • Tested basic chat functionality
  • Tested command system
  • Tested mention system (@file inlining)
  • Tested keyboard shortcuts
  • Tested with LiteLLM endpoint
  • Tested tool execution (read, list, write, run)
  • Ran type checking (bun run typecheck)
  • Ran all tests (bun run test)

Checklist

  • My code follows the style guidelines of this project
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings or errors
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes
  • Any dependent changes have been merged and published
  • I have checked my code and corrected any misspellings

Additional Notes

  • Tool execution requires user permission for write operations and external directory access
  • LiteLLM model list can be configured via the LITELLM_MODELS environment variable (see the parsing sketch below)
  • Backwards compatible with existing OpenAI/Azure configurations
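
To make the LITELLM_MODELS note concrete, here is a minimal sketch of how such a value could be parsed. The function name parseModelList is hypothetical; the PR's src/llm/models.ts is only described as supporting JSON arrays and comma/newline-separated formats.

```typescript
// Hypothetical sketch: parse LITELLM_MODELS as either a JSON array or a
// comma/newline-separated list. Not the PR's actual implementation.
function parseModelList(raw: string | undefined): string[] {
  if (!raw) return [];
  const trimmed = raw.trim();
  if (trimmed.startsWith("[")) {
    try {
      const parsed = JSON.parse(trimmed);
      if (Array.isArray(parsed)) return parsed.map(String).filter(Boolean);
    } catch {
      // Fall through to delimiter-based parsing on malformed JSON.
    }
  }
  return trimmed
    .split(/[,\n]/)
    .map((entry) => entry.trim())
    .filter(Boolean);
}

// Example (model IDs illustrative): LITELLM_MODELS="openai/gpt-4o-mini,openai/gpt-4o"
const models = parseModelList(process.env.LITELLM_MODELS);
```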

Copilot AI review requested due to automatic review settings December 28, 2025 09:25
coderabbitai bot commented Dec 28, 2025

Caution: Review failed (failed to post review comments).

📝 Walkthrough

Summary by CodeRabbit

  • New Features

    • Added LiteLLM provider support as an alternative to OpenAI.
    • Introduced coding agent tools for file operations and command execution with configurable permissions.
    • Added /provider and /tools commands for managing provider selection and tool settings.
    • New settings sections for providers and coding agent configuration.
  • Documentation

    • Updated documentation with LiteLLM configuration examples and tool usage guidance.
  • Chores

    • Updated dependencies across the project.


Walkthrough

This PR introduces LiteLLM provider support, a Coding Agent tools framework with file and command execution plus permission controls, async file mention formatting, and restructured provider/LLM configuration. It updates CLI commands, the settings UI, and streaming logic, and adds comprehensive test coverage for the new features. Multiple planning documents in .trae/documents/ are removed.

Changes

  • Environment & Configuration (.env.example, storage.ts, bridge/run.sh, bridge/README.md): Add LiteLLM environment variables (LITELLM_BASE_URL, LITELLM_API_KEY, LITELLM_MODEL); introduce a provider field and tools configuration (enabled, autoApprove, permissions) in AppSettings; fall back from LITELLM vars to OPENAI vars in bridge startup.
  • LLM Configuration Modules (src/llm/config.ts, src/llm/providerDefaults.ts, src/llm/input.ts, src/llm/models.ts): New modules for LLM config resolution (provider inference, base URL normalization, auth headers), provider-specific defaults, input prompt formatting, and model list parsing (provider resolution is sketched below).
  • Tools Framework (src/tools/index.ts, src/tools/permissions.ts, src/tools/prompt.ts): Implement the tool execution engine (parseToolCalls, executeToolCall) for read_file, list_dir, write_file, run_command; permission resolution with allow/ask/deny modes and external/doom-loop checks; system prompt builder for tool-aware agents.
  • Command & CLI Updates (src/commands.ts, src/commandHandlers.ts, src/services/commandService.ts, src/cliHelp.ts): Add "provider" and "tools" commands; extend handleSettingCommand with select-based options, custom input, and deduplication; introduce helper builders for model/endpoint/apiKey/bridge options; display provider and tools status in CLI output.
  • UI Components (src/components/InputArea.tsx, src/components/PromptOverlay.tsx, src/components/MessageList.tsx, src/components/SettingsMenu.tsx, src/components/StatusLine.tsx, src/components/SuggestionList.tsx): Add focus control (isFocused prop), select prompt support with index tracking, code block segmentation in messages, multi-section settings navigation, dynamic provider/tools status labels, and match highlighting in the suggestion list.
  • Hooks (src/hooks/useInputMode.ts, src/hooks/useKeyboardShortcuts.ts, src/hooks/useSettings.ts, src/hooks/useStreaming.ts): Propagate keywords/requiresValue metadata; add multi-section settings navigation; expand settings with Provider and Coding Agent sections; introduce an onComplete callback for streaming with a settings parameter.
  • API & Streaming (src/api.ts, src/services/streamingService.ts, src/sse.ts): Remove OPENAI_/AF_ exports; relocate buildResponsesInput to llm/input.ts; update streamResponseFromOpenAI to use resolveLlmConfig and the new auth flow; broaden delta extraction in SSE processing.
  • Core Entry & Utilities (src/index.tsx, src/mentionHandlers.ts, src/suggest.ts, src/types.ts): Switch to async formatMessageWithMentionsAsync with file inlining; implement a tool execution queue with permission prompts; add generateUniqueId and tool-related exports; introduce PromptSelectOption and PermissionMode types; extend UISuggestion with keywords/requiresValue.
  • Documentation (AGENTS.md, PLANS.md, README.md, docs/API-INTEGRATION.md, docs/ARCHITECTURE.md): Add LiteLLM provider documentation, Coding Agent Tools sections, tool permission guidance, and new ExecPlans for the LiteLLM upgrade and bridge removal; document the new src/llm and src/tools modules.
  • Dependencies & Tests (package.json, bridge/requirements*.txt, tests/*): Update @types/bun, @opentui/core/react, react; bump pytest, fastapi, uvicorn, and other packages; add unit tests for LLM config, provider defaults, model parsing, tools execution, permissions, and file mention handling.
  • Deleted Documentation (.trae/documents/*): Remove planning files: Enhance AF Workflows Web UI, Enhance TUI CLI with Workflow Mode, Integrate Handoff Workflows, Progressive Enhancements, Publishable qlaw-cli v0.1.5, QLAW CLI Analysis/Cleanup, QLAW CLI Codebase Analysis, QLAW CLI Experience Enhancements.
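
As an illustration of the provider resolution summarized above, here is a rough sketch of what resolveLlmConfig might look like. resolveLlmConfig is the PR's real function name, but the return shape, the settings fields, and the localhost:4000 default are assumptions for illustration only.

```typescript
// Rough sketch of provider inference, base URL normalization, and auth headers.
// Field names and defaults are assumed, not taken from the PR's actual code.
interface LlmConfig {
  provider: "litellm" | "openai" | "azure";
  baseUrl: string;
  headers: Record<string, string>;
  model: string;
}

interface LlmSettings {
  provider?: string;
  endpoint?: string;
  apiKey?: string;
  model?: string;
}

function resolveLlmConfig(settings: LlmSettings): LlmConfig {
  // Provider inference: an explicit setting wins; otherwise default to LiteLLM.
  const provider = (settings.provider ?? "litellm") as LlmConfig["provider"];
  // Base URL normalization: strip trailing slashes before appending API paths.
  const baseUrl = (settings.endpoint ?? process.env.LITELLM_BASE_URL ?? "http://localhost:4000")
    .replace(/\/+$/, "");
  // Auth headers: prefer LiteLLM credentials, falling back to OpenAI ones.
  const apiKey = settings.apiKey ?? process.env.LITELLM_API_KEY ?? process.env.OPENAI_API_KEY ?? "";
  return {
    provider,
    baseUrl,
    headers: apiKey ? { Authorization: `Bearer ${apiKey}` } : {},
    model: settings.model ?? process.env.LITELLM_MODEL ?? "openai/gpt-4o-mini",
  };
}
```

The OPENAI_API_KEY fallback mirrors the backward compatibility noted in the PR description and the bridge startup fallback from LITELLM vars to OPENAI vars.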

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User as User
    participant CLI as CLI/index.tsx
    participant Parser as Tool Parser
    participant Permission as Permission Handler
    participant Executor as Tool Executor
    participant LLM as LLM Service
    participant FileSystem as File System

    User->>CLI: Submit message (tools enabled)
    CLI->>LLM: formatInput + buildToolSystemPrompt
    LLM->>CLI: streaming response with tool calls
    CLI->>Parser: parseToolCalls(response)
    Parser->>CLI: return ToolCall[]
    
    loop For each tool call
        CLI->>Permission: resolveToolPermission(call, settings)
        alt Permission = deny
            Permission->>CLI: reject tool call
        else Permission = ask
            Permission->>CLI: prompt user for confirmation
            User->>CLI: approve/deny
        else Permission = allow
            Permission->>CLI: auto-approve
        end
        
        alt Approved
            CLI->>Executor: executeToolCall(call, cwd, limits)
            Executor->>FileSystem: read_file/write_file/list_dir/run_command
            FileSystem->>Executor: result with output/error
            Executor->>CLI: return ToolResult
            CLI->>CLI: append tool result to history
        end
    end
    
    CLI->>LLM: continueWithToolResults(history)
    LLM->>CLI: final response
    CLI->>User: display message
```
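
The loop in the diagram can be condensed into code. This is a simplified sketch: parseToolCalls, executeToolCall, and resolveToolPermission are the PR's real function names, but the signatures, import paths, and the askUser callback are assumptions; the real wiring in src/index.tsx uses a React tool queue and prompt overlay.

```typescript
import { parseToolCalls, executeToolCall, type ToolCall, type ToolResult } from "./tools";
import { resolveToolPermission } from "./tools/permissions";
import type { AppSettings } from "./storage";

// Simplified sketch of the diagram's per-tool-call loop.
async function runToolLoop(
  response: string,
  settings: AppSettings,
  cwd: string,
  askUser: (call: ToolCall) => Promise<boolean>
): Promise<ToolResult[]> {
  const results: ToolResult[] = [];
  for (const call of parseToolCalls(response)) {
    const mode = resolveToolPermission(call, settings); // "allow" | "ask" | "deny"
    if (mode === "deny") continue;                      // rejected outright
    if (mode === "ask" && !(await askUser(call))) continue; // user declined
    results.push(await executeToolCall(call, cwd));
  }
  // The caller appends these results to history and calls the LLM again
  // (continueWithToolResults in the diagram).
  return results;
}
```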

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

Possibly related PRs

  • Add LiteLLM defaults and TUI tool UX #17: Directly overlapping code changes—both modify LiteLLM env vars, provider resolution, new src/llm and src/tools modules, commandHandlers/commands/services, streamingService signatures, async file inlining, AppSettings provider/tools structure, and test suites.
  • Implement Microsoft Foundry Agent Support and Fix Bridge Concurrency #16: Both PRs extend src/commandHandlers.ts (handleSettingCommand signatures), wire new commands into dispatch, and modify the CLI command surface and settings flows.
  • Handoff #4: Both modify Agent Framework/bridge paths (bridge/run.sh, bridge/README, src/api.ts, storage.ts) and AF workflow-related modules, indicating connected architectural changes.

Poem

🐰 A tool-executing rabbit hops with glee,
LiteLLM models dance in harmony,
Permissions asked, file contents inlined,
Provider defaults cleverly aligned!
Tests and settings bloom so bright,
This CLI upgrade shines so right! ✨

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage (warning): Docstring coverage is 19.72%, which is below the required threshold of 80.00%. Run @coderabbitai generate docstrings to improve coverage.
✅ Passed checks (2 passed)
  • Title check: The PR title 'feat: LiteLLM integration, tool system, and TUI UX improvements' accurately summarizes the main changes and aligns with the changeset's primary objectives.
  • Description check: The PR description is comprehensive, covering all major changes, testing details, and checklist items from the template. It follows the required structure with Description, Type of Change, Changes Made, Testing, Checklist, and Additional Notes sections.

@gemini-code-assist bot commented

Summary of Changes

Hello @Zochory, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly upgrades the application's capabilities by integrating LiteLLM for more versatile LLM provider management and introducing a powerful, permission-controlled tool execution system for coding agents. These core enhancements are complemented by numerous user experience improvements across the TUI, making the interface more intuitive and informative, and laying the groundwork for more advanced agentic workflows.

Highlights

  • LiteLLM Integration: The application now defaults to LiteLLM as the primary LLM provider, offering flexible model identification (e.g., openai/gpt-4o-mini) and auto-detection from environment variables. Existing OpenAI/Azure configurations remain backward compatible.
  • Comprehensive Tool System: A new tool execution system has been implemented for coding agents, including read_file, list_dir, write_file, and run_command capabilities. This system features robust permission controls (allow, ask, deny) for each tool, with automatic approval for safe operations and prompts for potentially risky ones like write_file and run_command.
  • Enhanced TUI User Experience: Significant improvements have been made to the Terminal User Interface (TUI), including a more interactive settings menu with select prompts for value selection, better suggestion list display with fuzzy matching and action hints, and updated status line information to reflect current provider, model, and tool status.
  • Asynchronous @file Mentions: The @file mention now asynchronously inlines the content of the referenced file directly into the prompt, providing the model with real-time context. This feature includes safeguards for file size and prevents reading from external directories without explicit permission (a sketch follows this list).
  • Modular LLM and Tool Configuration: New dedicated modules (src/llm/ and src/tools/) have been introduced to centralize LLM provider resolution, model parsing, default settings, tool registry, execution logic, and permission management, improving maintainability and extensibility.
  • Dependency Updates: Key dependencies across both JavaScript/TypeScript and Python components have been updated, including @opentui/core, react, fastapi, uvicorn, and various Azure-related packages, ensuring compatibility and leveraging the latest features and fixes.
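
To make the asynchronous @file mention concrete, here is a minimal sketch of the inlining step. formatMessageWithMentionsAsync is the PR's real function name (src/mentionHandlers.ts), but the mention regex, the 64 KB limit, and the fencing format are assumptions for illustration.

```typescript
import { readFile } from "node:fs/promises";

const MAX_INLINE_CHARS = 64 * 1024; // assumed safeguard, not the PR's actual limit

// Illustrative sketch: append each mentioned file's contents after the mention,
// skipping files that are too large or unreadable.
async function formatMessageWithMentionsAsync(message: string): Promise<string> {
  const fence = "```";
  const mentionPattern = /@([\w./-]+)/g;
  let result = message;
  for (const match of message.matchAll(mentionPattern)) {
    const path = match[1];
    if (!path) continue;
    try {
      const content = await readFile(path, "utf8");
      if (content.length > MAX_INLINE_CHARS) continue; // size safeguard
      result = result.replace(match[0], `${match[0]}\n${fence}\n${content}\n${fence}`);
    } catch {
      // Leave the mention as-is if the file cannot be read.
    }
  }
  return result;
}
```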


@gemini-code-assist bot left a comment

Code Review

This is an impressive pull request that introduces significant new functionality and improves the user experience. The integration of LiteLLM as the default provider is a great move towards flexibility. The new tool execution system is comprehensive and well-designed, with important security considerations like permission controls and path restrictions. The TUI has been greatly enhanced with a much-improved settings menu, better suggestion lists, and more informative status updates. The code is well-refactored into new llm and tools modules, and the addition of tests for the new logic is commendable. I have one minor suggestion regarding some unreachable code.

Comment on lines 459 to 460
const msg = `${promptMessage}\nCurrent: ${masked}\nUsage: /${settingKey} <value>`;
return { systemMessage: { id: generateUniqueId(), role: "system", content: msg, timestamp: new Date() } };

Severity: medium

This block of code appears to be unreachable. The function handleSettingCommand will always return on line 453 or 457 when setPrompt is available, which seems to be the case for all its call sites. This leftover code from a previous implementation can be safely removed to improve clarity.

Copilot AI (Contributor) left a comment

Pull request overview

This PR introduces LiteLLM as the default LLM provider, implements a comprehensive tool execution system for coding agents, and delivers significant UX improvements across the terminal UI. The changes enable qlaw-cli to work with multiple model providers through a unified interface, execute file system and shell operations with permission controls, and provide a more intuitive user experience.

Key Changes:

  • LiteLLM Integration: New provider resolution system with auto-detection, LiteLLM-style model identifiers (e.g., openai/gpt-4o-mini), and environment variable configuration
  • Tool System: Complete implementation of read_file, list_dir, write_file, and run_command tools with permission policies (allow/ask/deny) and safety checks for external directories and doom loops (a permission-resolution sketch follows this list)
  • Enhanced UX: Improved settings menu with select prompts, better suggestion list with match highlighting, async @file mention handler with content inlining, and updated status line showing provider/tools status
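
A hedged sketch of the permission resolution described above: resolveToolPermission and isExternalPath are the PR's function names, and the settings shape (enabled, autoApprove, permissions) matches the AppSettings changes, but the default policy table and the cwd argument are assumptions for illustration.

```typescript
import { isAbsolute, relative, resolve } from "node:path";

type PermissionMode = "allow" | "ask" | "deny";

interface ToolsSettings {
  enabled: boolean;
  autoApprove: boolean;
  permissions: Record<string, PermissionMode>;
}

// Mirrors the review excerpt below: true when the path escapes the working directory.
function isExternalPath(path: string, cwd: string): boolean {
  const rel = relative(resolve(cwd), resolve(cwd, path));
  if (!rel) return false;
  return rel.startsWith("..") || isAbsolute(rel);
}

// Sketch: per-tool mode from settings, safe defaults for reads, escalation to
// "ask" for paths outside the working directory.
function resolveToolPermission(
  call: { name: string; args: { path?: string } },
  settings: { tools: ToolsSettings },
  cwd: string
): PermissionMode {
  if (!settings.tools.enabled) return "deny";
  const safeByDefault = call.name === "read_file" || call.name === "list_dir";
  const base = settings.tools.permissions[call.name] ?? (safeByDefault ? "allow" : "ask");
  if (call.args.path && isExternalPath(call.args.path, cwd) && base === "allow") return "ask";
  if (base === "ask" && settings.tools.autoApprove) return "allow";
  return base;
}
```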

Reviewed changes

Copilot reviewed 55 out of 56 changed files in this pull request and generated 21 comments.

Summary per file:

  • src/llm/config.ts: New module for LLM provider resolution, authentication headers, and endpoint URL handling
  • src/llm/models.ts: Model list parsing supporting JSON arrays and comma/newline-separated formats
  • src/llm/providerDefaults.ts: Provider-specific default value application when switching providers
  • src/llm/input.ts: Extracted prompt building logic for the Responses API
  • src/tools/index.ts: Tool execution engine with file/directory/command operations and safety limits
  • src/tools/permissions.ts: Permission resolution system with external directory and doom loop detection
  • src/tools/prompt.ts: System prompt generation for tool-aware conversations
  • src/mentionHandlers.ts: Async @file mention handler that reads and inlines file contents
  • src/index.tsx: Tool queue management, execution loop, and integration with streaming responses
  • src/services/streamingService.ts: Updated to use the new LLM config resolver and pass settings through
  • src/hooks/useStreaming.ts: Added onComplete callback for tool detection after streaming
  • src/hooks/useSettings.ts: Enhanced with provider/tools settings and select-style prompts
  • src/hooks/useKeyboardShortcuts.ts: Added left/right navigation for settings sections
  • src/components/SuggestionList.tsx: Match highlighting, action hints, and expanded detail panel
  • src/components/StatusLine.tsx: Shows provider, model, and tools status
  • src/components/SettingsMenu.tsx: Tabbed sections with detail panel and improved navigation
  • src/components/PromptOverlay.tsx: Added select prompt type with option list
  • src/components/MessageList.tsx: Multi-segment rendering for mixed text and code blocks
  • src/components/InputArea.tsx: Added isFocused prop for better focus management
  • src/commands.ts: Added /provider and /tools commands with metadata
  • src/commandHandlers.ts: Select-based value pickers, provider switching, and tool permission management
  • tests/*.test.ts: Comprehensive test coverage for tools, permissions, mentions, models, and LLM config
  • docs/API-INTEGRATION.md: LiteLLM proxy documentation and configuration examples
  • docs/ARCHITECTURE.md: Architecture overview including the new LLM and tools modules
  • README.md: Updated with LiteLLM examples, provider commands, and tool documentation
  • AGENTS.md: Tool system documentation and usage examples
  • package.json: Dependency updates (@opentui/core 0.1.63, react 19.2.3, @types/bun 1.3.5)
  • bridge/requirements.txt: Updated Python dependencies to latest versions
  • bridge/run.sh: LiteLLM environment variable fallback support
  • .env.example: Added LiteLLM configuration variables

Comment on lines 458 to 460

const msg = `${promptMessage}\nCurrent: ${masked}\nUsage: /${settingKey} <value>`;
return { systemMessage: { id: generateUniqueId(), role: "system", content: msg, timestamp: new Date() } };
Copilot AI commented Dec 28, 2025:

The handleSettingCommand function has unreachable code. Lines 459-460 define a message and return statement that can never be executed because all code paths before this point either return early or open a prompt. This dead code should be removed to improve code maintainability.

Suggested change (delete these lines):
- const msg = `${promptMessage}\nCurrent: ${masked}\nUsage: /${settingKey} <value>`;
- return { systemMessage: { id: generateUniqueId(), role: "system", content: msg, timestamp: new Date() } };

Comment on lines 14 to 20

export function isExternalPath(path: string, cwd: string): boolean {
  const resolved = resolve(cwd, path);
  const root = resolve(cwd);
  const rel = relative(root, resolved);
  if (!rel) return false;
  return rel.startsWith("..") || isAbsolute(rel);
}
Copilot AI commented Dec 28, 2025:

The isExternalPath function has inconsistent behavior. When relative(root, resolved) returns an empty string (meaning the paths are identical), the function returns false. However, the subsequent check rel.startsWith("..") could be true for legitimate relative paths like "../sibling", but then isAbsolute(rel) is checked which would be false for such paths. The logic seems correct but could be clearer. Consider adding a comment explaining the empty string case, or restructuring to: return !rel || rel.startsWith("..") || isAbsolute(rel);

Comment on lines 72 to 76
const lower = label.toLowerCase();
const q = match.toLowerCase();
const idx = lower.indexOf(q);
if (idx === -1) return [label, "", ""];
return [label.slice(0, idx), label.slice(idx, idx + match.length), label.slice(idx + match.length)];
Copilot AI commented Dec 28, 2025:

The splitMatch function performs case-insensitive matching by converting both strings to lowercase, but then slices the original label using indices from the lowercase version. This works fine for ASCII but could produce incorrect results for Unicode characters where the lowercase version has a different byte length than the original (e.g., Turkish İ/i). Consider using String.prototype.toLowerCase() consistently or using a case-insensitive search that preserves original indices.

Suggested change
- const lower = label.toLowerCase();
- const q = match.toLowerCase();
- const idx = lower.indexOf(q);
- if (idx === -1) return [label, "", ""];
- return [label.slice(0, idx), label.slice(idx, idx + match.length), label.slice(idx + match.length)];
+ const q = match.toLowerCase();
+ const labelLen = label.length;
+ const matchLen = match.length;
+ let idx = -1;
+ for (let i = 0; i <= labelLen - matchLen; i++) {
+   const segment = label.slice(i, i + matchLen);
+   if (segment.toLowerCase() === q) {
+     idx = i;
+     break;
+   }
+ }
+ if (idx === -1) return [label, "", ""];
+ return [
+   label.slice(0, idx),
+   label.slice(idx, idx + matchLen),
+   label.slice(idx + matchLen),
+ ];

Comment on lines 499 to 506

applyValue: (value, ctx) => {
  if (value === AUTO_SELECT_VALUE) {
    ctx.setSettings((prev) => ({ ...prev, provider: undefined }));
    return;
  }
  ctx.setSettings((prev) => applyProviderDefaults(prev, value));
},
}
Copilot AI commented Dec 28, 2025:

The AUTO_SELECT_VALUE constant is defined but the auto-detection logic isn't fully implemented. When the user selects "Auto (detect)", the provider is set to undefined (line 501), but the resolveLlmConfig function's inference logic (line 50-58) will default to "litellm" rather than truly detecting based on endpoint patterns. This creates inconsistent behavior between "Auto" and explicitly selecting "litellm". Either fix the inference logic to properly detect, or update the UI description to clarify that "Auto" defaults to LiteLLM.

Comment on lines 182 to 194

function guessLanguage(path: string): string {
  const ext = extname(path).toLowerCase().replace(".", "");
  if (!ext) return "text";
  if (ext === "ts") return "typescript";
  if (ext === "tsx") return "tsx";
  if (ext === "js") return "javascript";
  if (ext === "jsx") return "jsx";
  if (ext === "py") return "python";
  if (ext === "json") return "json";
  if (ext === "md") return "markdown";
  if (ext === "sh") return "bash";
  return ext;
}
Copilot AI commented Dec 28, 2025:

The guessLanguage function doesn't handle multi-part extensions (e.g., .test.ts, .config.js). It will return "ts" for both .ts and .test.ts, which is correct, but the function doesn't handle cases like .d.ts which might benefit from specific handling. Consider adding support for common multi-part extensions or documenting this limitation.
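
If multi-part extensions matter, one possible variant (not part of this PR) is to check compound suffixes before falling back to the single-extension mapping:

```typescript
import { extname } from "node:path";

// Hypothetical variant: handle compound suffixes like .d.ts before falling back
// to the single-extension mapping from the excerpt above.
function guessLanguageCompound(path: string): string {
  const lower = path.toLowerCase();
  if (lower.endsWith(".d.ts")) return "typescript"; // declaration files are TypeScript
  const ext = extname(lower).replace(".", "");
  const map: Record<string, string> = {
    ts: "typescript", tsx: "tsx", js: "javascript", jsx: "jsx",
    py: "python", json: "json", md: "markdown", sh: "bash",
  };
  return map[ext] ?? (ext || "text");
}
```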

Comment on lines 352 to 461
function handleSettingCommand(
settingKey: "model" | "endpoint" | "apiKey" | "afBridgeBaseUrl" | "afModel" | "foundryEndpoint",
settingKey: "model" | "provider" | "endpoint" | "apiKey" | "afBridgeBaseUrl" | "afModel" | "foundryEndpoint",
promptMessage: string,
confirmationFormatter: (value: string) => string,
args: string | undefined,
context: CommandContext,
shouldMaskValue: boolean = false
options?: {
shouldMaskValue?: boolean;
placeholder?: string;
applyValue?: (value: string, context: CommandContext) => void;
selectOptions?: PromptSelectOption[];
}
): CommandResult {
const { settings, setSettings } = context;
const { settings, setSettings, setMessages, setPrompt } = context;
const val = args?.trim();

const applyValue =
options?.applyValue ||
((value: string, ctx: CommandContext) => {
ctx.setSettings((prev) => ({ ...prev, [settingKey]: value }));
});

if (val) {
setSettings((prev) => ({ ...prev, [settingKey]: val }));
applyValue(val, context);
const systemMsg: Message = {
id: generateUniqueId(),
role: "system",
content: confirmationFormatter(val),
timestamp: new Date(),
};
return { systemMessage: systemMsg };
} else {
const currentValue = settings[settingKey] || "Not set";
const masked = shouldMaskValue && typeof currentValue === "string"
? (currentValue ? "***" + currentValue.slice(-4) : "Not set")
: currentValue;
const msg = `${promptMessage}\nCurrent: ${masked}\nUsage: /${settingKey} <value>`;
return { systemMessage: { id: generateUniqueId(), role: "system", content: msg, timestamp: new Date() } };
}

const storedValue = settings[settingKey];
const displayValue = storedValue || "Not set";
const masked =
options?.shouldMaskValue && typeof displayValue === "string"
? displayValue
? "***" + displayValue.slice(-4)
: "Not set"
: displayValue;

const openInputPrompt = () => {
if (!setPrompt) return;
const defaultValue =
options?.shouldMaskValue || typeof storedValue !== "string"
? ""
: storedValue;
setPrompt({
type: "input",
message: promptMessage,
defaultValue,
placeholder: options?.placeholder || "Type and press Enter",
onConfirm: (value: string) => {
const trimmed = value.trim();
if (trimmed) {
applyValue(trimmed, context);
if (setMessages) {
const systemMsg: Message = {
id: generateUniqueId(),
role: "system",
content: confirmationFormatter(trimmed),
timestamp: new Date(),
};
setMessages((prev) => [...prev, systemMsg]);
}
}
setPrompt(null);
},
onCancel: () => setPrompt(null),
});
};

if (setPrompt && options?.selectOptions && options.selectOptions.length > 0) {
const selectedIndex = selectIndexForValue(options.selectOptions, storedValue);
setPrompt({
type: "select",
message: promptMessage,
options: options.selectOptions,
selectedIndex,
onSelect: (option) => {
if (option.value === CUSTOM_SELECT_VALUE) {
openInputPrompt();
return;
}
const resolvedValue = option.value ?? option.name;
if (resolvedValue) {
applyValue(resolvedValue, context);
if (setMessages) {
const systemMsg: Message = {
id: generateUniqueId(),
role: "system",
content: confirmationFormatter(resolvedValue),
timestamp: new Date(),
};
setMessages((prev) => [...prev, systemMsg]);
}
}
setPrompt(null);
},
onCancel: () => setPrompt(null),
});
return { shouldReturn: true };
}

openInputPrompt();
return { shouldReturn: true };

const msg = `${promptMessage}\nCurrent: ${masked}\nUsage: /${settingKey} <value>`;
return { systemMessage: { id: generateUniqueId(), role: "system", content: msg, timestamp: new Date() } };
}
Copilot AI commented Dec 28, 2025:

The command handler pattern with select options creates deeply nested promise chains and closures. The handleSettingCommand function now has multiple levels of callbacks (openInputPrompt, setPrompt with onSelect, applyValue) which makes the control flow hard to follow and could lead to state management issues. Consider refactoring this into a more linear state machine or extracting the prompt handling into a separate hook that manages the prompt lifecycle.
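
As one possible direction for the suggested refactor, a small hook could own the prompt lifecycle so handlers stay linear. Everything here (names, types) is hypothetical:

```typescript
import { useCallback, useState } from "react";

// Hypothetical hook: centralizes prompt open/close so command handlers can stay
// linear instead of nesting onSelect/onConfirm closures.
type PromptRequest =
  | { type: "input"; message: string; onConfirm: (value: string) => void }
  | { type: "select"; message: string; options: string[]; onSelect: (value: string) => void };

function usePromptFlow() {
  const [prompt, setPrompt] = useState<PromptRequest | null>(null);
  const open = useCallback((request: PromptRequest) => setPrompt(request), []);
  const close = useCallback(() => setPrompt(null), []);
  return { prompt, open, close };
}
```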

Comment on lines 286 to 291

useEffect(() => {
  if (!toolQueue && toolQueueIndex > 0 && !prompt) {
    setToolQueueIndex(0);
    continueWithToolResults();
  }
}, [toolQueue, toolQueueIndex, prompt, continueWithToolResults]);
Copilot AI commented Dec 28, 2025:

The continueWithToolResults callback is included in the useEffect dependency array at line 291, but this callback itself depends on many other values (settings, mode, isProcessing, etc.) and is recreated on every render due to the dependencies in its useCallback. This could cause unnecessary effect re-runs. Consider either: (1) memoizing the callback with stable dependencies, (2) restructuring the logic to avoid this circular dependency, or (3) using a ref to store the callback if it doesn't need to trigger re-renders.
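
Option (3) can be sketched with a small ref helper. The hook name useLatest is hypothetical; toolQueue, toolQueueIndex, and prompt come from the excerpt above:

```typescript
import { useRef } from "react";

// Hypothetical helper: always holds the latest callback without being a
// dependency itself, so the effect re-runs only on queue/prompt changes.
function useLatest<T>(value: T) {
  const ref = useRef(value);
  ref.current = value;
  return ref;
}

// Sketched usage inside the component:
// const continueRef = useLatest(continueWithToolResults);
// useEffect(() => {
//   if (!toolQueue && toolQueueIndex > 0 && !prompt) {
//     setToolQueueIndex(0);
//     continueRef.current();
//   }
// }, [toolQueue, toolQueueIndex, prompt]);
```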


function splitIntoSegments(content: string): Segment[] {
  const segments: Segment[] = [];
  const regex = /```(\w+)?(?:\n|\s)([\s\S]*?)```/g;
Copilot AI commented Dec 28, 2025:

The code block regex pattern /```(\w+)?(?:\n|\s)([\s\S]*?)```/g requires either a newline or space after the language identifier, which means code blocks formatted as ```json{"foo":"bar"}``` (without whitespace) won't be matched. While this might be intentional to enforce proper formatting, it could lead to unexpected behavior where valid markdown code blocks aren't rendered as code. Consider making the whitespace optional: /```(\w+)?\s*([\s\S]*?)```/g

Suggested change
- const regex = /```(\w+)?(?:\n|\s)([\s\S]*?)```/g;
+ const regex = /```(\w+)?\s*([\s\S]*?)```/g;

}
): CommandResult {
- const { settings, setSettings } = context;
+ const { settings, setSettings, setMessages, setPrompt } = context;
Copilot AI commented Dec 28, 2025:

Unused variable setSettings.

Suggested change
- const { settings, setSettings, setMessages, setPrompt } = context;
+ const { settings, setMessages, setPrompt } = context;

Comment on lines 386 to 391

const masked =
  options?.shouldMaskValue && typeof displayValue === "string"
    ? displayValue
      ? "***" + displayValue.slice(-4)
      : "Not set"
    : displayValue;
Copilot AI commented Dec 28, 2025:

The initial value of masked is unused, since it is always overwritten.

Resolved conflicts in src/commandHandlers.ts by accepting incoming
changes from main branch (PR #17).

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
@Zochory Zochory merged commit 515a722 into main Dec 28, 2025
8 checks passed
@Zochory Zochory deleted the Zochory/moscow-v1 branch December 28, 2025 11:42