
Conversation


@Zochory Zochory commented Dec 28, 2025

Description

Adds LiteLLM provider defaults and settings logic, introduces a permissioned tool loop with @file inlining, and improves command menu interactivity. Updates bridge runbook/docs and bumps JS/Python dependencies.

Type of Change

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Documentation update
  • Code refactoring
  • Performance improvement
  • Test addition or update
  • Dependency update

Related Issues

Fixes #
Related to #

Changes Made

  • Add LiteLLM/provider default handling and OpenAI-compatible config resolution
  • Add tool loop with permissions + @file inlining, and enhance command menu UX
  • Update docs/AGENTS/runbooks and bump dependencies

Screenshots (if applicable)

Testing

Test Configuration

  • OS: macOS 26.3
  • Terminal: Conductor CLI
  • Bun Version: 1.2.10

Test Cases

  • Tested basic chat functionality
  • Tested command system
  • Tested mention system
  • Tested keyboard shortcuts
  • Tested with real API endpoint
  • Tested error handling
  • Ran type checking (bun run typecheck)
  • Tested in multiple terminal emulators
  • Other: _______________

Checklist

  • My code follows the style guidelines of this project
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings or errors
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes
  • Any dependent changes have been merged and published
  • I have checked my code and corrected any misspellings
  • I have updated the CHANGELOG.md file (if applicable)

Breaking Changes

Additional Notes

Reviewer Checklist

  • Code quality is acceptable
  • Tests are adequate
  • Documentation is updated
  • No security concerns
  • Performance impact is acceptable
  • Breaking changes are justified and documented

Copilot AI review requested due to automatic review settings December 28, 2025 03:00

coderabbitai bot commented Dec 28, 2025

Caution

Review failed

The pull request is closed.

Note

Other AI code review bot(s) detected

CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

📝 Walkthrough

Summary by CodeRabbit

  • New Features

    • LiteLLM provider support and provider defaulting/switching
    • Coding Agent Tools: opt-in tool execution (read/list/write/run) with per-tool permissions, auto-approve, and tool execution flow
    • New /provider and /tools commands; select-style prompts and richer prompt UI (selects, tool prompts)
    • Async @file mentions that inline file contents with size/permission handling
  • Documentation

    • Expanded docs and plans for LiteLLM, provider behavior, and Coding Agent tooling
  • Tests

    • Added tests for model parsing, LLM config, mentions, provider defaults, tools, and permissions


Walkthrough

Adds LiteLLM provider support and centralized LLM configuration; implements a permissioned coding-agent tool framework (parse/execute/permissions/prompt); surfaces provider and tools controls in CLI/UI and command handlers; wires tool execution into submit/stream flows; updates docs, defaults, deps, and tests.

Changes

  • Env & Bridge runtime (`.env.example`, `bridge/run.sh`): Add LITELLM_* env vars and LLM_PROVIDER; run.sh maps LITELLM_* to OPENAI_* when present and updates the warning text.
  • Docs & Guides (`AGENTS.md`, `PLANS.md`, `README.md`, `bridge/README.md`, `docs/...`): Document the LiteLLM proxy, provider handling, the new /provider and /tools commands, Coding Agent tools, ExecPlans, and usage examples.
  • Storage & Types (`src/storage.ts`, `src/types.ts`): Add provider and structured tools to AppSettings; new types LlmProvider and PermissionMode; UISuggestion adds keywords/requiresValue; Prompt adds a select variant.
  • LLM config & input helpers (`src/llm/config.ts`, `src/llm/providerDefaults.ts`, `src/llm/models.ts`, `src/llm/input.ts`): New resolveLlmConfig, getAuthHeader, resolveResponsesUrl, applyProviderDefaults, parseModelList, and buildResponsesInput(history) to normalize provider/base/model (incl. LiteLLM heuristics).
  • API surface & services (`src/api.ts`, `src/services/streamingService.ts`, `src/services/commandService.ts`, `src/sse.ts`): Remove legacy OPENAI_* exports from src/api.ts; re-export buildResponsesInput; the streaming service now resolves config via resolveLlmConfig(settings) and accepts settings; SSE delta extraction is broadened; add command routing for provider and tools.
  • Tool framework (`src/tools/index.ts`, `src/tools/permissions.ts`, `src/tools/prompt.ts`): New parser parseToolCalls and executor executeToolCall (read_file/list_dir/write_file/run_command) with bounds/truncation; resolveToolPermission enforces external-directory/doom-loop/auto-approve logic; buildToolSystemPrompt generates a system prompt when tools are enabled.
  • App integration & orchestration (`src/index.tsx`, `src/mentionHandlers.ts`): Introduce tool queue/state, system-prompt injection, parse-on-complete, per-call permission prompts and execution, appending tool results into history; add async mention expansion via formatMessageWithMentionsAsync.
  • Commands & handlers (`src/commands.ts`, `src/commandHandlers.ts`, `src/services/commandService.ts`): Add provider and tools commands, requiresValue flags, interactive provider selection with applyProviderDefaults, and new handlers handleProviderCommand / handleToolsCommand.
  • UI & Hooks (`src/cliHelp.ts`, `src/components/*`, `src/hooks/*`): Surface Provider/Tools in status/help; MessageList segments messages; SuggestionList improves its highlight/detail panel; PromptOverlay adds select support; InputArea gains isFocused; the settings UI becomes section-aware; hooks accept new fields and a streaming onComplete.
  • Tests (`tests/*`): New/updated tests for LLM config, mention expansion, provider defaults, model parsing, tool parsing/execution, and permissions; the streaming test now passes explicit settings.
  • Dependencies & bridge (`package.json`, `bridge/requirements.txt`, `bridge/requirements-dev.txt`): Bump front-end and bridge dependencies (React/@OpenTui, FastAPI/uvicorn, pydantic, pytest); remove the recursive -r in dev requirements.
  • Removed docs (`.trae/documents/*`): Delete multiple long-form planning/design documents under .trae/documents/.
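The LLM config cohort above centers on a fallback chain. A minimal sketch of the idea, with hypothetical field names (the real resolveLlmConfig also derives auth headers and Responses URLs, which are elided here):

```typescript
type LlmProvider = "openai" | "azure" | "litellm" | "custom";

interface LlmSettings {
  provider?: LlmProvider;
  endpoint?: string;
  apiKey?: string;
  model?: string;
}

interface LlmEnv {
  openaiBaseUrl?: string;
  openaiApiKey?: string;
  openaiModel?: string;
  litellmBaseUrl?: string;
  litellmApiKey?: string;
  litellmModel?: string;
}

// Explicit settings win; OPENAI_* env vars beat LITELLM_* fallbacks
// (assumed ordering, mirroring the walkthrough).
function resolveLlmConfigSketch(settings: LlmSettings, env: LlmEnv) {
  return {
    baseUrl: settings.endpoint ?? env.openaiBaseUrl ?? env.litellmBaseUrl,
    apiKey: settings.apiKey ?? env.openaiApiKey ?? env.litellmApiKey,
    model: settings.model ?? env.openaiModel ?? env.litellmModel,
  };
}
```

The `??` chain makes the precedence auditable at a glance, which is the point of centralizing this logic in one resolver.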

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant App as Client App
    participant Responses as Responses API
    participant Parser as Tool Parser
    participant Perms as Permission Resolver
    participant Executor as Tool Executor

    User->>App: Submit message
    App->>Responses: Stream request (resolveLlmConfig(settings) + optional tool-system prompt)
    Responses-->>App: Assistant response

    App->>Parser: parseToolCalls(response)
    Parser-->>App: [ToolCall...]

    loop per ToolCall
        App->>Perms: resolveToolPermission(call, settings, cwd, isDoomLoop)
        alt deny
            Perms-->>App: deny (reason)
            App->>User: Notify denial
        else ask
            App->>User: Prompt for approval
            User-->>App: Approve / Deny
            alt approved
                App->>Executor: executeToolCall(call, options)
            else denied
                App-->>App: Skip execution
            end
        else allow
            App->>Executor: executeToolCall(call, options)
        end
        Executor-->>App: ToolResult
        App->>App: Append formatted tool result to history
        App->>Responses: Continue/reprompt with tool results
    end
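The per-call flow in the diagram can be condensed into a hedged sketch. The callback names mirror the walkthrough, but their shapes here are stand-ins; the real loop is asynchronous and re-prompts the model with the tool results:

```typescript
type PermissionMode = "allow" | "ask" | "deny";

interface ToolCall {
  tool: string;
  args: Record<string, unknown>;
}

interface ToolResult {
  ok: boolean;
  output: string;
}

// Stand-in callbacks; the real resolvers live under src/tools/.
function runToolLoop(
  calls: ToolCall[],
  resolvePermission: (call: ToolCall) => PermissionMode,
  askUser: (call: ToolCall) => boolean,
  execute: (call: ToolCall) => ToolResult,
  appendToHistory: (line: string) => void,
): void {
  for (const call of calls) {
    const mode = resolvePermission(call);
    if (mode === "deny") {
      appendToHistory(`Skipped tool ${call.tool} (denied by permission)`);
      continue;
    }
    if (mode === "ask" && !askUser(call)) {
      appendToHistory(`Skipped tool ${call.tool} (user declined)`);
      continue;
    }
    const result = execute(call);
    appendToHistory(`Tool ${call.tool} (${result.ok ? "ok" : "error"}): ${result.output}`);
  }
}
```

Keeping deny/ask/allow as a single enum makes the branch order explicit: denial short-circuits before any prompt, and only surviving calls reach the executor.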

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~65 minutes

Possibly related PRs

  • Add LiteLLM defaults and TUI tool UX #17 — Large overlap: LiteLLM provider, provider-defaults, tools parsing/execution, and related src/llm and src/tools files.
  • Dev #9 — Touches UI components and hooks (MessageList, PromptOverlay, SuggestionList) that overlap with visual and interaction changes here.
  • Handoff #4 — Overlaps bridge/runtime and AF-related docs/bridge changes (bridge/run.sh, bridge/README.md).

Poem

🐰 I nibble through code, byte by byte,

LiteLLM wakes and tools take flight,
I hop to check each permission gate,
I run, I read, then I update,
Tests pass — carrots feel just right.

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)

  • Description check ⚠️ Warning: The description covers the main changes and type of change but is missing several template sections: no detailed Description paragraph, no Screenshots, an incomplete Testing section with unchecked items, an incomplete Checklist with several unchecked critical items (code comments, CHANGELOG), and no Related Issues specified. Resolution: expand the Description section with more detail, add testing evidence or justification for unchecked items, complete the Checklist (especially CHANGELOG.md and comments), and add Related Issues if applicable.
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 19.72%, below the required 80.00% threshold. Resolution: run @coderabbitai generate docstrings to improve coverage.

✅ Passed checks (1 passed)

  • Title check ✅ Passed: The title clearly summarizes the main changes, adding LiteLLM defaults and TUI tool UX, which aligns with the primary objectives of the PR.

📜 Recent review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 406b6a3 and c19d8fa.

⛔ Files ignored due to path filters (1)
  • bun.lock is excluded by !**/*.lock
📒 Files selected for processing (55)
  • .env.example
  • .trae/documents/Enhance AF Workflows Web UI_ Dropdown, Themes, Agents, Messaging, Scrolling, Tests, Docs.md
  • .trae/documents/Enhance TUI CLI with Workflow Mode and AF Backend.md
  • .trae/documents/Integrate Handoff Workflows into QLAW TUI.md
  • .trae/documents/Progressive Enhancements for Workflows Web UI.md
  • .trae/documents/Publishable qlaw-cli v0.1.5 (Bun-based CLI).md
  • .trae/documents/QLAW CLI Analysis, Cleanup, Docs, Version 0.1.5, QA.md
  • .trae/documents/QLAW CLI Codebase Analysis + Proposed Improvements.md
  • .trae/documents/QLAW CLI Experience Enhancements.md
  • AGENTS.md
  • PLANS.md
  • README.md
  • bridge/README.md
  • bridge/requirements-dev.txt
  • bridge/requirements.txt
  • bridge/run.sh
  • docs/API-INTEGRATION.md
  • docs/ARCHITECTURE.md
  • package.json
  • src/api.ts
  • src/cliHelp.ts
  • src/commandHandlers.ts
  • src/commands.ts
  • src/components/InputArea.tsx
  • src/components/MessageList.tsx
  • src/components/PromptOverlay.tsx
  • src/components/SettingsMenu.tsx
  • src/components/StatusLine.tsx
  • src/components/SuggestionList.tsx
  • src/hooks/useInputMode.ts
  • src/hooks/useKeyboardShortcuts.ts
  • src/hooks/useSettings.ts
  • src/hooks/useStreaming.ts
  • src/index.tsx
  • src/llm/config.ts
  • src/llm/input.ts
  • src/llm/models.ts
  • src/llm/providerDefaults.ts
  • src/mentionHandlers.ts
  • src/services/commandService.ts
  • src/services/streamingService.ts
  • src/sse.ts
  • src/storage.ts
  • src/suggest.ts
  • src/tools/index.ts
  • src/tools/permissions.ts
  • src/tools/prompt.ts
  • src/types.ts
  • tests/llmConfig.test.ts
  • tests/mentionHandlers.test.ts
  • tests/models.test.ts
  • tests/providerDefaults.test.ts
  • tests/streamingService.test.ts
  • tests/tools.test.ts
  • tests/toolsPermissions.test.ts

Comment @coderabbitai help to get the list of available commands and usage tips.

@gemini-code-assist

Summary of Changes

Hello @Zochory, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly modernizes the CLI by integrating LiteLLM for flexible model provider support and introducing powerful coding agent capabilities. It allows the AI to interact with the local file system and execute commands in a controlled, permissioned environment, greatly expanding its utility for development tasks. Concurrently, the user experience for interacting with commands and mentions has been refined, and underlying dependencies have been brought up to date.

Highlights

  • LiteLLM Integration: Added comprehensive support for LiteLLM, allowing the CLI to use LiteLLM-compatible proxies and model identifiers. This includes new environment variables, configuration settings, and automatic provider detection.
  • Coding Agent Tools: Introduced a permissioned tool execution loop, enabling the AI to request and execute file system (read, list, write) and shell commands. Tools can be enabled/disabled, auto-approved for safe operations, and permissions can be fine-tuned via new commands and settings.
  • Enhanced UX for Mentions and Commands: The @file mention now inlines file contents directly into the prompt (with truncation for safety). The command menu and suggestion list have been improved with better interactivity, query matching, and detailed information for selected commands/mentions.
  • Dependency Updates: Key JavaScript (Bun, OpenTUI, React) and Python (FastAPI, Uvicorn, Azure SDKs) dependencies have been updated to their latest versions, ensuring stability and access to new features.
  • Documentation Updates: The AGENTS.md, README.md, and docs/API-INTEGRATION.md files have been updated to reflect the new LiteLLM configuration options, coding agent tools, and enhanced mention functionality.
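The @file inlining described above (contents substituted into the prompt, with truncation for safety) can be sketched without touching the filesystem by passing a resolver callback. Names and the size cap here are assumptions, not the real formatMessageWithMentionsAsync API:

```typescript
const MAX_INLINE_CHARS = 16 * 1024; // assumed cap; the real limit may differ

// resolver returns file contents, or null if the file is unreadable
// (actual fs access and permission checks are elided).
function inlineFileMention(
  message: string,
  resolver: (path: string) => string | null,
): string {
  return message.replace(/@([\w./-]+)/g, (whole: string, path: string) => {
    const contents = resolver(path);
    if (contents === null) return whole; // leave the mention as-is
    const body =
      contents.length > MAX_INLINE_CHARS
        ? contents.slice(0, MAX_INLINE_CHARS) + "\n…[truncated]"
        : contents;
    return `\n--- ${path} ---\n${body}\n---\n`;
  });
}
```

Returning the original mention on failure keeps the message intact when a path is bad, rather than dropping the user's text.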
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

  • Code Review (/gemini review): Performs a code review for the current pull request in its current state.
  • Pull Request Summary (/gemini summary): Provides a summary of the current pull request in its current state.
  • Comment (@gemini-code-assist): Responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help (/gemini help): Displays a list of available commands.

Customization

To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces significant new functionality, including support for LiteLLM providers and a comprehensive, permissioned tool execution loop for the coding agent. The changes are well-structured, with new logic correctly organized into llm and tools directories. The implementation of the tool loop, with its security considerations like permission levels and doom-loop detection, is particularly well-executed. The documentation has also been thoroughly updated to reflect these new features. My review includes a few suggestions to simplify some of the new logic and improve maintainability, particularly around environment variable handling and the structure of a large new command handler.

Comment on lines +29 to 40
if [[ -z "${OPENAI_BASE_URL:-}" || -z "${OPENAI_API_KEY:-}" || -z "${OPENAI_MODEL:-}" ]]; then
  if [[ -n "${LITELLM_BASE_URL:-}" || -n "${LITELLM_API_KEY:-}" || -n "${LITELLM_MODEL:-}" ]]; then
    export OPENAI_BASE_URL="${OPENAI_BASE_URL:-${LITELLM_BASE_URL:-}}"
    export OPENAI_API_KEY="${OPENAI_API_KEY:-${LITELLM_API_KEY:-}}"
    export OPENAI_MODEL="${OPENAI_MODEL:-${LITELLM_MODEL:-}}"
  fi
fi

if [[ -z "${OPENAI_BASE_URL:-}" || -z "${OPENAI_API_KEY:-}" || -z "${OPENAI_MODEL:-}" ]]; then
  echo "[warn] OPENAI_BASE_URL / OPENAI_API_KEY / OPENAI_MODEL not set."
  echo "       The bridge uses OpenAIChatClient by default. Set these to a real backend."
  echo "       The bridge uses OpenAIChatClient by default. Set these or LITELLM_* vars."
fi


medium

The logic for setting fallback environment variables and issuing a warning can be simplified to avoid redundant checks. The current implementation checks for missing OPENAI_* variables twice. You can combine these into a single, clearer block. The inner check for LITELLM_* variables is also redundant, as the parameter expansion ${VAR:-${FALLBACK:-}} safely handles cases where the fallback is empty.

Suggested change
if [[ -z "${OPENAI_BASE_URL:-}" || -z "${OPENAI_API_KEY:-}" || -z "${OPENAI_MODEL:-}" ]]; then
  if [[ -n "${LITELLM_BASE_URL:-}" || -n "${LITELLM_API_KEY:-}" || -n "${LITELLM_MODEL:-}" ]]; then
    export OPENAI_BASE_URL="${OPENAI_BASE_URL:-${LITELLM_BASE_URL:-}}"
    export OPENAI_API_KEY="${OPENAI_API_KEY:-${LITELLM_API_KEY:-}}"
    export OPENAI_MODEL="${OPENAI_MODEL:-${LITELLM_MODEL:-}}"
  fi
fi
if [[ -z "${OPENAI_BASE_URL:-}" || -z "${OPENAI_API_KEY:-}" || -z "${OPENAI_MODEL:-}" ]]; then
  echo "[warn] OPENAI_BASE_URL / OPENAI_API_KEY / OPENAI_MODEL not set."
  echo "       The bridge uses OpenAIChatClient by default. Set these to a real backend."
  echo "       The bridge uses OpenAIChatClient by default. Set these or LITELLM_* vars."
fi
# If core OpenAI vars are not fully set, try to populate them from LiteLLM vars.
if [[ -z "${OPENAI_BASE_URL:-}" || -z "${OPENAI_API_KEY:-}" || -z "${OPENAI_MODEL:-}" ]]; then
  export OPENAI_BASE_URL="${OPENAI_BASE_URL:-${LITELLM_BASE_URL:-}}"
  export OPENAI_API_KEY="${OPENAI_API_KEY:-${LITELLM_API_KEY:-}}"
  export OPENAI_MODEL="${OPENAI_MODEL:-${LITELLM_MODEL:-}}"
  # After attempting the fallback, if vars are still missing, issue a warning.
  if [[ -z "${OPENAI_BASE_URL:-}" || -z "${OPENAI_API_KEY:-}" || -z "${OPENAI_MODEL:-}" ]]; then
    echo "[warn] OPENAI_BASE_URL / OPENAI_API_KEY / OPENAI_MODEL not set."
    echo "       The bridge uses OpenAIChatClient by default. Set these or LITELLM_* vars."
  fi
fi
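The `${VAR:-${FALLBACK:-}}` expansion the suggestion relies on can be checked in isolation; it yields VAR if set and non-empty, otherwise FALLBACK, otherwise the empty string, without ever tripping `set -u`:

```shell
#!/usr/bin/env bash
# Fallback not needed: primary var unset, so the inner expansion is used.
unset OPENAI_BASE_URL
LITELLM_BASE_URL="http://localhost:4000"
resolved="${OPENAI_BASE_URL:-${LITELLM_BASE_URL:-}}"
echo "$resolved"   # http://localhost:4000

# Primary var set: fallback is ignored.
OPENAI_BASE_URL="https://api.openai.com/v1"
resolved="${OPENAI_BASE_URL:-${LITELLM_BASE_URL:-}}"
echo "$resolved"   # https://api.openai.com/v1
```

Because the nested `:-` defaults to empty when both vars are unset, the standalone `[[ -n "${LITELLM_* ...}" ]]` guard adds nothing, which is the redundancy the review points out.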

Comment on lines 321 to 481
export function handleToolsCommand(
args: string | undefined,
context: CommandContext
): CommandResult {
const { settings, setSettings } = context;
const trimmed = (args || "").trim().toLowerCase();

if (!trimmed) {
const status = settings.tools?.enabled ? "Enabled" : "Disabled";
const auto = settings.tools?.autoApprove ? "Enabled" : "Disabled";
const perms = settings.tools?.permissions || {};
const permSummary = [
`read_file=${perms.read_file || "allow"}`,
`list_dir=${perms.list_dir || "allow"}`,
`write_file=${perms.write_file || "ask"}`,
`run_command=${perms.run_command || "ask"}`,
`external_directory=${perms.external_directory || "ask"}`,
`doom_loop=${perms.doom_loop || "ask"}`,
].join(" ");
return {
systemMessage: {
id: generateUniqueId(),
role: "system",
content: `Tools: ${status}\nAuto-approve: ${auto}\nPermissions: ${permSummary}\nUsage: /tools on|off|auto [on|off]\n /tools perm <tool> <allow|ask|deny>`,
timestamp: new Date(),
},
};
}

const parts = trimmed.split(/\s+/);
const sub = parts[0] || "";

if (sub === "perm" || sub === "permission") {
const rawTool = parts[1];
const rawMode = parts[2];
if (!rawTool || !rawMode) {
return {
systemMessage: {
id: generateUniqueId(),
role: "system",
content: "Usage: /tools perm <tool> <allow|ask|deny>",
timestamp: new Date(),
},
};
}
const toolMap: Record<string, string> = {
read: "read_file",
read_file: "read_file",
list: "list_dir",
list_dir: "list_dir",
write: "write_file",
write_file: "write_file",
run: "run_command",
run_command: "run_command",
external: "external_directory",
external_directory: "external_directory",
doom: "doom_loop",
doom_loop: "doom_loop",
};
const toolKey = toolMap[rawTool];
if (!toolKey) {
return {
systemMessage: {
id: generateUniqueId(),
role: "system",
content: `Unknown tool: ${rawTool}`,
timestamp: new Date(),
},
};
}
if (!["allow", "ask", "deny"].includes(rawMode)) {
return {
systemMessage: {
id: generateUniqueId(),
role: "system",
content: "Mode must be allow, ask, or deny.",
timestamp: new Date(),
},
};
}
setSettings((prev) => ({
...prev,
tools: {
...(prev.tools || {}),
permissions: {
...(prev.tools?.permissions || {}),
[toolKey]: rawMode,
},
},
}));
return {
systemMessage: {
id: generateUniqueId(),
role: "system",
content: `Permission set: ${toolKey}=${rawMode}`,
timestamp: new Date(),
},
};
}

if (sub === "on" || sub === "enable" || sub === "enabled") {
setSettings((prev) => ({
...prev,
tools: { ...(prev.tools || {}), enabled: true },
}));
return {
systemMessage: {
id: generateUniqueId(),
role: "system",
content: "Tools enabled.",
timestamp: new Date(),
},
};
}

if (sub === "off" || sub === "disable" || sub === "disabled") {
setSettings((prev) => ({
...prev,
tools: { ...(prev.tools || {}), enabled: false },
}));
return {
systemMessage: {
id: generateUniqueId(),
role: "system",
content: "Tools disabled.",
timestamp: new Date(),
},
};
}

if (sub === "auto") {
const desired = parts[1] || "toggle";
const nextValue =
desired === "on"
? true
: desired === "off"
? false
: !(settings.tools?.autoApprove ?? false);
setSettings((prev) => ({
...prev,
tools: { ...(prev.tools || {}), autoApprove: nextValue },
}));
return {
systemMessage: {
id: generateUniqueId(),
role: "system",
content: `Auto-approve ${nextValue ? "enabled" : "disabled"}.`,
timestamp: new Date(),
},
};
}

return {
systemMessage: {
id: generateUniqueId(),
role: "system",
content: "Usage: /tools on|off|auto [on|off]",
timestamp: new Date(),
},
};
}


medium

The handleToolsCommand function is quite long and handles multiple subcommands (on, off, auto, perm). To improve readability and maintainability, consider refactoring this into smaller, more focused functions for each subcommand (e.g., handleToolsPermissionCommand, handleToolsToggleCommand).

Additionally, the toolMap (lines 366-379) is defined inside the function. Since it's a constant mapping, it would be better to define it as a const at the module level. This avoids re-creating the object on every function call.
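A minimal sketch of the suggested refactor, with the map hoisted to module level and the "perm" subcommand split into a focused helper (the surrounding CommandResult plumbing is elided, and the helper's return type is simplified to a status string for brevity):

```typescript
// Hoisted once at module level instead of being rebuilt on every call.
const TOOL_MAP: Record<string, string> = {
  read: "read_file",
  read_file: "read_file",
  list: "list_dir",
  list_dir: "list_dir",
  write: "write_file",
  write_file: "write_file",
  run: "run_command",
  run_command: "run_command",
  external: "external_directory",
  external_directory: "external_directory",
  doom: "doom_loop",
  doom_loop: "doom_loop",
};

const PERMISSION_MODES = ["allow", "ask", "deny"] as const;
type PermissionMode = (typeof PERMISSION_MODES)[number];

// Focused handler for "/tools perm <tool> <mode>"; validation only,
// the settings update itself would live in the caller.
function setToolPermission(rawTool: string, rawMode: string): string {
  const toolKey = TOOL_MAP[rawTool];
  if (!toolKey) return `Unknown tool: ${rawTool}`;
  if (!PERMISSION_MODES.includes(rawMode as PermissionMode)) {
    return "Mode must be allow, ask, or deny.";
  }
  return `Permission set: ${toolKey}=${rawMode}`;
}
```

With the subcommands split this way, handleToolsCommand reduces to a small dispatcher over `on`/`off`/`auto`/`perm`.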

Comment on lines 58 to 62
} else {
baseUrl = settings?.endpoint || env.openaiBaseUrl || env.litellmBaseUrl;
apiKey = settings?.apiKey || env.openaiApiKey || env.litellmApiKey;
model = settings?.model || env.openaiModel || env.litellmModel;
}


medium

The fallback logic for the custom provider could lead to unexpected behavior. When provider is 'custom', the code falls back to OPENAI_* and LITELLM_* environment variables if settings are not provided. A user selecting a 'custom' provider likely intends to provide all connection details manually via settings. Mixing a custom endpoint from settings with an API key or model from a different provider's environment variables could cause connection failures or unexpected costs.

Consider making the custom provider rely only on values from settings and not fall back to environment variables to make its behavior more predictable.

Comment on lines 16 to 42
function splitIntoSegments(content: string): Segment[] {
const segments: Segment[] = [];
const regex = /```(\\w+)?\\n([\\s\\S]*?)```/g;
let lastIndex = 0;
let match: RegExpExecArray | null;

while ((match = regex.exec(content)) !== null) {
if (match.index > lastIndex) {
const text = content.slice(lastIndex, match.index);
if (text.trim()) segments.push({ type: "text", content: text });
}
const lang = match[1] ? match[1] : "plaintext";
segments.push({ type: "code", content: match[2] || "", lang });
lastIndex = match.index + match[0].length;
}

if (lastIndex < content.length) {
const text = content.slice(lastIndex);
if (text.trim()) segments.push({ type: "text", content: text });
}

if (segments.length === 0) {
segments.push({ type: "text", content });
}

return segments;
}


medium

The splitIntoSegments function is a great addition for handling multiple code blocks. However, the regular expression /```(\\w+)?\\n([\\s\\S]*?)```/g might not correctly handle code blocks that don't have a newline immediately after the language identifier (e.g., ```python). While less common, it's a possible edge case. A more robust regex could be /```(\\w+)?(?:\\n|\\s)([\\s\\S]*?)```/g to allow for optional whitespace.


Copilot AI left a comment


Pull request overview

This PR modernizes qlaw-cli with LiteLLM provider support, introduces a permissioned coding agent tool system with file inlining, and refreshes dependencies. It adds significant new functionality including tool execution with configurable permissions, enhanced command menu UX with highlighted matching, and multi-provider configuration logic.

Key changes include:

  • LiteLLM provider defaults and OpenAI-compatible config resolution with auto-detection
  • Tool loop implementation with permission system (allow/ask/deny) for file operations and shell commands
  • Enhanced @file mentions that inline actual file contents instead of just referencing them

Reviewed changes

Copilot reviewed 35 out of 36 changed files in this pull request and generated 12 comments.

Show a summary per file
  • src/types.ts: Adds LlmProvider and PermissionMode types; extends AppSettings with provider and tools configuration
  • src/tools/index.ts: New tool execution system with parsing, validation, and result formatting for read/list/write/run operations
  • src/tools/permissions.ts: Permission resolution logic with external-directory and doom-loop protection
  • src/tools/prompt.ts: System prompt generation for tool-enabled conversations
  • src/llm/config.ts: Provider resolution and auth header logic supporting OpenAI, Azure, LiteLLM, and custom backends
  • src/llm/providerDefaults.ts: Provider-specific default handling and env variable fallback chains
  • src/llm/input.ts: Extracted prompt-building logic for Responses API compatibility
  • src/services/streamingService.ts: Updated to use the new config resolution and accept a settings parameter
  • src/services/commandService.ts: Registers the new /provider and /tools commands
  • src/index.tsx: Implements the tool queue loop, async @file handling, and permission prompts in the main app logic
  • src/hooks/useStreaming.ts: Adds an onComplete callback for tool detection after streaming
  • src/hooks/useSettings.ts: Adds a Coding Agent section with tools and auto-approve toggles, plus a provider field
  • src/hooks/useInputMode.ts: Passes the keywords field to UI suggestions
  • src/components/SuggestionList.tsx: Enhanced with query highlighting, action hints, and an expanded detail view
  • src/components/StatusLine.tsx: Shows provider, model, and tools status in the footer
  • src/components/MessageList.tsx: Implements multi-code-block rendering with segment parsing
  • src/mentionHandlers.ts: Adds async file reading and inlining for @file mentions with truncation
  • src/storage.ts: Adds LiteLLM env vars and tools configuration to defaults
  • src/sse.ts: Additional delta extraction patterns for broader compatibility
  • src/commandHandlers.ts: Implements the /provider and /tools commands with permission management
  • src/commands.ts: Registers provider and tools in command metadata
  • src/cliHelp.ts: Updates the status display to include provider and tools
  • src/api.ts: Simplified to re-export buildResponsesInput from llm/input
  • tests/streamingService.test.ts: Updated test to include the settings parameter and new env vars
  • docs/ARCHITECTURE.md: Documents the new llm/ and tools/ modules
  • docs/API-INTEGRATION.md: Adds a LiteLLM proxy configuration section
  • docs/AGENTS.md: Documents coding agent tools and permissions
  • README.md: Adds LiteLLM and tools documentation
  • PLANS.md: New execution plan document
  • bridge/run.sh: Adds LITELLM_* to OPENAI_* fallback logic
  • bridge/requirements.txt: Updates Python dependency versions
  • bridge/requirements-dev.txt: Updates the pytest version
  • bridge/README.md: Documents LiteLLM configuration
  • package.json: Bumps @OpenTui and React dependencies
  • bun.lock: Reflects updated dependency versions
  • .env.example: Adds LiteLLM environment variable examples
Comments suppressed due to low confidence (1)

src/index.tsx:442

  • The handleSubmit callback is declared as async but the command execution path (lines 434-443) calls handleExecuteCommand without awaiting it. This could lead to race conditions where the input is cleared before the command completes, or errors from command execution might not be properly caught. Either await the call or ensure proper error handling for the fire-and-forget pattern.
    if (input.startsWith("/")) {
      const parts = input.slice(1).split(" ");
      const cmd = parts[0];
      const args = parts.slice(1).join(" ");
      if (cmd) {
        handleExecuteCommand(cmd, args);
      }
      setInput("");
      return;


function splitIntoSegments(content: string): Segment[] {
const segments: Segment[] = [];
const regex = /```(\\w+)?\\n([\\s\\S]*?)```/g;

Copilot AI Dec 28, 2025


The regex pattern has incorrectly escaped backslashes. In JavaScript regex, \\w should be \w, and \\n should be \n, and \\s should be \s. The double backslashes will match literal backslashes in the string rather than the intended character classes and escape sequences. This will prevent code blocks from being properly detected and parsed.

Suggested change
const regex = /```(\\w+)?\\n([\\s\\S]*?)```/g;
const regex = /```(\w+)?\n([\s\S]*?)```/g;
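The corrected pattern can be verified quickly; the broken variant's doubled backslashes match literal backslash characters rather than the `\w`/`\n`/`\s` escapes, so it never matches a normal fenced block:

````typescript
// Corrected pattern: single backslashes so \w, \n, \s are real escapes.
const fixed = /```(\w+)?\n([\s\S]*?)```/g;

const sample = "intro\n```ts\nconst x = 1;\n```\noutro";
const match = fixed.exec(sample);
// match[1] is the language tag ("ts"), match[2] the block body
console.log(match?.[1], JSON.stringify(match?.[2]));
````

Note that with the `g` flag the regex carries `lastIndex` between calls, so reset it (or recreate the regex) before reusing it on a new string.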

Copilot uses AI. Check for mistakes.
src/index.tsx Outdated
Comment on lines 248 to 250
setPrompt({
type: "confirm",
message: `Allow tool: ${call.tool}?`,
Copy link

Copilot AI Dec 28, 2025


The tool permission prompt message is generic and doesn't show which file path or command is about to be executed. For security and UX reasons, users should see exactly what operation they're approving. The prompt should include details like the path being read/written or the command being executed, especially for sensitive operations.

Suggested change
setPrompt({
type: "confirm",
message: `Allow tool: ${call.tool}?`,
const argsObject = (call as ToolCall).args || {};
const hasArgs = argsObject && Object.keys(argsObject).length > 0;
let argsPreview = hasArgs ? JSON.stringify(argsObject, null, 2) : "no arguments";
const MAX_PREVIEW_LENGTH = 200;
if (argsPreview.length > MAX_PREVIEW_LENGTH) {
argsPreview = argsPreview.slice(0, MAX_PREVIEW_LENGTH) + "…";
}
const permissionMessage =
`Allow tool "${call.tool}" to run in:\n` +
` ${toolOptions.cwd}\n` +
`with arguments:\n` +
` ${argsPreview}\n\n` +
`Proceed?`;
setPrompt({
type: "confirm",
message: permissionMessage,

src/index.tsx Outdated
const historyForApi = [
...(toolSystemMessage ? [toolSystemMessage] : []),
...messagesRef.current.filter(
(m) => !(m.role === "system" && m.content.startsWith("Tool "))
Copilot AI Dec 28, 2025

The filter condition on line 172 checks if the content "startsWith('Tool ')" but the actual system messages created for tool results use the format "Tool {tool_name} (ok)" or "Tool {tool_name} (error)" from formatToolResult. This check will correctly filter these messages. However, the "Skipped tool" messages don't match this pattern and won't be filtered, which could lead to accumulating error messages in the conversation history.

Suggested change
(m) => !(m.role === "system" && m.content.startsWith("Tool "))
(m) =>
!(
m.role === "system" &&
(m.content.startsWith("Tool ") || m.content.startsWith("Skipped tool"))
)

README.md Outdated
Comment on lines 156 to 161

### Coding Agent Tools

When `/tools` is enabled, the assistant can request tool execution using fenced `tool` blocks.
Permissions follow `allow | ask | deny` and can be configured via `/tools perm`.
- `@code function example() { return true; }` will format as a code snippet
Copilot AI Dec 28, 2025

The documentation has a structural issue. Line 161 describing the @code mention is incorrectly placed after the "Coding Agent Tools" section header. This line should be moved up to be with the other mention examples (before line 156) to maintain proper section organization.

Suggested change
### Coding Agent Tools
When `/tools` is enabled, the assistant can request tool execution using fenced `tool` blocks.
Permissions follow `allow | ask | deny` and can be configured via `/tools perm`.
- `@code function example() { return true; }` will format as a code snippet
- `@code function example() { return true; }` will format as a code snippet
### Coding Agent Tools
When `/tools` is enabled, the assistant can request tool execution using fenced `tool` blocks.
Permissions follow `allow | ask | deny` and can be configured via `/tools perm`.

Comment on lines 1 to 63
import type { AppSettings, LlmProvider } from "../types.ts";

const DEFAULT_OPENAI_BASE_URL = "https://api.openai.com/v1";

function normalizeProvider(provider?: string): LlmProvider | undefined {
if (!provider) return undefined;
const normalized = provider.toLowerCase();
if (normalized === "openai" || normalized === "azure" || normalized === "litellm" || normalized === "custom") {
return normalized as LlmProvider;
}
return undefined;
}

function isOpenAIEndpoint(endpoint: string | undefined, envOpenaiBase: string | undefined): boolean {
if (!endpoint) return true;
if (envOpenaiBase && endpoint === envOpenaiBase) return true;
if (endpoint === DEFAULT_OPENAI_BASE_URL) return true;
return endpoint.includes("api.openai.com");
}

export function applyProviderDefaults(settings: AppSettings, provider?: string): AppSettings {
const normalized = normalizeProvider(provider);
if (!normalized) {
return { ...settings, provider };
}

const env = {
openaiBaseUrl: process.env.OPENAI_BASE_URL,
openaiApiKey: process.env.OPENAI_API_KEY,
openaiModel: process.env.OPENAI_MODEL,
litellmBaseUrl: process.env.LITELLM_BASE_URL,
litellmApiKey: process.env.LITELLM_API_KEY,
litellmModel: process.env.LITELLM_MODEL,
};

const next: AppSettings = { ...settings, provider: normalized };

if (normalized === "litellm") {
if (isOpenAIEndpoint(next.endpoint, env.openaiBaseUrl)) {
next.endpoint = env.litellmBaseUrl || undefined;
}
if (!next.model || (env.openaiModel && next.model === env.openaiModel)) {
next.model = env.litellmModel || undefined;
}
if (!next.apiKey || (env.openaiApiKey && next.apiKey === env.openaiApiKey)) {
next.apiKey = env.litellmApiKey || undefined;
}
}

if (normalized === "openai") {
if (!next.endpoint || next.endpoint === env.litellmBaseUrl) {
next.endpoint = env.openaiBaseUrl || DEFAULT_OPENAI_BASE_URL;
}
if (!next.model || (env.litellmModel && next.model === env.litellmModel)) {
next.model = env.openaiModel || next.model;
}
if (!next.apiKey || (env.litellmApiKey && next.apiKey === env.litellmApiKey)) {
next.apiKey = env.openaiApiKey || next.apiKey;
}
}

return next;
}
Copilot AI Dec 28, 2025

The LLM configuration logic in src/llm/config.ts and src/llm/providerDefaults.ts lacks test coverage. These functions handle critical provider resolution and fallback logic, including inferProvider and applyProviderDefaults, which determine how different environment variables interact. Tests should verify the precedence rules and edge cases.

Comment on lines 14 to 18
export function isExternalPath(path: string, cwd: string): boolean {
const resolved = resolve(cwd, path);
const root = resolve(cwd) + ""; // ensure normalized
if (resolved === root) return false;
return !resolved.startsWith(root + "/");
Copilot AI Dec 28, 2025

The external path detection logic has a flaw. On Windows, path separators are backslashes (\), but this code only checks for the forward slash (/), so paths inside the project on Windows may be incorrectly flagged as external. Use path.sep instead of hardcoding "/" to ensure cross-platform compatibility.
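A cross-platform sketch using `path.relative`, which sidesteps separator handling entirely (the POSIX paths in the usage below are illustrative):

```typescript
import { isAbsolute, relative, resolve, sep } from "path";

// A path is external when the relative route from the root escapes upward
// ("..") or resolves to a different absolute location/drive.
function isExternalPath(path: string, cwd: string): boolean {
  const root = resolve(cwd);
  const rel = relative(root, resolve(cwd, path));
  return rel === ".." || rel.startsWith(".." + sep) || isAbsolute(rel);
}
```

Note the sibling-directory case (`/home/user/project` vs `/home/user/project-docs`) is handled correctly, because `relative` produces a `../`-prefixed path rather than relying on a string prefix.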

Comment on lines +114 to +119
case "run_command": {
const command = String(args.command || "").trim();
if (!command) throw new Error("Missing command");
const { stdout, stderr } = await execAsync(command, { cwd, maxBuffer: maxOutputChars * 2 });
const output = [stdout, stderr].filter(Boolean).join("\n");
return { tool: call.tool, ok: true, output: truncate(output || "(no output)", maxOutputChars) };
Copilot AI Dec 28, 2025

The run_command tool executes arbitrary shell commands without any validation or sanitization. This is a security risk as it could allow command injection if user input flows into the command argument. Consider adding validation, a command allowlist, or at minimum, clear warnings in documentation about the security implications of enabling this tool.
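One possible mitigation, sketched as a hypothetical allowlist plus metacharacter check — the binary list and function names are illustrative, not part of the PR:

```typescript
// Hypothetical guard to run before handing a command to execAsync.
const ALLOWED_BINARIES = new Set(["ls", "cat", "git", "bun", "grep"]);

function isCommandAllowed(command: string): boolean {
  // Reject shell metacharacters that could chain or redirect commands.
  if (/[;&|`$><\\]/.test(command)) return false;
  const binary = command.trim().split(/\s+/)[0] ?? "";
  return ALLOWED_BINARIES.has(binary);
}
```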

Comment on lines 160 to 170
try {
const fullPath = resolve(options.cwd, path);
const file = await readFile(fullPath);
const limited = file.length > options.maxFileBytes ? file.subarray(0, options.maxFileBytes) : file;
const text = limited.toString("utf-8");
const suffix = file.length > options.maxFileBytes ? `\n…truncated (${file.length - options.maxFileBytes} bytes)` : "";
const lang = guessLanguage(path);
return `[File: ${path}]\n\`\`\`${lang}\n${text}${suffix}\n\`\`\``;
} catch (e: any) {
return `[File Reference: ${path}]\n\nUnable to read file (${e?.message || e}). Please verify the path.`;
}
Copilot AI Dec 28, 2025

The @file mention handler reads files without validating that the path is within the project directory. This could allow reading arbitrary files on the system if a malicious or compromised model requests sensitive files. Consider applying the same external path validation used in the tools permission system (isExternalPath) to prevent reading files outside the working directory.

Comment on lines +1 to +139
import { readFile, writeFile, readdir, stat, mkdir } from "fs/promises";
import { resolve } from "path";
import { exec } from "child_process";
import { promisify } from "util";

const execAsync = promisify(exec);

export type ToolName = "read_file" | "list_dir" | "write_file" | "run_command";

export interface ToolCall {
tool: ToolName;
args?: Record<string, any>;
}

export interface ToolResult {
tool: ToolName;
ok: boolean;
output: string;
error?: string;
}

export interface ToolExecutionOptions {
cwd: string;
maxFileBytes: number;
maxDirEntries: number;
maxOutputChars: number;
}

const TOOL_BLOCK_REGEX = /```tool\s*([\s\S]*?)```/gi;

export function parseToolCalls(text: string): ToolCall[] {
const calls: ToolCall[] = [];
if (!text) return calls;

let match: RegExpExecArray | null;
while ((match = TOOL_BLOCK_REGEX.exec(text)) !== null) {
const payload = match[1]?.trim();
if (!payload) continue;
try {
const parsed = JSON.parse(payload);
if (Array.isArray(parsed)) {
parsed.forEach((item) => pushToolCall(item, calls));
} else if (parsed && typeof parsed === "object") {
if (Array.isArray((parsed as any).tools)) {
(parsed as any).tools.forEach((item: any) => pushToolCall(item, calls));
} else {
pushToolCall(parsed, calls);
}
}
} catch {
continue;
}
}
return calls;
}

function pushToolCall(raw: any, calls: ToolCall[]) {
if (!raw || typeof raw !== "object") return;
const tool = String(raw.tool || raw.name || "").trim();
if (!tool) return;
if (tool !== "read_file" && tool !== "list_dir" && tool !== "write_file" && tool !== "run_command") return;
calls.push({ tool: tool as ToolName, args: raw.args || raw.parameters || {} });
}

function truncate(value: string, maxChars: number): string {
if (value.length <= maxChars) return value;
return `${value.slice(0, maxChars)}\n…truncated (${value.length - maxChars} chars)`;
}

export async function executeToolCall(call: ToolCall, options: ToolExecutionOptions): Promise<ToolResult> {
const { cwd, maxFileBytes, maxDirEntries, maxOutputChars } = options;
const args = call.args || {};

try {
switch (call.tool) {
case "read_file": {
const path = String(args.path || "").trim();
if (!path) throw new Error("Missing path");
const fullPath = resolve(cwd, path);
const file = await readFile(fullPath);
const limited = file.length > maxFileBytes ? file.subarray(0, maxFileBytes) : file;
const text = limited.toString("utf-8");
const output = file.length > maxFileBytes ? `${text}\n…truncated (${file.length - maxFileBytes} bytes)` : text;
return { tool: call.tool, ok: true, output: truncate(output, maxOutputChars) };
}
case "list_dir": {
const path = String(args.path || ".").trim();
const fullPath = resolve(cwd, path);
const entries = await readdir(fullPath);
const sliced = entries.slice(0, maxDirEntries);
const labeled = await Promise.all(
sliced.map(async (entry) => {
const fullEntry = resolve(fullPath, entry);
try {
const stats = await stat(fullEntry);
return stats.isDirectory() ? `${entry}/` : entry;
} catch {
return entry;
}
})
);
const output = labeled.join("\n") + (entries.length > maxDirEntries ? `\n…truncated (${entries.length - maxDirEntries} more)` : "");
return { tool: call.tool, ok: true, output: truncate(output, maxOutputChars) };
}
case "write_file": {
const path = String(args.path || "").trim();
const content = String(args.content ?? "");
if (!path) throw new Error("Missing path");
const fullPath = resolve(cwd, path);
await mkdir(resolve(fullPath, ".."), { recursive: true });
await writeFile(fullPath, content, "utf-8");
return { tool: call.tool, ok: true, output: `Wrote ${content.length} chars to ${path}` };
}
case "run_command": {
const command = String(args.command || "").trim();
if (!command) throw new Error("Missing command");
const { stdout, stderr } = await execAsync(command, { cwd, maxBuffer: maxOutputChars * 2 });
const output = [stdout, stderr].filter(Boolean).join("\n");
return { tool: call.tool, ok: true, output: truncate(output || "(no output)", maxOutputChars) };
}
default:
throw new Error(`Unsupported tool: ${call.tool}`);
}
} catch (e: any) {
return {
tool: call.tool,
ok: false,
output: "",
error: e?.message || String(e),
};
}
}

export function formatToolResult(result: ToolResult): string {
const status = result.ok ? "ok" : "error";
const header = `Tool ${result.tool} (${status})`;
const body = result.ok ? result.output : result.error || "Unknown error";
return `${header}\n${body}`;
}
Copilot AI Dec 28, 2025

The new tool execution functionality in src/tools/index.ts, src/tools/permissions.ts, and src/tools/prompt.ts lacks test coverage. Given that the repository has comprehensive tests for other services (sse.test.ts, commandService.test.ts, etc.), these new security-sensitive features should also have corresponding test files to verify tool parsing, execution, permission resolution, and error handling.


function splitIntoSegments(content: string): Segment[] {
const segments: Segment[] = [];
const regex = /```(\\w+)?\\n([\\s\\S]*?)```/g;
Copilot AI Dec 28, 2025

Character '\' is repeated in the same character class.

Suggested change
const regex = /```(\\w+)?\\n([\\s\\S]*?)```/g;
const regex = /```(\w+)?\n([\s\S]*?)```/g;

Copilot AI review requested due to automatic review settings December 28, 2025 03:48
Contributor

Copilot AI left a comment

Pull request overview

Copilot reviewed 46 out of 47 changed files in this pull request and generated 21 comments.

Comment on lines 171 to 173
...messagesRef.current.filter(
(m) => !(m.role === "system" && m.content.startsWith("Tool "))
),
Copilot AI Dec 28, 2025

The tool execution logic filters out system messages that start with "Tool " when building history for the API. However, this string-prefix check is fragile and could inadvertently filter out legitimate system messages that happen to start with "Tool ". Consider using a more robust approach like adding a metadata flag to tool-related messages or using a specific message type.
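A sketch of the metadata-flag approach the comment suggests; the `Message` shape here is an assumption, not the app's actual type:

```typescript
interface Message {
  role: "user" | "assistant" | "system";
  content: string;
  // Set when the message was synthesized from a tool run or skip,
  // so filtering no longer depends on a string prefix.
  kind?: "tool-result";
}

function historyForApi(messages: Message[]): Message[] {
  return messages.filter((m) => m.kind !== "tool-result");
}
```

A legitimate system message such as "Tool usage tips for onboarding" survives the flag-based filter but would be dropped by the `startsWith("Tool ")` check.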

Comment on lines +173 to +185
function guessLanguage(path: string): string {
const ext = extname(path).toLowerCase().replace(".", "");
if (!ext) return "text";
if (ext === "ts") return "typescript";
if (ext === "tsx") return "tsx";
if (ext === "js") return "javascript";
if (ext === "jsx") return "jsx";
if (ext === "py") return "python";
if (ext === "json") return "json";
if (ext === "md") return "markdown";
if (ext === "sh") return "bash";
return ext;
}
Copilot AI Dec 28, 2025

The function guessLanguage returns language identifiers but some of the mappings may not align with common syntax highlighting libraries. For example, "jsx" and "tsx" are often aliased to "javascript" and "typescript" respectively in many highlighters. Consider documenting which syntax highlighting library is being targeted or ensuring these identifiers match the expected format.

): LlmProvider {
const url = baseUrl || "";
if (url.includes("azure.com") || url.includes("/openai/")) return "azure";
if (url.includes("litellm") || url.includes("/llm/") || (model && model.includes("/"))) return "litellm";
Copilot AI Dec 28, 2025

The provider inference logic uses model.includes("/") as a heuristic for LiteLLM-style models. However, this could produce false positives if a user sets a custom model name containing a slash for non-LiteLLM purposes. Consider adding a comment explaining this heuristic or making it more robust by checking for specific prefixes like "openai/", "anthropic/", etc.
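A stricter variant of the heuristic, limited to known provider namespaces (the prefix list here is illustrative):

```typescript
const LITELLM_PREFIXES = ["openai/", "anthropic/", "azure/", "gemini/", "mistral/", "vertex_ai/"];

// Only treat slash-separated models as LiteLLM-style when the prefix is a
// recognized provider namespace, avoiding false positives on arbitrary
// custom model names that happen to contain "/".
function looksLikeLiteLLMModel(model: string): boolean {
  const lower = model.toLowerCase();
  return LITELLM_PREFIXES.some((prefix) => lower.startsWith(prefix));
}
```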

Comment on lines 321 to 481
export function handleToolsCommand(
args: string | undefined,
context: CommandContext
): CommandResult {
const { settings, setSettings } = context;
const trimmed = (args || "").trim().toLowerCase();

if (!trimmed) {
const status = settings.tools?.enabled ? "Enabled" : "Disabled";
const auto = settings.tools?.autoApprove ? "Enabled" : "Disabled";
const perms = settings.tools?.permissions || {};
const permSummary = [
`read_file=${perms.read_file || "allow"}`,
`list_dir=${perms.list_dir || "allow"}`,
`write_file=${perms.write_file || "ask"}`,
`run_command=${perms.run_command || "ask"}`,
`external_directory=${perms.external_directory || "ask"}`,
`doom_loop=${perms.doom_loop || "ask"}`,
].join(" ");
return {
systemMessage: {
id: generateUniqueId(),
role: "system",
content: `Tools: ${status}\nAuto-approve: ${auto}\nPermissions: ${permSummary}\nUsage: /tools on|off|auto [on|off]\n /tools perm <tool> <allow|ask|deny>`,
timestamp: new Date(),
},
};
}

const parts = trimmed.split(/\s+/);
const sub = parts[0] || "";

if (sub === "perm" || sub === "permission") {
const rawTool = parts[1];
const rawMode = parts[2];
if (!rawTool || !rawMode) {
return {
systemMessage: {
id: generateUniqueId(),
role: "system",
content: "Usage: /tools perm <tool> <allow|ask|deny>",
timestamp: new Date(),
},
};
}
const toolMap: Record<string, string> = {
read: "read_file",
read_file: "read_file",
list: "list_dir",
list_dir: "list_dir",
write: "write_file",
write_file: "write_file",
run: "run_command",
run_command: "run_command",
external: "external_directory",
external_directory: "external_directory",
doom: "doom_loop",
doom_loop: "doom_loop",
};
const toolKey = toolMap[rawTool];
if (!toolKey) {
return {
systemMessage: {
id: generateUniqueId(),
role: "system",
content: `Unknown tool: ${rawTool}`,
timestamp: new Date(),
},
};
}
if (!["allow", "ask", "deny"].includes(rawMode)) {
return {
systemMessage: {
id: generateUniqueId(),
role: "system",
content: "Mode must be allow, ask, or deny.",
timestamp: new Date(),
},
};
}
setSettings((prev) => ({
...prev,
tools: {
...(prev.tools || {}),
permissions: {
...(prev.tools?.permissions || {}),
[toolKey]: rawMode,
},
},
}));
return {
systemMessage: {
id: generateUniqueId(),
role: "system",
content: `Permission set: ${toolKey}=${rawMode}`,
timestamp: new Date(),
},
};
}

if (sub === "on" || sub === "enable" || sub === "enabled") {
setSettings((prev) => ({
...prev,
tools: { ...(prev.tools || {}), enabled: true },
}));
return {
systemMessage: {
id: generateUniqueId(),
role: "system",
content: "Tools enabled.",
timestamp: new Date(),
},
};
}

if (sub === "off" || sub === "disable" || sub === "disabled") {
setSettings((prev) => ({
...prev,
tools: { ...(prev.tools || {}), enabled: false },
}));
return {
systemMessage: {
id: generateUniqueId(),
role: "system",
content: "Tools disabled.",
timestamp: new Date(),
},
};
}

if (sub === "auto") {
const desired = parts[1] || "toggle";
const nextValue =
desired === "on"
? true
: desired === "off"
? false
: !(settings.tools?.autoApprove ?? false);
setSettings((prev) => ({
...prev,
tools: { ...(prev.tools || {}), autoApprove: nextValue },
}));
return {
systemMessage: {
id: generateUniqueId(),
role: "system",
content: `Auto-approve ${nextValue ? "enabled" : "disabled"}.`,
timestamp: new Date(),
},
};
}

return {
systemMessage: {
id: generateUniqueId(),
role: "system",
content: "Usage: /tools on|off|auto [on|off]",
timestamp: new Date(),
},
};
}
Copilot AI Dec 28, 2025

The handleToolsCommand function is very long (160+ lines) and handles multiple subcommands. Consider refactoring this into separate handler functions like handleToolsPermissionSubcommand, handleToolsToggleSubcommand, etc., to improve maintainability and testability.
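One way to slice it, sketched as a dispatch table keyed by subcommand — the handler bodies are placeholders, not the real logic:

```typescript
type SubcommandHandler = (parts: string[]) => string;

// Each subcommand becomes a small, independently testable handler.
const toolsSubcommands: Record<string, SubcommandHandler> = {
  on: () => "Tools enabled.",
  off: () => "Tools disabled.",
  perm: (parts) => `Permission set: ${parts[1]}=${parts[2]}`,
};

function dispatchToolsCommand(args: string): string {
  const parts = args.trim().toLowerCase().split(/\s+/);
  const handler = toolsSubcommands[parts[0] ?? ""];
  return handler ? handler(parts) : "Usage: /tools on|off|auto [on|off]";
}
```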

Comment on lines 1 to 88
import type { AppSettings, LlmProvider } from "../types.ts";

const DEFAULT_OPENAI_BASE_URL = "https://api.openai.com/v1";

function normalizeProvider(provider?: string): LlmProvider | undefined {
if (!provider) return undefined;
const normalized = provider.toLowerCase();
if (normalized === "openai" || normalized === "azure" || normalized === "litellm" || normalized === "custom") {
return normalized as LlmProvider;
}
return undefined;
}

function isOpenAIEndpoint(endpoint: string | undefined, envOpenaiBase: string | undefined): boolean {
if (!endpoint) return true;
if (envOpenaiBase && endpoint === envOpenaiBase) return true;
if (endpoint === DEFAULT_OPENAI_BASE_URL) return true;
return endpoint.includes("api.openai.com");
}

function looksLikeOpenAIModel(model: string | undefined, envOpenaiModel: string | undefined): boolean {
if (!model) return true;
if (envOpenaiModel && model === envOpenaiModel) return true;
if (model.includes("/")) return false;
const lower = model.toLowerCase();
return (
lower.startsWith("gpt-") ||
lower.startsWith("o1") ||
lower.startsWith("o3") ||
lower.startsWith("gpt4") ||
lower.startsWith("gpt3")
);
}

export function applyProviderDefaults(settings: AppSettings, provider?: string): AppSettings {
const normalized = normalizeProvider(provider);
if (!normalized) {
return { ...settings, provider };
}

const env = {
openaiBaseUrl: process.env.OPENAI_BASE_URL,
openaiApiKey: process.env.OPENAI_API_KEY,
openaiModel: process.env.OPENAI_MODEL,
litellmBaseUrl: process.env.LITELLM_BASE_URL,
litellmApiKey: process.env.LITELLM_API_KEY,
litellmModel: process.env.LITELLM_MODEL,
};

const next: AppSettings = { ...settings, provider: normalized };

if (normalized === "litellm") {
if (isOpenAIEndpoint(next.endpoint, env.openaiBaseUrl)) {
next.endpoint = env.litellmBaseUrl || undefined;
} else if (!env.litellmBaseUrl && next.endpoint) {
// Leave custom endpoint, but clear OpenAI defaults when LiteLLM envs are missing
next.endpoint = isOpenAIEndpoint(next.endpoint, env.openaiBaseUrl) ? undefined : next.endpoint;
}
if (env.litellmModel) {
if (looksLikeOpenAIModel(next.model, env.openaiModel)) {
next.model = env.litellmModel;
}
} else if (looksLikeOpenAIModel(next.model, env.openaiModel)) {
next.model = undefined;
}
if (env.litellmApiKey) {
if (!next.apiKey || (env.openaiApiKey && next.apiKey === env.openaiApiKey)) {
next.apiKey = env.litellmApiKey;
}
} else if (env.openaiApiKey && next.apiKey === env.openaiApiKey) {
next.apiKey = undefined;
}
}

if (normalized === "openai") {
if (!next.endpoint || next.endpoint === env.litellmBaseUrl) {
next.endpoint = env.openaiBaseUrl || DEFAULT_OPENAI_BASE_URL;
}
if (!next.model || (env.litellmModel && next.model === env.litellmModel)) {
next.model = env.openaiModel || next.model;
}
if (!next.apiKey || (env.litellmApiKey && next.apiKey === env.litellmApiKey)) {
next.apiKey = env.openaiApiKey || next.apiKey;
}
}

return next;
}
Copilot AI Dec 28, 2025

Test coverage is missing for the new LiteLLM provider configuration logic. The PR adds src/llm/config.ts and src/llm/providerDefaults.ts with complex provider resolution and default-handling logic, but no tests are included. Consider adding tests to verify provider inference, default resolution, and edge cases like missing environment variables.

const content = String(args.content ?? "");
if (!path) throw new Error("Missing path");
const fullPath = resolve(cwd, path);
await mkdir(resolve(fullPath, ".."), { recursive: true });
Copilot AI Dec 28, 2025

The write_file tool uses resolve(fullPath, "..") to get the parent directory, but this is fragile and could behave unexpectedly if fullPath ends with a slash or has other path quirks. Consider using path.dirname(fullPath) instead for more reliable parent directory resolution.
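For comparison, `path.dirname` expresses the intent directly:

```typescript
import { dirname, resolve } from "path";

const fullPath = resolve("/tmp/project", "nested/dir/file.txt");
// dirname returns the parent directory without the resolve(fullPath, "..") hop.
const parent = dirname(fullPath);
```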

Comment on lines +136 to +147
if (key.name === "left" || key.name === "right") {
if (ranges.length > 0) {
const delta = key.name === "left" ? -1 : 1;
const current = activeSectionIndex >= 0 ? activeSectionIndex : 0;
const next = (current + delta + ranges.length) % ranges.length;
const target = ranges[next];
if (target) {
context.setSettingsFocusIndex(target.start);
return;
}
}
}
Copilot AI Dec 28, 2025

The left/right arrow navigation logic for switching between settings sections uses complex index arithmetic that could be error-prone. The logic calculates ranges and wraps around, but if ranges is empty or if there's an off-by-one error, it could cause unexpected behavior. Consider adding a guard condition to handle edge cases or adding comments explaining the arithmetic.

Comment on lines +243 to +244
if (permission.mode === "allow") {
void runTool();
Copilot AI Dec 28, 2025

The function uses void runTool() to explicitly ignore the Promise. While this is correct, adding a comment explaining why the promise is intentionally not awaited would help future maintainers understand that this is deliberate and not an oversight.

src/storage.ts Outdated
Comment on lines 38 to 40
model: PREFER_LITELLM ? (LITELLM_MODEL || OPENAI_MODEL) : (OPENAI_MODEL || LITELLM_MODEL),
endpoint: PREFER_LITELLM ? (LITELLM_BASE_URL || OPENAI_BASE_URL) : (OPENAI_BASE_URL || LITELLM_BASE_URL),
apiKey: PREFER_LITELLM ? (LITELLM_API_KEY || OPENAI_API_KEY) : (OPENAI_API_KEY || LITELLM_API_KEY),
Copilot AI Dec 28, 2025

The complex ternary expressions for model, endpoint, and apiKey are difficult to read and maintain. Consider extracting this logic into a helper function like resolveLlmDefaults() that returns an object with these three properties. This would improve readability and testability.
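A sketch of the suggested helper; the env variable names mirror those already used in storage.ts:

```typescript
interface LlmDefaults {
  model?: string;
  endpoint?: string;
  apiKey?: string;
}

// Centralizes the "prefer LiteLLM, fall back to OpenAI" precedence (or vice versa).
function resolveLlmDefaults(
  preferLiteLLM: boolean,
  env: Record<string, string | undefined>
): LlmDefaults {
  const pick = (litellm?: string, openai?: string) =>
    preferLiteLLM ? litellm || openai : openai || litellm;
  return {
    model: pick(env.LITELLM_MODEL, env.OPENAI_MODEL),
    endpoint: pick(env.LITELLM_BASE_URL, env.OPENAI_BASE_URL),
    apiKey: pick(env.LITELLM_API_KEY, env.OPENAI_API_KEY),
  };
}
```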


Copilot AI review requested due to automatic review settings December 28, 2025 04:19
Contributor

Copilot AI left a comment

Pull request overview

Copilot reviewed 48 out of 49 changed files in this pull request and generated 8 comments.

Comment on lines +88 to +105
function buildApiKeySelectOptions(): PromptSelectOption[] {
const options: PromptSelectOption[] = [];
if (process.env.LITELLM_API_KEY) {
options.push({
name: "Use LITELLM_API_KEY",
description: "Apply the LiteLLM API key from env",
value: process.env.LITELLM_API_KEY,
});
}
if (process.env.OPENAI_API_KEY) {
options.push({
name: "Use OPENAI_API_KEY",
description: "Apply the OpenAI API key from env",
value: process.env.OPENAI_API_KEY,
});
}
options.push({ name: "Custom…", description: "Type a custom API key", value: CUSTOM_SELECT_VALUE });
return dedupeSelectOptions(options);
Copilot AI Dec 28, 2025

The function buildApiKeySelectOptions exposes actual API keys in plain text in the select options. While this might be intentional for user convenience, it creates a potential security concern where API keys could be visible in logs, screenshots, or UI traces. Consider either masking these keys (showing only the last 4 characters) or adding a warning in the description about the security implications of displaying API keys.
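A minimal masking helper along the lines suggested (the exact format is an assumption):

```typescript
// Show only the last four characters of a key in UI surfaces.
function maskApiKey(key: string): string {
  if (key.length <= 4) return "****";
  return "***" + key.slice(-4);
}
```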

Comment on lines +114 to +120
case "run_command": {
const command = String(args.command || "").trim();
if (!command) throw new Error("Missing command");
const { stdout, stderr } = await execAsync(command, { cwd, maxBuffer: maxOutputChars * 2 });
const output = [stdout, stderr].filter(Boolean).join("\n");
return { tool: call.tool, ok: true, output: truncate(output || "(no output)", maxOutputChars) };
}
Copilot AI Dec 28, 2025

The run_command tool executes arbitrary shell commands without any validation or sanitization. While there's a permission system in place, once approved, any command can run. Consider adding command validation (e.g., disallowing certain dangerous commands like rm -rf /, shell escaping validation) or implementing a command whitelist/blacklist pattern to prevent accidental or malicious execution of destructive commands.

Comment on lines +28 to +29
lower.startsWith("o1") ||
lower.startsWith("o3") ||
Copilot AI Dec 28, 2025

The model detection logic at line 28-29 will match models like "o10", "o100", "o3-preview-beta" unexpectedly. The pattern lower.startsWith("o1") or lower.startsWith("o3") is too broad. Consider using more specific patterns like checking for exact model names (e.g., "o1-preview", "o1-mini") or a pattern that ensures word boundaries to avoid false positives.
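A tighter pattern that requires a boundary after the family name, so "o10" or "o3000" are not misclassified (which model families to cover is an assumption):

```typescript
// Matches "o1", "o3", and dashed variants like "o1-mini" or "o3-preview",
// but not "o10", "o100", or "o3000".
function isOpenAIReasoningModel(model: string): boolean {
  return /^o[13](-|$)/.test(model.toLowerCase());
}
```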

Comment on lines +74 to +80
const resolvedBaseUrl = baseUrl;

if (!resolvedBaseUrl || !apiKey || !model) return null;

return {
provider,
baseUrl: resolvedBaseUrl,
Copilot AI Dec 28, 2025

Line 74 assigns baseUrl to resolvedBaseUrl but this assignment is redundant. The variable resolvedBaseUrl can be removed, and line 80 should directly use baseUrl instead, simplifying the code.

Comment on lines +361 to +366
const masked =
options?.shouldMaskValue && typeof displayValue === "string"
? displayValue
? "***" + displayValue.slice(-4)
: "Not set"
: displayValue;
Copilot AI Dec 28, 2025

The initial value of masked is unused, since it is always overwritten.

…g sanitization

Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
Signed-off-by: Zachary BENSALEM <zachary@qredence.ai>
Copilot AI review requested due to automatic review settings December 28, 2025 06:45
@Zochory Zochory self-assigned this Dec 28, 2025
Contributor

Copilot AI left a comment

Pull request overview

Copilot reviewed 50 out of 51 changed files in this pull request and generated 7 comments.

Comment on lines 444 to 446

const msg = `${promptMessage}\nCurrent: ${masked}\nUsage: /${settingKey} <value>`;
return { systemMessage: { id: generateUniqueId(), role: "system", content: msg, timestamp: new Date() } };
Copilot AI Dec 28, 2025

Lines 445-446 are unreachable code. The function always returns at line 439 or 443, so the code creating and returning a system message on lines 445-446 will never execute.

This appears to be dead code left over from refactoring. It should be removed to avoid confusion.

Suggested change
const msg = `${promptMessage}\nCurrent: ${masked}\nUsage: /${settingKey} <value>`;
return { systemMessage: { id: generateUniqueId(), role: "system", content: msg, timestamp: new Date() } };

Comment on lines 14 to 19
export function isExternalPath(path: string, cwd: string): boolean {
const resolved = resolve(cwd, path);
const root = resolve(cwd) + ""; // ensure normalized
if (resolved === root) return false;
return !resolved.startsWith(root + "/");
}
Copilot AI Dec 28, 2025

The external path detection has a logical issue. On line 18, the check !resolved.startsWith(root + "/") will incorrectly flag paths as external when they should be internal.

For example, if cwd is /home/user/project and path is /home/user/project-docs/file.txt, the resolved path starts with /home/user/project but is actually outside the project directory. The current logic would incorrectly allow this as internal.

The issue is that line 16 adds an empty string to root (which does nothing), and line 18 checks if the resolved path starts with root + "/". However, if resolved === root, we return false early (line 17), but we should also handle the case where resolved is actually a parent or sibling of root.

Consider using a more robust check that normalizes both paths and ensures resolved is actually a child of root, not just a string prefix match.
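One robust form of that check uses path.relative instead of a string prefix, so siblings like /home/user/project-docs are correctly treated as external. A sketch:

```typescript
import { resolve, relative, isAbsolute } from "node:path";

// A path is internal only if it is the root itself or a true descendant of it.
function isExternalPath(path: string, cwd: string): boolean {
  const root = resolve(cwd);
  const resolved = resolve(root, path);
  const rel = relative(root, resolved);
  if (rel === "") return false; // the root directory itself
  return rel.startsWith("..") || isAbsolute(rel);
}
```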

Comment on lines 57 to 58
if (url.includes("litellm") || url.includes("/llm/") || (model && model.includes("/"))) return "litellm";
return "litellm";
Copilot AI Dec 28, 2025

The inferProvider function has redundant logic. Lines 57-58 both return "litellm", meaning the function will always default to "litellm" regardless of the URL or model.

The conditional check on line 57 (if (url.includes("litellm") || url.includes("/llm/") || (model && model.includes("/")))) is unnecessary since line 58 will execute regardless of whether the condition is true or false.

Consider clarifying the intent: if the default should always be "litellm", the entire if statement can be removed. If there should be different logic for detecting non-LiteLLM providers (like detecting actual OpenAI endpoints), that logic needs to be added before the default return.

Comment on lines 160 to 170
try {
const fullPath = resolve(options.cwd, path);
const file = await readFile(fullPath);
const limited = file.length > options.maxFileBytes ? file.subarray(0, options.maxFileBytes) : file;
const text = limited.toString("utf-8");
const suffix = file.length > options.maxFileBytes ? `\n…truncated (${file.length - options.maxFileBytes} bytes)` : "";
const lang = guessLanguage(path);
return `[File: ${path}]\n\`\`\`${lang}\n${text}${suffix}\n\`\`\``;
} catch (e: any) {
return `[File Reference: ${path}]\n\nUnable to read file (${e?.message || e}). Please verify the path.`;
}
Copilot AI Dec 28, 2025

The @file mention handler reads arbitrary files from the filesystem without proper path validation or security checks. While it uses resolve(options.cwd, path), there's no validation to prevent path traversal attacks using sequences like ../../../etc/passwd.

This creates a security risk where:

  1. A malicious user could craft prompts with @file ../../../sensitive-file to read files outside the project directory
  2. An LLM could be manipulated through prompt injection to request sensitive files

Consider adding:

  1. Path validation to ensure the resolved path stays within the project directory (similar to the tool permission system)
  2. A whitelist of allowed file extensions or patterns
  3. Checks against reading sensitive system files
  4. Integration with the external_directory permission check from the tools system
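An extension allowlist (suggestion 2) could be as small as the sketch below; the set of extensions is purely illustrative:

```typescript
// Hypothetical allowlist for @file inlining; extend per project needs.
const ALLOWED_EXTENSIONS = new Set([".ts", ".tsx", ".js", ".json", ".md", ".txt", ".py"]);

function isAllowedFileType(path: string): boolean {
  const dot = path.lastIndexOf(".");
  if (dot === -1) return false; // extensionless files (e.g. /etc/passwd) refused
  return ALLOWED_EXTENSIONS.has(path.slice(dot).toLowerCase());
}
```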

src/index.tsx Outdated
Comment on lines 249 to 251
setPrompt({
type: "confirm",
message: `Allow tool: ${call.tool}?`,
Copilot AI Dec 28, 2025

The confirmation prompt for tool execution doesn't display the tool arguments, making it difficult for users to make informed decisions about whether to approve a tool call. The prompt only shows "Allow tool: read_file?" without showing which file will be read or written.

Consider enhancing the confirmation message to include relevant tool arguments, such as:

  • For read_file/write_file: show the path
  • For run_command: show the command
  • For list_dir: show the directory

This would help users make better security decisions, especially for potentially dangerous operations.
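A minimal sketch of such a message builder (function name is illustrative):

```typescript
// Include a truncated JSON preview of the arguments in the confirmation text.
function formatToolPrompt(tool: string, args: Record<string, unknown>): string {
  const preview = Object.keys(args).length ? JSON.stringify(args) : "no arguments";
  const short = preview.length > 120 ? preview.slice(0, 120) + "…" : preview;
  return `Allow tool: ${tool}?\nArgs: ${short}`;
}
```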

}
): CommandResult {
const { settings, setSettings } = context;
const { settings, setSettings, setMessages, setPrompt } = context;
Copilot AI Dec 28, 2025

Unused variable setSettings.

Suggested change
const { settings, setSettings, setMessages, setPrompt } = context;
const { settings, setMessages, setPrompt } = context;

Comment on lines +372 to +377
const masked =
options?.shouldMaskValue && typeof displayValue === "string"
? displayValue
? "***" + displayValue.slice(-4)
: "Not set"
: displayValue;
Copilot AI Dec 28, 2025

The initial value of masked is unused, since it is always overwritten.

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Signed-off-by: Zachary BENSALEM <zachary@qredence.ai>
Copilot AI review requested due to automatic review settings December 28, 2025 07:58
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Signed-off-by: Zachary BENSALEM <zachary@qredence.ai>
Contributor

Copilot AI left a comment

Pull request overview

Copilot reviewed 55 out of 56 changed files in this pull request and generated 20 comments.

Comments suppressed due to low confidence (1)

src/commandHandlers.ts:261

  • The /help command output references a "settings" section but doesn't include information about the new /tools command and its sub-commands (on/off/auto/perm). Consider adding a tools section to the help output to document the new functionality.
      },
    };
  }
  return {
    systemMessage: {
      id: generateUniqueId(),
      role: "system",
      content: "To clear, run: /clear confirm",
      timestamp: new Date(),
    },
  };
}

export function handleHelpCommand(): CommandResult {
  const systemMsg: Message = {
    id: generateUniqueId(),
    role: "system",
    content: `Help
general
  /help                Show help
  /clear [confirm]     Clear messages
  /status              Show status
  /theme               Toggle theme
  /terminal-setup      Configure terminal keys
settings
  /settings            Open settings panel
  /keybindings [...]   View or edit keybindings
ai
  /provider <name>     Set provider (openai/azure/litellm/custom)
  /model <name>        Set model
  /endpoint <url>      Set API base URL
  /api-key <key>       Set API key
  /tools [args]        Toggle tool execution (on/off/auto, perm ...)
  /af-bridge <url>     Set Agent Framework bridge
  /af-model <name>     Set Agent Framework model
foundry
  /login               Login to Azure
  /foundry-endpoint    Set Foundry Project Endpoint
  /agent               List and select Foundry Agent
workflow
  /mode <standard|workflow>  Switch mode
  /workflow            Workflow controls

Comment on lines +170 to +181
const historyForApi = [
...(toolSystemMessage ? [toolSystemMessage] : []),
...messagesRef.current.filter(
(m) =>
!(
m.role === "system" &&
(m.content.startsWith("Tool ") || m.content.startsWith("Skipped tool"))
)
),
toolResultsMessage,
continueMessage,
];
Copilot AI Dec 28, 2025

The continueWithToolResults function filters out system messages that start with "Tool " or "Skipped tool" on lines 175-177, but doesn't filter the toolSystemMessage or toolResultsMessage that it adds to the history. This could lead to duplicate system prompts accumulating across multiple tool loops.

Comment on lines +454 to +458
}

openInputPrompt();
return { shouldReturn: true };

Copilot AI Dec 28, 2025

The handleSettingCommand function has complex branching logic with multiple return paths. When selectOptions are provided, it opens a prompt and returns early (line 454). When no options are provided, it calls openInputPrompt() and returns (line 457), but then has unreachable code after. Consider restructuring to make the flow clearer and remove dead code.

Suggested change
}
openInputPrompt();
return { shouldReturn: true };
} else {
openInputPrompt();
return { shouldReturn: true };
}

Comment on lines +500 to 506
return;
}
ctx.setSettings((prev) => applyProviderDefaults(prev, value));
},
}
);
}
Copilot AI Dec 28, 2025

The AUTO_SELECT_VALUE constant is defined but only used in the /provider command. The "Auto (detect)" option sets the provider to undefined, which may not be the intended behavior. Consider whether this should clear the provider field or set it to an explicit "auto" string value.

Comment on lines +15 to +32
test("executeToolCall writes and reads files", async () => {
const dir = mkdtempSync(join(tmpdir(), "qlaw-tools-"));
const writeResult = await executeToolCall(
{ tool: "write_file", args: { path: "note.txt", content: "hello" } },
{ cwd: dir, maxFileBytes: 10_000, maxDirEntries: 10, maxOutputChars: 10_000 }
);
expect(writeResult.ok).toBe(true);

const readResult = await executeToolCall(
{ tool: "read_file", args: { path: "note.txt" } },
{ cwd: dir, maxFileBytes: 10_000, maxDirEntries: 10, maxOutputChars: 10_000 }
);
expect(readResult.ok).toBe(true);
expect(readResult.output).toContain("hello");

const raw = readFileSync(join(dir, "note.txt"), "utf-8");
expect(raw).toBe("hello");
});
Copilot AI Dec 28, 2025

The maxFileBytes parameter is set to 10,000 but the test writes "hello" (5 bytes). Consider adding a test case that verifies truncation behavior when a file exceeds the maxFileBytes limit.
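The truncation behavior under test could be exercised directly. This sketch isolates the logic (the function is an illustrative extraction, not the project's export):

```typescript
// Mirrors the read_file truncation: cap content at maxFileBytes and
// report how many bytes were dropped.
function truncateFile(file: Buffer, maxFileBytes: number): { text: string; suffix: string } {
  const limited = file.length > maxFileBytes ? file.subarray(0, maxFileBytes) : file;
  const suffix =
    file.length > maxFileBytes
      ? `\n…truncated (${file.length - maxFileBytes} bytes)`
      : "";
  return { text: limited.toString("utf-8"), suffix };
}
```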

Comment on lines +49 to +63

if (isDoomLoop) {
const doomMode = getPermission(settings, "doom_loop");
if (doomMode === "deny") return { mode: "deny", reason: "doom-loop" };
if (doomMode === "ask") mode = "ask";
}

if (mode === "ask" && settings.tools?.autoApprove) {
if (tool === "read_file" || tool === "list_dir") {
mode = "allow";
}
}

return { mode };
}
Copilot AI Dec 28, 2025

The permission mode "doom_loop" is a great safety feature, but consider adding test coverage for the doom loop detection logic (when the same tool is called 3+ times with identical arguments).

Comment on lines +50 to +59
function inferProvider(baseUrl: string | undefined, model: string | undefined): LlmProvider {
if (isAzureHost(baseUrl)) return "azure";
const host = getHost(baseUrl) || "";
const path = getPathname(baseUrl);
if (host.includes("litellm") || path.includes("/llm/") || (model && model.includes("/"))) {
// Model slash heuristic covers LiteLLM-style prefixes (openai/, anthropic/, etc.).
return "litellm";
}
return "litellm";
}
Copilot AI Dec 28, 2025

The function inferProvider always returns "litellm" as the default on line 58, even when the URL doesn't contain litellm-specific patterns. This means any unknown provider will be treated as litellm. Consider returning "openai" as the fallback for standard OpenAI-compatible endpoints, or adding a "custom" provider type for truly unknown endpoints.
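A sketch of a fallback chain that actually distinguishes providers; the heuristics here are assumptions, not the project's final logic:

```typescript
type LlmProvider = "openai" | "azure" | "litellm" | "custom";

// Illustrative: detect Azure and LiteLLM first, recognize the official
// OpenAI host, and fall back to "custom" for unknown endpoints.
function inferProvider(baseUrl: string | undefined, model: string | undefined): LlmProvider {
  const url = (baseUrl || "").toLowerCase();
  if (url.includes("azure")) return "azure";
  if (url.includes("litellm") || url.includes("/llm/") || (model && model.includes("/"))) {
    return "litellm"; // slash heuristic covers LiteLLM prefixes like openai/gpt-4o
  }
  if (url.includes("api.openai.com")) return "openai";
  return "custom";
}
```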

Comment on lines +153 to +187
{
title: "Providers",
items: [
{
id: "useLitellmEnv",
label: "Apply LiteLLM env defaults",
value: "Use LITELLM_*",
description: "Sets endpoint/model/key from LITELLM_* and clears OpenAI defaults",
type: "info" as const,
onActivate: () => setSettings((prev) => applyProviderDefaults(prev, "litellm")),
},
{
id: "useOpenaiEnv",
label: "Apply OpenAI env defaults",
value: "Use OPENAI_*",
description: "Sets endpoint/model/key from OPENAI_* and clears LiteLLM defaults",
type: "info" as const,
onActivate: () => setSettings((prev) => applyProviderDefaults(prev, "openai")),
},
{
id: "clearOverrides",
label: "Clear API overrides",
value: "Reset",
description: "Clears endpoint/model/key so env defaults apply on next launch",
type: "info" as const,
onActivate: () =>
setSettings((prev) => ({
...prev,
endpoint: undefined,
model: undefined,
apiKey: undefined,
})),
},
],
},
Copilot AI Dec 28, 2025

settingsSections now includes a new "Providers" section whose items mutate settings. However, these items are declared with type: "info", which suggests they're informational, yet they perform actions when activated. Consider using a different type name like "action" or "button" to better reflect their behavior.

maxOutputChars: number;
}

const TOOL_BLOCK_REGEX = /```tool\s*([\s\S]*?)```/gi;
Copilot AI Dec 28, 2025

The regex pattern /```tool\s*([\s\S]*?)```/gi will match tool blocks anywhere in the text, including within code blocks or other contexts where they shouldn't be treated as tool calls. Consider making the regex more strict to only match tool blocks that appear on their own lines or adding validation to ensure they're not within other fenced code blocks.
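Anchoring the fences to line starts is one way to ignore tool fences that are indented or embedded mid-line. A sketch (the extraction helper is illustrative):

```typescript
// The m flag makes ^ and $ match per line, so both fences must sit at
// column 0; indented fences inside other code blocks are skipped.
const TOOL_BLOCK_REGEX = /^```tool\s*\n([\s\S]*?)^```\s*$/gim;

function extractToolBlocks(text: string): string[] {
  const blocks: string[] = [];
  for (const match of text.matchAll(TOOL_BLOCK_REGEX)) {
    blocks.push(match[1].trim());
  }
  return blocks;
}
```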

Comment on lines +57 to +63
function pushToolCall(raw: any, calls: ToolCall[]) {
if (!raw || typeof raw !== "object") return;
const tool = String(raw.tool || raw.name || "").trim();
if (!tool) return;
if (tool !== "read_file" && tool !== "list_dir" && tool !== "write_file" && tool !== "run_command") return;
calls.push({ tool: tool as ToolName, args: raw.args || raw.parameters || {} });
}
Copilot AI Dec 28, 2025

The pushToolCall function silently ignores tool names that don't match the four supported tools. Consider logging or returning a validation error when an unsupported tool name is encountered, so users understand why their tool call was ignored.
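One way to surface the failure is to return a reason string instead of dropping the call silently. This return shape is an assumption, not the project's signature:

```typescript
type ToolName = "read_file" | "list_dir" | "write_file" | "run_command";
interface ToolCall { tool: ToolName; args: Record<string, unknown>; }

const SUPPORTED_TOOLS = new Set(["read_file", "list_dir", "write_file", "run_command"]);

// Returns null on success, or a human-readable reason the call was rejected.
function pushToolCall(raw: unknown, calls: ToolCall[]): string | null {
  if (!raw || typeof raw !== "object") return "tool call is not an object";
  const record = raw as Record<string, unknown>;
  const tool = String(record.tool ?? record.name ?? "").trim();
  if (!tool) return "tool call is missing a tool name";
  if (!SUPPORTED_TOOLS.has(tool)) return `unsupported tool: ${tool}`;
  calls.push({
    tool: tool as ToolName,
    args: (record.args ?? record.parameters ?? {}) as Record<string, unknown>,
  });
  return null;
}
```

The caller can then echo the reason back as a system message so users see why a call was ignored.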

Comment on lines +386 to +391
const masked =
options?.shouldMaskValue && typeof displayValue === "string"
? displayValue
? "***" + displayValue.slice(-4)
: "Not set"
: displayValue;
Copilot AI Dec 28, 2025

The initial value of masked is unused, since it is always overwritten.

Suggested change
const masked =
options?.shouldMaskValue && typeof displayValue === "string"
? displayValue
? "***" + displayValue.slice(-4)
: "Not set"
: displayValue;

…tion or class'

Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com>
Signed-off-by: Zachary BENSALEM <zachary@qredence.ai>
Copilot AI review requested due to automatic review settings December 28, 2025 08:29
@Zochory
Member Author

Zochory commented Dec 28, 2025

@copilot open a new pull request to apply changes based on the comments in this thread

Contributor

Copilot AI commented Dec 28, 2025

@Zochory I've opened a new pull request, #18, to work on those changes. Once the pull request is ready, I'll request review from you.

@Zochory
Member Author

Zochory commented Dec 28, 2025

@copilot open a new pull request to apply changes based on the comments in this thread

Contributor

Copilot AI commented Dec 28, 2025

@Zochory I've opened a new pull request, #19, to work on those changes. Once the pull request is ready, I'll request review from you.

Contributor

Copilot AI left a comment

Pull request overview

Copilot reviewed 55 out of 56 changed files in this pull request and generated 22 comments.

const argsObject = call.args || {};
const hasArgs = Object.keys(argsObject).length > 0;
let argsPreview = hasArgs ? JSON.stringify(argsObject, null, 2) : "no arguments";
const MAX_PREVIEW_LENGTH = 200;
Copilot AI Dec 28, 2025

The MAX_PREVIEW_LENGTH constant is defined inline at line 256. For consistency with other configurable limits in the codebase (like maxFileBytes, maxDirEntries), consider moving this to a constant at the top of the file or making it part of the tool settings.

Comment on lines +23 to +36
const TOOL_PERMISSION_MAP: Record<string, string> = {
read: "read_file",
read_file: "read_file",
list: "list_dir",
list_dir: "list_dir",
write: "write_file",
write_file: "write_file",
run: "run_command",
run_command: "run_command",
external: "external_directory",
external_directory: "external_directory",
doom: "doom_loop",
doom_loop: "doom_loop",
};
Copilot AI Dec 28, 2025

The TOOL_PERMISSION_MAP includes multiple keys mapping to the same values (e.g., both "read" and "read_file" map to "read_file"). While this provides flexibility for user input, it's not clear from the code which aliases are canonical. Consider documenting the supported aliases in the command help text or adding a comment explaining why both formats are supported.

Comment on lines +231 to +233
const signature = JSON.stringify({ tool: call.tool, args: call.args || {} });
const count = (toolSignatureCountsRef.current[signature] || 0) + 1;
toolSignatureCountsRef.current[signature] = count;
Copilot AI Dec 28, 2025

The toolSignatureCountsRef is never reset except when a new user message is submitted. This means that if a tool is called 3+ times across different turns in the same session, it will be flagged as a doom loop even if the calls are legitimate. Consider adding a decay mechanism or resetting counts periodically to avoid false positives in long-running sessions.
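A sliding-window tracker is one decay mechanism: only calls within the last N milliseconds count toward the loop threshold. The window size and class shape are illustrative:

```typescript
// Count identical tool signatures inside a time window so stale calls
// decay instead of accumulating across an entire session.
const DOOM_LOOP_WINDOW_MS = 60_000;

class ToolSignatureTracker {
  timestamps = new Map<string, number[]>();

  // Returns how many times this exact signature occurred in the window.
  record(tool: string, args: unknown, now = Date.now()): number {
    const signature = JSON.stringify({ tool, args });
    const recent = (this.timestamps.get(signature) || []).filter(
      (t) => now - t < DOOM_LOOP_WINDOW_MS
    );
    recent.push(now);
    this.timestamps.set(signature, recent);
    return recent.length;
  }
}
```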

Comment on lines +5 to +8
test("returns empty array for undefined or empty", () => {
expect(parseModelList(undefined)).toEqual([]);
expect(parseModelList("")).toEqual([]);
expect(parseModelList(" ")).toEqual([]);
Copilot AI Dec 28, 2025

The test "returns empty array for undefined or empty" includes a test case with spaces (" "), but the implementation trims the input before checking if it's empty. The test correctly expects an empty array, but adding a comment explaining this edge case would improve test clarity.

Comment on lines +89 to +90
if (!next.model || (env.litellmModel && next.model === env.litellmModel)) {
next.model = env.openaiModel || next.model;
Copilot AI Dec 28, 2025

The logic in lines 89-90 has a fallback chain env.openaiModel || next.model, but if both are undefined, it leaves the model as undefined. This could lead to the same "Missing model" error that was fixed in other parts of the codebase. Consider either ensuring a default model is always set or documenting that this is intentional.

...(prev.tools || {}),
permissions: {
...(prev.tools?.permissions || {}),
[toolKey]: rawMode,
Copilot AI Dec 28, 2025

The /tools perm command doesn't validate that rawMode is properly typed as PermissionMode before assigning it to settings. While the string check ensures it's one of "allow", "ask", or "deny", TypeScript won't recognize this as a type guard. Consider casting the value explicitly: [toolKey]: rawMode as PermissionMode for better type safety.
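A user-defined type predicate lets TypeScript narrow the raw string without a cast. A sketch:

```typescript
type PermissionMode = "allow" | "ask" | "deny";

// Type guard: after a true result, TypeScript treats `value` as PermissionMode.
function isPermissionMode(value: string): value is PermissionMode {
  return value === "allow" || value === "ask" || value === "deny";
}
```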

Comment on lines +14 to +20
export function isExternalPath(path: string, cwd: string): boolean {
const resolved = resolve(cwd, path);
const root = resolve(cwd);
const rel = relative(root, resolved);
if (!rel) return false;
return rel.startsWith("..") || isAbsolute(rel);
}
Copilot AI Dec 28, 2025

The isExternalPath function has duplicated logic with a different implementation in src/mentionHandlers.ts (line 176). The implementations differ: the tools version checks isAbsolute(rel) while the mentionHandlers version does not. This inconsistency could lead to security issues if one implementation is more permissive than the other. Consider extracting this to a shared utility module to ensure consistent security checks across the codebase.

if (!config) {
onError(
new Error("Missing OPENAI_BASE_URL, OPENAI_API_KEY, or OPENAI_MODEL")
new Error("Missing endpoint, API key, or model. Configure /provider, /endpoint, /api-key, and /model.")
Copilot AI Dec 28, 2025

The error message on line 32 suggests using multiple commands (/provider, /endpoint, /api-key, /model) but doesn't provide a clear order or priority for which to configure first. Consider improving the message to guide users through the configuration process more clearly, such as: "Missing LLM configuration. Set provider first with /provider, then configure /endpoint, /api-key, and /model."

Comment on lines +254 to +259
const hasArgs = Object.keys(argsObject).length > 0;
let argsPreview = hasArgs ? JSON.stringify(argsObject, null, 2) : "no arguments";
const MAX_PREVIEW_LENGTH = 200;
if (argsPreview.length > MAX_PREVIEW_LENGTH) {
argsPreview = argsPreview.slice(0, MAX_PREVIEW_LENGTH) + "…";
}
Copilot AI Dec 28, 2025

The permission message truncates args to 200 characters with an ellipsis, but this could cut off important information mid-value (e.g., in the middle of a file path or command). Consider truncating at word boundaries or adding a more structured preview format that shows complete key-value pairs up to the limit.
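A pair-wise preview is one alternative: emit complete key=value pairs until the character budget is exhausted, then summarize the remainder. Names and the 200-char budget are illustrative:

```typescript
// Preview args as whole key/value pairs rather than slicing JSON mid-value.
function previewArgs(args: Record<string, unknown>, maxChars = 200): string {
  const entries = Object.entries(args);
  if (entries.length === 0) return "no arguments";
  const parts: string[] = [];
  let used = 0;
  for (const [key, value] of entries) {
    const pair = `${key}=${JSON.stringify(value)}`;
    if (used + pair.length > maxChars) {
      parts.push(`… (${entries.length - parts.length} more)`);
      break;
    }
    parts.push(pair);
    used += pair.length;
  }
  return parts.join(", ");
}
```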

Comment on lines +109 to +110
const fullPath = resolve(cwd, path);
await mkdir(resolve(fullPath, ".."), { recursive: true });
Copilot AI Dec 28, 2025

The write_file tool uses mkdir with { recursive: true } to create parent directories, but it resolves the parent path using resolve(fullPath, ".."). This could potentially create directories outside the intended workspace if fullPath is manipulated. While isExternalPath should catch this in permissions, consider adding an additional safety check before creating directories to ensure they're within the allowed workspace.
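Such a check could resolve the parent and confirm containment before any mkdir. A sketch (helper name is illustrative):

```typescript
import { resolve, relative, isAbsolute, dirname } from "node:path";

// Returns the parent directory to create, or null if it escapes the workspace.
function safeParentDir(cwd: string, path: string): string | null {
  const root = resolve(cwd);
  const parent = dirname(resolve(root, path));
  const rel = relative(root, parent);
  if (rel !== "" && (rel.startsWith("..") || isAbsolute(rel))) return null;
  return parent;
}
```

Only when safeParentDir returns a path would the tool call mkdir(parent, { recursive: true }).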

@Zochory Zochory marked this pull request as ready for review December 28, 2025 09:24
@Zochory Zochory merged commit a86c270 into main Dec 28, 2025
12 of 14 checks passed
@Zochory Zochory deleted the Zochory/moscow branch December 28, 2025 11:42