Daily branch 2025 10 14 #54
Conversation
…abase errors
- Modified run-terminal-cmd to return only JSON-safe fields (exitCode, stdout, stderr, pid)
- Removed Promise properties (_wait, onPty) that cannot be serialized to Convex
- Added comprehensive logging to chat-handler and db actions for debugging
- Background commands now return { pid, stdout, stderr }
- Foreground commands now return { exitCode, stdout, stderr }
Resolves database error: 'Promise {} is not a supported Convex type'
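A minimal sketch of the JSON-safe shapes these commits describe (the helper and type names here are hypothetical; the commits only specify the fields):

```typescript
// Serializable result shapes for run-terminal-cmd. Promise-valued handle
// properties (e.g. _wait, onPty) are deliberately not copied, since they
// cannot be stored as Convex values.
type BackgroundResult = { pid: number; stdout: string; stderr: string };
type ForegroundResult = { exitCode: number; stdout: string; stderr: string };

// Hypothetical picker: keep only JSON-safe fields from a raw execution handle.
const toForegroundResult = (exec: {
  exitCode: number;
  stdout: string;
  stderr: string;
}): ForegroundResult => ({
  exitCode: exec.exitCode,
  stdout: exec.stdout,
  stderr: exec.stderr,
});
```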
- Add BackgroundProcessTracker class to track background processes and their output files
- Extract output file paths from commands using pattern matching (nmap, redirects, tee, generic -o flags)
- Block file retrieval via get_terminal_files when background processes are still writing
- Auto-check process status using ps and remove completed processes
- Track processes per-file for granular blocking (only block specific files being written)
- Strip quotes from filenames for accurate path matching
- Update ToolContext to include backgroundProcessTracker instance

This prevents incomplete file retrieval when background scans/processes are still running.
- Implement graceful shutdown 10s before Vercel hard timeout
- Extract timeout logic to createPreemptiveTimeout in stream-cancellation.ts
- Add isPreemptive flag to distinguish from user cancellations
- Pre-emptive aborts save all data (messages, todos, title) before timeout
- User aborts skip save (frontend already handled)
- Prevent 'Task timed out' errors while ensuring data persistence
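A usage sketch of that flow, assuming the `createPreemptiveTimeout` signature shown in the review diff further down; the import alias and `streamChat` are placeholders:

```typescript
import { createPreemptiveTimeout } from "@/lib/utils/stream-cancellation";

// Sketch only: `streamChat` stands in for the real streaming call.
async function runWithPreemptiveAbort(
  chatId: string,
  mode: "agent" | "ask",
  streamChat: (opts: { signal: AbortSignal }) => Promise<void>,
) {
  const abortController = new AbortController();
  const preemptive = createPreemptiveTimeout({
    chatId,
    mode,
    abortController,
    safetyBuffer: 10, // abort 10s before the platform hard timeout
  });

  try {
    await streamChat({ signal: abortController.signal });
  } finally {
    preemptive.clear(); // never fire after the stream has finished
    if (preemptive.isPreemptive()) {
      // preemptive abort: persist messages, todos, and title before shutdown
    }
  }
}
```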
…vide for moderation
- Add plan-based file limits: Pro (300), Team (500), Ultra (1000), Free (0)
- Move validation logic directly into mutations for proper error propagation
- Fix 'Server Error' issue by removing internalQuery calls
- Add getFileLimit() helper to DRY up entitlement checking
- Users now see clear error messages instead of generic server errors
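A condensed sketch of the in-mutation validation this commit describes (the "Paid plan required" copy appears in the review notes below; the rest of the wording is an assumption):

```typescript
// Validate inside the mutation so the thrown message propagates to the
// client instead of surfacing as a generic "Server Error".
function assertWithinFileLimit(currentCount: number, fileLimit: number): void {
  if (fileLimit === 0) {
    // Free plan: uploads are not included
    throw new Error("Paid plan required for file uploads");
  }
  if (currentCount >= fileLimit) {
    throw new Error(
      `File limit reached (${currentCount}/${fileLimit}) for your plan.`,
    );
  }
}
```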
Walkthrough

Introduces plan-based file upload limits and service-key vs user flows; adds background process tracking for terminal commands and integrates it into tool context; blocks file retrieval while related background jobs run; adds preemptive chat timeout; switches moderation truncation to token-based; parallelizes sandbox file uploads; updates system prompt guidance.
Sequence Diagram(s)

```mermaid
sequenceDiagram
autonumber
participant User
participant Tool as run_terminal_cmd
participant Tracker as BackgroundProcessTracker
participant SB as Sandbox
participant Files as get_terminal_files
User->>Tool: Execute command (is_background?)
Tool->>SB: Start command
alt is_background and pid
Tool->>Tracker: addProcess(pid, command, outputFiles)
Tool-->>User: { pid, stdout, stderr }
User->>Files: Request files [paths]
Files->>Tracker: hasActiveProcessesForFiles(paths)
alt active processes for paths
Files-->>User: Blocked files with reasons
else no active processes
Files->>SB: Read files
Files-->>User: fileUrls
end
else foreground
Tool->>SB: Wait for completion
Tool-->>User: { exitCode, stdout, stderr }
end
```

```mermaid
sequenceDiagram
autonumber
participant Client
participant Chat as chat-handler
participant Timeout as createPreemptiveTimeout
participant Stream as Model Stream
Client->>Chat: Start chat
Chat->>Timeout: Schedule preemptive abort (mode,safetyBuffer)
Chat->>Stream: Begin streaming
alt Preemptive timeout fires
Timeout-->>Chat: AbortController.abort()
Chat->>Chat: Persist state on preemptive abort
else Normal completion
Stream-->>Chat: onFinish
end
Chat->>Timeout: clear()
Chat-->>Client: Response (partial or complete)
```

```mermaid
sequenceDiagram
autonumber
participant Client
participant CFS as convex/fileStorage.generateUploadUrl
participant CFA as convex/fileActions.saveFile
participant Store as Storage
participant Proc as processFileAuto
participant DB as internal mutations
Client->>CFS: generateUploadUrl(args, identity or serviceKey)
CFS->>CFS: Resolve actingUserId + entitlements
CFS->>DB: countUserFiles(actingUserId)
alt limit exceeded or free plan
CFS-->>Client: Error (limit/paid required)
else within limit
CFS-->>Client: uploadUrl
Client->>Store: Upload file
Client->>CFA: saveFile(storageId, meta)
CFA->>Proc: processFileAuto(storageObject)
alt token-limit or processing error
CFA->>Store: delete(storageId)
CFA-->>Client: Error
else success
CFA->>DB: save metadata (tokenSize, content?)
CFA-->>Client: ok
end
end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
lib/moderation.ts (1)
78-127: Handle legacy string-only messages when combining moderation input.
`findTargetMessage` now only accumulates text from `parts`. Any user message that still uses the legacy `content` string (we still support this in `prepareInput`) never increments `combinedContent`, so we never reach the length threshold nor the fallback, and moderation silently skips those messages. If a string-only message does make it through (e.g., because another message provided the length), `createCombinedMessage` discards its text entirely, leaving `parts: []` and producing an empty moderation payload. This is a regression from the prior behaviour. Please restore support for string-based messages when accumulating and when building the combined message, e.g.:
```diff
@@
-      messagesToCombine.push(message);
-
-      // Handle UIMessage format with parts array
-      if (message.parts && Array.isArray(message.parts)) {
-        const textContent = message.parts
-          .filter((part: any) => part.type === "text")
-          .map((part: any) => part.text)
-          .join(" ");
-
-        combinedContent = textContent + " " + combinedContent;
-      }
+      messagesToCombine.push(message);
+
+      let textContent = "";
+      if (message.parts && Array.isArray(message.parts)) {
+        textContent = message.parts
+          .filter((part: any) => part.type === "text")
+          .map((part: any) => part.text)
+          .join(" ");
+      } else if (typeof message.content === "string") {
+        textContent = message.content;
+      }
+
+      if (textContent.trim().length > 0) {
+        combinedContent = combinedContent
+          ? `${textContent} ${combinedContent}`
+          : textContent;
+      }
@@
-    if (message.parts && Array.isArray(message.parts)) {
+    if (message.parts && Array.isArray(message.parts)) {
       const textParts = message.parts.filter(
         (part: any) => part.type === "text",
       );
       combinedParts.push(...textParts);
+    } else if (typeof message.content === "string" && message.content.trim()) {
+      combinedParts.push({
+        type: "text",
+        text: message.content,
+      });
     }
   }

-  return {
+  if (combinedParts.length === 0 && messages[0]) {
+    return messages[0];
+  }
+
+  return {
     role: "user",
     parts: combinedParts,
   };
```

This keeps the new multi-message flow while preserving moderation coverage for all existing message shapes.
🧹 Nitpick comments (8)
lib/utils/sandbox-file-utils.ts (1)
97-114: Excellent parallelization improvement!

The refactoring to fetch and write files in parallel is a solid performance enhancement. The two-phase approach (fetch all → write all) is clean, and gracefully handling failures by returning `null` makes the operation resilient to individual file errors.

Optional type safety enhancement: consider explicitly typing the intermediate results for better clarity:

```diff
-  const fileDataPromises = sandboxFiles.map(async (file) => {
+  const fileDataPromises = sandboxFiles.map(async (file): Promise<{ localPath: string; data: ArrayBuffer } | null> => {
     if (!file.url || !file.localPath) return null;
     const res = await fetch(file.url);
     if (!res.ok) return null;
     const ab = await res.arrayBuffer();
     return { localPath: file.localPath, data: ab };
   });
```

lib/utils/stream-cancellation.ts (1)
91-119: Clamp safetyBuffer, auto‑clear on abort, and avoid magic durations.
- If safetyBuffer ≥ maxDuration, setTimeout will fire immediately and abort the stream prematurely.
- Add an abort listener to clear the timer when the stream aborts early.
- Consider centralizing per‑mode max durations in config.
Apply this diff:
```diff
 export const createPreemptiveTimeout = ({
   chatId,
   mode,
   abortController,
   safetyBuffer = 10,
 }: PreemptiveTimeoutOptions) => {
-  const maxDuration = mode === "agent" ? 500 : 120;
-  const maxStreamTime = (maxDuration - safetyBuffer) * 1000;
+  // Max durations (seconds) per mode – consider moving to config
+  const maxDurationSec = mode === "agent" ? 500 : 120;
+  // Clamp safety buffer to [0, maxDurationSec - 1]
+  const bufferSec = Math.max(0, Math.min(safetyBuffer, maxDurationSec - 1));
+  const maxStreamTime = Math.max(0, (maxDurationSec - bufferSec) * 1000);

   let isPreemptive = false;

-  const timeoutId = setTimeout(() => {
+  let timeoutId: NodeJS.Timeout;
+  const onAbort = () => {
+    clearTimeout(timeoutId);
+  };
+  abortController.signal.addEventListener("abort", onAbort, { once: true });
+
+  timeoutId = setTimeout(() => {
     console.log(
-      `[Chat ${chatId}] Pre-emptive abort triggered (${safetyBuffer}s before ${maxDuration}s timeout)`,
+      `[Chat ${chatId}] Pre-emptive abort triggered (${bufferSec}s before ${maxDurationSec}s timeout)`,
     );
     isPreemptive = true;
     abortController.abort();
   }, maxStreamTime);

   return {
     timeoutId,
-    clear: () => clearTimeout(timeoutId),
+    clear: () => {
+      clearTimeout(timeoutId);
+      abortController.signal.removeEventListener("abort", onAbort);
+    },
     isPreemptive: () => isPreemptive,
   };
 };
```

Please confirm the per‑mode max durations (agent: 500s, ask: 120s) match your platform limits; if not, we should pull these from configuration.
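If the durations do move to configuration, a minimal shape could look like this (assuming `ChatMode` is `"agent" | "ask"`; the constant name and import alias are assumptions):

```typescript
import type { ChatMode } from "@/types/chat";

// Single source of truth for per-mode stream limits, so the chat handler and
// createPreemptiveTimeout cannot drift apart. Values mirror the current code.
export const MAX_STREAM_DURATION_SEC: Record<ChatMode, number> = {
  agent: 500,
  ask: 120,
};
```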
convex/fileStorage.ts (1)
34-53: Extend plan mapping and export for reuse to avoid drift.
- Add monthly/yearly variants for team plans, like the pro/ultra checks.
- Export this helper so fileActions.ts can reuse it instead of duplicating the mapping.
Apply this diff:
```diff
-const getFileLimit = (entitlements: Array<string>): number => {
+export const getFileLimit = (entitlements: Array<string>): number => {
   if (
     entitlements.includes("ultra-plan") ||
     entitlements.includes("ultra-monthly-plan") ||
     entitlements.includes("ultra-yearly-plan")
   ) {
     return 1000;
   }
-  if (entitlements.includes("team-plan")) {
+  if (
+    entitlements.includes("team-plan") ||
+    entitlements.includes("team-monthly-plan") ||
+    entitlements.includes("team-yearly-plan")
+  ) {
     return 500;
   }
   if (
     entitlements.includes("pro-plan") ||
     entitlements.includes("pro-monthly-plan") ||
     entitlements.includes("pro-yearly-plan")
   ) {
     return 300;
   }
   return 0; // Free users
 };
```

convex/fileActions.ts (1)
434-451: Deduplicate plan limit logic and include team monthly/yearly variants.

Reuse the shared `getFileLimit` helper to avoid drift and ensure consistent plan handling (including team-monthly/yearly).
Apply this diff to replace the manual mapping:
```diff
-    // Check file limit (Pro: 300, Team: 500, Ultra: 1000, Free: 0)
-    let fileLimit = 0;
-    if (
-      entitlements.includes("ultra-plan") ||
-      entitlements.includes("ultra-monthly-plan") ||
-      entitlements.includes("ultra-yearly-plan")
-    ) {
-      fileLimit = 1000;
-    } else if (entitlements.includes("team-plan")) {
-      fileLimit = 500;
-    } else if (
-      entitlements.includes("pro-plan") ||
-      entitlements.includes("pro-monthly-plan") ||
-      entitlements.includes("pro-yearly-plan")
-    ) {
-      fileLimit = 300;
-    }
+    // Check file limit (Pro: 300, Team: 500, Ultra: 1000, Free: 0)
+    const fileLimit = getFileLimit(entitlements);
```

Add this import at the top of the file (outside the selected range):
```typescript
import { getFileLimit } from "./fileStorage";
```

And ensure `getFileLimit` is exported from convex/fileStorage.ts as shown in the related suggestion.
lib/ai/tools/get-terminal-files.ts (1)
32-49: Consider batching background process checks for efficiency.

Currently, `hasActiveProcessesForFiles` is called once per file in a loop. While this works correctly, it may be inefficient when checking many files, as each call iterates through all tracked processes and checks their status via sandbox commands.

Consider batching the check by calling `hasActiveProcessesForFiles(sandbox, files)` once before the loop and then checking individual files against the result. This would reduce the number of `ps` commands executed:

```diff
+    // Check all files upfront for active processes
+    const { active: anyActive, processes: activeProcesses } =
+      await backgroundProcessTracker.hasActiveProcessesForFiles(sandbox, files);
+
+    // Create a map of blocked files for quick lookup
+    const blockedMap = new Map<string, BackgroundProcess[]>();
+    if (anyActive) {
+      for (const filePath of files) {
+        const matchingProcesses = activeProcesses.filter((p) =>
+          p.outputFiles.some((outputFile) => {
+            const normalizedOutput = outputFile.trim().replace(/\/+/g, "/");
+            const normalizedRequested = filePath.trim().replace(/\/+/g, "/");
+            return (
+              normalizedOutput === normalizedRequested ||
+              normalizedOutput.endsWith("/" + normalizedRequested) ||
+              normalizedRequested.endsWith("/" + normalizedOutput) ||
+              normalizedOutput.endsWith(normalizedRequested) ||
+              normalizedRequested.endsWith(normalizedOutput)
+            );
+          })
+        );
+        if (matchingProcesses.length > 0) {
+          blockedMap.set(filePath, matchingProcesses);
+        }
+      }
+    }
+
     for (const filePath of files) {
-      // Check if this specific file is being written to by a background process
-      const { active, processes } =
-        await backgroundProcessTracker.hasActiveProcessesForFiles(sandbox, [
-          filePath,
-        ]);
-
-      if (active) {
-        const processDetails = processes
+      const matchingProcesses = blockedMap.get(filePath);
+      if (matchingProcesses) {
+        const processDetails = matchingProcesses
           .map((p) => `PID ${p.pid}: ${p.command}`)
           .join(", ");

         blockedFiles.push({
           path: filePath,
           reason: `Background process still running: [${processDetails}]`,
         });
         continue;
       }
```

Note: This optimization requires duplicating the path normalization logic, which could be extracted into a shared helper method to reduce duplication; see the sketch below.
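A possible shape for that shared helper (the name and exact matching rules are assumptions, mirroring the normalization in the diff above):

```typescript
// Shared path-match predicate so the tracker and the batched check in
// get-terminal-files normalize and compare paths identically.
export function outputFileMatches(outputFile: string, requested: string): boolean {
  const normalize = (p: string) => p.trim().replace(/\/+/g, "/");
  const a = normalize(outputFile);
  const b = normalize(requested);
  return (
    a === b ||
    a.endsWith("/" + b) ||
    b.endsWith("/" + a) ||
    a.endsWith(b) ||
    b.endsWith(a)
  );
}
```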
lib/ai/tools/run-terminal-cmd.ts (1)
189-199: Consider validating PID before tracking background processes.

While the logic correctly checks for `(exec as any)?.pid` before tracking, there's no validation that the PID is a valid positive number. Invalid or zero PIDs could cause issues in subsequent tracking operations.

Apply this diff to add PID validation:
```diff
     // Track background processes with their output files
-    if (is_background && (exec as any)?.pid) {
+    if (is_background && (exec as any)?.pid && typeof (exec as any).pid === 'number' && (exec as any).pid > 0) {
       const pid = (exec as any).pid;
       const outputFiles = BackgroundProcessTracker.extractOutputFiles(command);
       backgroundProcessTracker.addProcess(
         pid,
         command,
         outputFiles,
       );
     }
```
36-57: Consider more robust process status checking.

The current check `result.stdout.includes(pid.toString())` may produce false positives if the PID appears elsewhere in the output (e.g., in a parent PID column or process arguments).

Consider parsing the `ps` output more carefully to verify the PID is in the correct column:

```diff
 async checkProcessStatus(sandbox: Sandbox, pid: number): Promise<boolean> {
   try {
     const result = await sandbox.commands.run(`ps -p ${pid}`, {
       user: "root" as const,
       cwd: "/home/user",
     });

-    const isRunning = result.stdout.includes(pid.toString());
+    // ps -p outputs header + process line if running, or just header if not
+    const lines = result.stdout.trim().split('\n');
+    // More than 1 line means process exists (header + process)
+    const isRunning = lines.length > 1 && lines.some(line => line.trim().startsWith(pid.toString()));

     if (!isRunning) {
       this.removeProcess(pid);
     }

     return isRunning;
   } catch (error) {
     this.removeProcess(pid);
     return false;
   }
 }
```

Alternatively, you could use `ps -p ${pid} -o pid=`, which outputs only the PID with no header, making the check simpler and more reliable.
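A sketch of that simpler variant; the `SandboxLike` type below is a minimal structural stand-in for the project's sandbox API, declared here only to keep the example self-contained:

```typescript
// Minimal stand-in for the sandbox surface used by the tracker (assumption).
type SandboxLike = {
  commands: {
    run: (
      cmd: string,
      opts: { user: "root"; cwd: string },
    ) => Promise<{ stdout: string }>;
  };
};

// `ps -p <pid> -o pid=` prints only the PID (no header) when the process
// exists, so empty stdout means the process has exited.
async function isProcessRunning(sandbox: SandboxLike, pid: number): Promise<boolean> {
  const result = await sandbox.commands.run(`ps -p ${pid} -o pid=`, {
    user: "root",
    cwd: "/home/user",
  });
  return result.stdout.trim() === String(pid);
}
```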
143-165: Review shell redirection pattern for edge cases.

The shell redirection pattern (Pattern 3, line 144) attempts to capture output files but may have issues:

```ts
/(?:^|[|;&])\s*[^|;&]*?\s+>>?\s+([^\s|;&]+)/g
```

This pattern tries to find `>` or `>>` followed by a filename, but:

- The negated character class `[^\s|;&]+` will stop at the first space, pipe, or semicolon, which means it won't capture filenames with spaces (even if quoted).
- It may miss redirections in complex command structures or subshells.

Consider whether this pattern adequately handles your use cases. If filenames with spaces (even quoted) are possible, you may need a more sophisticated parser. For now, document the limitation:

```diff
-    // Pattern 3: Shell redirection > file or >> file
+    // Pattern 3: Shell redirection > file or >> file
+    // Note: Does not handle filenames with spaces, even if quoted
     const redirectPattern = /(?:^|[|;&])\s*[^|;&]*?\s+>>?\s+([^\s|;&]+)/g;
```

Additionally, consider whether regex-based extraction is sufficient for your use cases, or whether you need a more robust shell command parser for complex scenarios.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (12)
- convex/fileActions.ts (1 hunks)
- convex/fileStorage.ts (2 hunks)
- lib/ai/tools/get-terminal-files.ts (3 hunks)
- lib/ai/tools/index.ts (3 hunks)
- lib/ai/tools/run-terminal-cmd.ts (4 hunks)
- lib/ai/tools/utils/background-process-tracker.ts (1 hunks)
- lib/api/chat-handler.ts (7 hunks)
- lib/moderation.ts (6 hunks)
- lib/system-prompt.ts (1 hunks)
- lib/utils/sandbox-file-utils.ts (1 hunks)
- lib/utils/stream-cancellation.ts (3 hunks)
- types/agent.ts (2 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{ts,tsx}
📄 CodeRabbit inference engine (.cursor/rules/convex_rules.mdc)
**/*.{ts,tsx}: Use Id from ./_generated/dataModel for document IDs (e.g., Id<'users'>) instead of string
Ensure Record key/value types align with validators (e.g., v.record(v.id('users'), v.string()) -> Record<Id<'users'>, string>)
Use as const on string literals in discriminated unions
When using Array type, explicitly declare arrays with const array: Array<T> = [...]
When using Record type, explicitly declare with const record: Record<KeyType, ValueType> = {...}
Files:
- lib/system-prompt.ts
- convex/fileStorage.ts
- lib/api/chat-handler.ts
- lib/ai/tools/index.ts
- lib/ai/tools/run-terminal-cmd.ts
- lib/moderation.ts
- convex/fileActions.ts
- lib/utils/sandbox-file-utils.ts
- types/agent.ts
- lib/ai/tools/get-terminal-files.ts
- lib/ai/tools/utils/background-process-tracker.ts
- lib/utils/stream-cancellation.ts
convex/**/*.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (.cursor/rules/convex_rules.mdc)
convex/**/*.{ts,tsx,js,jsx}: ALWAYS use the new Convex function syntax (query/mutation/action object with args/returns/handler) when defining Convex functions
When a Convex function returns null, include returns: v.null() and return null
Use internalQuery/internalMutation/internalAction for private functions; do not expose sensitive logic with public query/mutation/action
Do NOT register functions via the api or internal objects
ALWAYS provide args and returns validators for query, mutation, action and their internal variants; use v.null() when no value is returned
From Convex functions, use ctx.runQuery/ctx.runMutation/ctx.runAction to call other functions (not direct function calls)
Pass a FunctionReference (from api/internal) to ctx.runQuery/ctx.runMutation/ctx.runAction; do NOT pass the callee function directly
Only call an action from another action when crossing runtimes (e.g., V8 to Node); otherwise refactor shared code into helpers
Minimize calls from actions to queries/mutations to reduce race conditions; prefer fewer transactional boundaries
Organize public query/mutation/action functions under convex/ to align with file-based routing
For pagination, validate args with paginationOptsValidator and call .paginate(args.paginationOpts) on queries
Use v.int64() instead of deprecated v.bigint() for signed 64-bit integers
Use v.record() for record-like structures; v.map() and v.set() are not supported
Do NOT use query().filter(...); define an index in schema and use withIndex instead
Convex queries do not support .delete(); collect or iterate results and call ctx.db.delete(_id) for each
Use .unique() to fetch exactly one document and fail on multiples
When async-iterating query results, prefer for await (const row of query) over .collect() or .take(n)
Use ctx.db.replace to fully replace an existing document; ctx.db.patch to shallow-merge updates
Files defining actions that use Node built-ins must start with "use node"
Never access ctx.db from within an action
D...
Files:
- convex/fileStorage.ts
- convex/fileActions.ts
convex/**/*.{ts,tsx}
📄 CodeRabbit inference engine (.cursor/rules/convex_rules.mdc)
When run-calling a function in the same file, add an explicit return type annotation on the await result to avoid TS circularity issues
Files:
- convex/fileStorage.ts
- convex/fileActions.ts
🧠 Learnings (1)
📚 Learning: 2025-09-01T12:44:12.626Z
Learnt from: RostyslavManko
PR: hackerai-tech/hackerai#17
File: convex/fileStorage.ts:131-149
Timestamp: 2025-09-01T12:44:12.626Z
Learning: In convex/fileStorage.ts, serviceKey parameters are made optional (v.optional(v.string())) for queries like getFileTokensByFileIds to simplify local development workflow - developers don't need to configure service keys in the Convex dashboard when working locally. In production, the serviceKey is always provided, ensuring security. This pattern balances developer experience with production security requirements.
Applied to files:
convex/fileActions.ts
🧬 Code graph analysis (7)
convex/fileStorage.ts (1)
- convex/chats.ts (1): `validateServiceKey` (5-9)

lib/api/chat-handler.ts (1)
- lib/utils/stream-cancellation.ts (1): `createPreemptiveTimeout` (95-119)

lib/ai/tools/index.ts (1)
- lib/ai/tools/utils/background-process-tracker.ts (1): `BackgroundProcessTracker` (10-183)

lib/ai/tools/run-terminal-cmd.ts (2)
- types/agent.ts (1): `ToolContext` (18-27)
- lib/ai/tools/utils/background-process-tracker.ts (1): `BackgroundProcessTracker` (10-183)

convex/fileActions.ts (1)
- convex/tempStreams.ts (1): `validateServiceKey` (4-8)

types/agent.ts (1)
- lib/ai/tools/utils/background-process-tracker.ts (1): `BackgroundProcessTracker` (10-183)

lib/utils/stream-cancellation.ts (1)
- types/chat.ts (1): `ChatMode` (4-4)
🔇 Additional comments (21)
lib/system-prompt.ts (1)
108-116: Good addition to sequential guidance.

Clear example reinforcing dependency-aware sequencing with file-producing scans.
lib/api/chat-handler.ts (5)
141-147: Pre‑emptive timeout setup looks good.

Clean integration with the AbortController before streaming starts.
Do you want a config‑based safetyBuffer per mode (e.g., env or app config) instead of relying on the default?
329-331: Clear pre‑emptive timeout on finish.

Ensures no dangling timers after stream completion.
375-381: Skip message save only for user‑initiated aborts (not pre‑emptive).

Good guard to persist on pre‑emptive aborts while avoiding duplicate saves for user aborts with no new files.
423-425: Clear pre‑emptive timeout on error path.

Prevents timer leakage when errors occur before onFinish.
305-314: Dynamic event suffix is safe — command is fixed enum
`input.command` is defined as `z.enum(['search','open_url'])`, so there's no arbitrary user input or high-cardinality/PII risk.

Likely an incorrect or invalid review comment.
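For reference, a minimal sketch of the schema shape this verification refers to (inferred from the note above, not copied from the codebase):

```typescript
import { z } from "zod";

// A fixed enum bounds the set of event-name suffixes: only these two literals
// can ever appear, so there is no arbitrary or high-cardinality input.
const webToolInput = z.object({
  command: z.enum(["search", "open_url"]),
});

type WebToolInput = z.infer<typeof webToolInput>;
// => { command: "search" | "open_url" }
```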
convex/fileStorage.ts (2)
68-88: Service key vs user-auth flow is sound.

Validates the key, requires userId for the service flow, and normalizes entitlements for the user flow.
90-106: Dynamic per-plan limit enforcement is correct; double-check UX copy.

The 0‑limit path throws a clear "Paid plan required for file uploads" and enforces current count vs limit. Consider harmonizing error copy with frontend expectations.
Confirm whether limits should count all user files or only attached files; adjust `countUserFiles` accordingly if the product requirement differs.
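If the requirement does change, a counting query along these lines would be the place to adjust (table and index names are assumptions; the shape follows the Convex guidelines cited in this review):

```typescript
import { internalQuery } from "./_generated/server";
import { v } from "convex/values";

// Hypothetical countUserFiles: counts every file owned by the acting user.
// To count only attached files, add that predicate inside the loop below.
export const countUserFiles = internalQuery({
  args: { userId: v.string() },
  returns: v.number(),
  handler: async (ctx, args) => {
    let count = 0;
    for await (const _file of ctx.db
      .query("files")
      .withIndex("by_user", (q) => q.eq("userId", args.userId))) {
      count++;
    }
    return count;
  },
});
```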
convex/fileActions.ts (2)
412-432: Service key and user-auth branches look correct.

Identity and entitlements extraction are consistent with fileStorage.ts.
456-465: Limit check before processing is good.

Prevents unnecessary blob fetch/processing when over limit; pairs well with the earlier generateUploadUrl check.
types/agent.ts (1)
6-6: LGTM! BackgroundProcessTracker integrated into ToolContext.

The import and field addition correctly extend the ToolContext interface with background process tracking capabilities. The type import and field declaration align with the implementation in lib/ai/tools/utils/background-process-tracker.ts.
Also applies to: 26-26
lib/ai/tools/index.ts (1)
16-16: LGTM! BackgroundProcessTracker properly wired into tool context.

The tracker is correctly instantiated and added to the ToolContext following the same pattern as other context managers (TodoManager, FileAccumulator). This enables all tools to access background process tracking capabilities.
Also applies to: 41-41, 51-51
lib/ai/tools/get-terminal-files.ts (2)
51-66: LGTM! Improved error handling for file uploads.

The change to handle upload failures gracefully by adding them to `blockedFiles` instead of failing the entire operation is a good improvement. This allows partial success and provides clear error messages to users about which files couldn't be retrieved and why.
68-82: LGTM! Clear and informative result messaging.

The result message construction properly handles all scenarios:
- Success case with file count
- Blocked files with detailed reasons
- Default message when no files retrieved
This provides a good user experience with clear feedback about what happened.
lib/ai/tools/run-terminal-cmd.ts (2)
64-66: LGTM! Clear guidance for background mode usage.

The updated description properly warns against using background mode when immediate file retrieval is needed, preventing the scenario where `get_terminal_files` would block files due to active background processes. This aligns well with the blocking logic implemented in get-terminal-files.ts.
202-212: LGTM! Appropriate result shapes for different execution modes.

The differentiated result structures make sense:
- Background processes return PID for tracking (no exit code yet)
- Foreground processes return exit code 0 for successful execution
- Error cases are handled separately via CommandExitError (lines 224-235)
This design properly reflects the different semantics of background vs foreground execution.
lib/ai/tools/utils/background-process-tracker.ts (5)
3-15: LGTM! Clean interface and constructor.

The `BackgroundProcess` interface appropriately captures the essential tracking data (PID, command, output files, start time). The constructor correctly initializes the internal process map.
17-34: LGTM! Straightforward process lifecycle management.

The `addProcess` and `removeProcess` methods correctly manage the process map. The `addProcess` method properly captures all required metadata, including the start timestamp.
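A condensed sketch of that lifecycle (field names follow the `BackgroundProcess` interface described above; the bodies are assumptions):

```typescript
interface BackgroundProcess {
  pid: number;
  command: string;
  outputFiles: string[];
  startTime: number;
}

class TrackerLifecycleSketch {
  private processes = new Map<number, BackgroundProcess>();

  addProcess(pid: number, command: string, outputFiles: string[]): void {
    // Capture all metadata, including the start timestamp, at registration time.
    this.processes.set(pid, { pid, command, outputFiles, startTime: Date.now() });
  }

  removeProcess(pid: number): void {
    this.processes.delete(pid);
  }
}
```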
99-112: LGTM! Appropriate path normalization.

The `normalizePath` method correctly handles common path variations (trimming whitespace, normalizing slashes, removing leading "./"). This is a good foundation for path comparison.
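Those three steps amount to roughly the following (the exact body is an assumption):

```typescript
// Trim whitespace, collapse repeated slashes, strip a leading "./".
function normalizePath(path: string): string {
  return path.trim().replace(/\/+/g, "/").replace(/^\.\//, "");
}

// normalizePath("  ./scans//out.xml ") === "scans/out.xml"
```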
114-141: LGTM! Correct nmap output file patterns.

The patterns correctly identify nmap output files:

- Pattern 1 captures `-oN`, `-oX`, `-oG` flags with their file arguments.
- Pattern 2 appropriately expands `-oA prefix` into the three generated files (`.nmap`, `.xml`, `.gnmap`).

The quote stripping and deduplication (line 167) are handled properly.
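For illustration, the `-oA` expansion could be as small as this (the regex details are assumptions; only the three-file expansion is specified above):

```typescript
// Expand `nmap ... -oA prefix` into the three files nmap actually writes.
function expandNmapOutputAll(command: string): string[] {
  const match = command.match(/-oA\s+("[^"]+"|'[^']+'|\S+)/);
  if (!match) return [];
  const prefix = match[1].replace(/^["']|["']$/g, ""); // strip surrounding quotes
  return [`${prefix}.nmap`, `${prefix}.xml`, `${prefix}.gnmap`];
}

// expandNmapOutputAll("nmap -sV -oA scans/target host")
// => ["scans/target.nmap", "scans/target.xml", "scans/target.gnmap"]
```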
170-182: LGTM! Useful helper methods for debugging and cleanup.

The `getTrackedProcesses` and `clear` methods provide appropriate debugging and cleanup capabilities. The implementation correctly converts the map values to an array and clears the map, respectively.