feat(loop): add streaming output mode with --stream flag#1605
Crunchyman-ralph merged 3 commits into `next` from
Conversation
🦋 Changeset detected — Latest commit: b428fdb. The changes in this PR will be included in the next version bump.
Note: Other AI code review bot(s) detected. CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

📝 Walkthrough

Adds verbose/streaming output and an option to omit full model output for the loop command; wires lifecycle and streaming callbacks into loop execution, validates incompatible options (verbose + sandbox), and updates the progress header and tests.
Sequence Diagram

sequenceDiagram
participant CLI as CLI Command
participant LS as Loop Service
participant EXEC as Executor (spawn/sandbox)
participant CB as Callbacks
CLI->>LS: startLoop(config + callbacks)
LS->>LS: validate options (verbose + sandbox)
LS->>CB: onIterationStart(i, total)
alt verbose/stream enabled
LS->>EXEC: spawn with streaming args
EXEC-->>LS: stream of JSON lines/events
LS->>LS: buffer & parse lines
LS->>CB: onText(text) / onToolUse(tool) / onStderr(iter, text)
LS->>CB: onOutput(finalOutput) [if includeOutput]
else non-stream
LS->>EXEC: spawn normally
EXEC-->>LS: full result
LS->>CB: onOutput(result) [if includeOutput]
end
LS->>CB: onIterationEnd(iteration)
LS->>CB: onError(message) [if error]
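The "buffer & parse lines" step in the diagram can be sketched as a minimal line-buffered parser for `stream-json` output. This is an illustrative shape, not the actual tm-core code (`createLineParser` is a hypothetical name): stream chunks may split a JSON line at any byte, so only complete lines are parsed and the trailing partial line is retained in the buffer.

```typescript
type StreamEvent = { type: string; [key: string]: unknown };

// Hypothetical sketch of line-buffered stream-json parsing.
function createLineParser(onEvent: (e: StreamEvent) => void) {
	let buffer = '';
	return {
		push(chunk: string) {
			buffer += chunk;
			const lines = buffer.split('\n');
			// The last element may be a partial line; keep it for the next chunk.
			buffer = lines.pop() ?? '';
			for (const line of lines) {
				if (!line.trim()) continue;
				try {
					onEvent(JSON.parse(line));
				} catch {
					// Malformed line; real code would log this for debugging.
				}
			}
		},
		end() {
			// Flush whatever remains when stdout ends.
			if (buffer.trim()) {
				try {
					onEvent(JSON.parse(buffer));
				} catch {
					// ignore trailing garbage
				}
				buffer = '';
			}
		}
	};
}
```

Calling `end()` from the stdout `end` handler (and only processing the buffer once) is what the race-condition fixes in this PR are about.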
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs
Suggested reviewers
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing touches
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@apps/extension/package.json`:
- Line 278: The dependency "task-master-ai" in package.json is pinned to
"0.42.0-rc.0" causing workspace linking to resolve externally; update the
version spec for "task-master-ai" to "*" (matching other internal workspace
members like "@tm/core") so npm resolves the local workspace member instead of
the registry.
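The suggested dependency spec, as a fragment of `apps/extension/package.json` (surrounding entries omitted; `@tm/core` shown for comparison per the comment above):

```json
{
	"dependencies": {
		"@tm/core": "*",
		"task-master-ai": "*"
	}
}
```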
In `@CHANGELOG.md`:
- Around line 3-99: Add a Minor Changes entry for PR `#1605` to both the 0.42.0
and 0.42.0-rc.0 sections in CHANGELOG.md following the existing pattern ("-
[`#PR`](...) [`commit`] Thanks [`@author`]! - Description") that highlights the new
loop flags: add a bullet like "Add --stream and --no-output options to loop
command to stream live output or suppress output for unattended runs" (mirror
the exact text into both sections), include the PR link and commit hash
placeholder, and ensure it's placed alongside other "Minor Changes" entries
under the 0.42.0 and 0.42.0-rc.0 headings.
In `@packages/tm-core/src/modules/loop/services/loop.service.ts`:
- Around line 594-608: In handleStreamEvent, stop injecting extra newlines into
streamed chunks: remove the concatenated '\n' when calling onText (use
onText(block.text)) and stop logging with added line breaks (log the raw
block.text or remove the console.log) so chunks are emitted exactly as received;
ensure only existing newlines in block.text are preserved so parseCompletion can
match tags like <loop-complete> correctly.
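To illustrate why injected newlines break marker detection: a marker such as `<loop-complete>` may arrive split across two streamed chunks, and any separator added between chunks prevents a later substring match. A minimal sketch under that assumption (`scanForMarkers` is an illustrative name, not the actual tm-core API):

```typescript
// Completion markers the loop protocol looks for, per this PR.
const MARKERS = ['<loop-complete>', '<loop-blocked>'] as const;

// Accumulate chunks exactly as received -- joined with '', never '\n'.
function scanForMarkers(chunks: string[]): string | null {
	const joined = chunks.join('');
	for (const m of MARKERS) {
		if (joined.includes(m)) return m;
	}
	return null;
}
```

Joining the same chunks with `'\n'` would turn the marker into `<loop-\ncomplete>`, which `parseCompletion` could never match.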
Force-pushed from 8a888e6 to da17063
- Add --stream flag for real-time output display using stream-json format
- Add --no-output flag to exclude full output from iteration results
- Add validation rejecting incompatible stream + sandbox combination
- Fix race condition between error/close events with resolveOnce wrapper
- Fix null safety check for child.stdout before attaching listeners
- Fix race condition in buffer processing with proper stdout 'end' handling
- Add JSON parse error logging for debugging malformed events
- Add event structure validation before accessing properties
- Add stream cleanup on error (remove listeners, kill process)
- Enhance JSDoc to document config/result field relationships
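The resolveOnce wrapper mentioned in the commit message can be sketched as a once-guard around a promise's resolve; this is an illustrative shape (`makeResolveOnce` is a hypothetical name), not the actual tm-core implementation:

```typescript
// Wrap a resolve function so only the first call wins; later calls
// (e.g. a 'close' event firing after 'error' already resolved) are ignored.
function makeResolveOnce<T>(resolve: (value: T) => void): (value: T) => void {
	let settled = false;
	return (value: T) => {
		if (settled) return;
		settled = true;
		resolve(value);
	};
}
```

Both the child process `error` and `close` handlers can then call the wrapped function without risking a double resolution.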
Force-pushed from da17063 to 5657152
- Add LoopOutputCallbacks interface for presentation layer separation
- Move all console.log/error from loop.service.ts to CLI callbacks
- CLI provides chalk-formatted callback implementations
- Move stream+sandbox validation to run() start (fail once, not per iteration)
- Simplify streaming: use result event for output, onText for display only
- Export LoopOutputCallbacks from tm-core public API

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@apps/cli/src/commands/loop.command.ts`:
- Around line 183-187: The onIterationStart callback in loop.command.ts has formatting issues flagged by the Biome formatter. Run the formatter (`biome format .`) or reformat the onIterationStart block (the anonymous function handling iteration logging in the loop command) to match project style and whitespace conventions so CI passes; ensure the console.log and chalk.cyan lines follow Biome rules, then commit the formatted file.
♻️ Duplicate comments (3)
packages/tm-core/src/modules/loop/services/loop.service.ts (3)
417-426: Child process not killed when stdout is null.

When `child.stdout` is null, the code resolves with an error but the spawned process at line 380 continues running. Add `child.kill('SIGTERM')` before returning to prevent leaking the process.

🐛 Proposed fix

```diff
 		// Handle null stdout (shouldn't happen with pipe, but be defensive)
 		if (!child.stdout) {
+			child.kill('SIGTERM');
 			resolveOnce(
 				this.createErrorIteration(
 					iterationNum,
 					startTime,
 					'Failed to capture stdout from child process'
 				)
 			);
 			return;
 		}
```
479-483: Clear buffer after processing in close handler to prevent duplicates.

If `close` fires before `end`, the buffer is processed but not cleared. If `end` subsequently fires, the same content could be processed again. Clear the buffer after processing.

🐛 Proposed fix

```diff
 		child.on('close', (exitCode: number | null) => {
 			// Process remaining buffer only if stdout hasn't already ended
 			if (!stdoutEnded && buffer) {
 				processLine(buffer);
+				buffer = '';
 			}
```
454-457: Streaming mode omits stderr from captured output.

The `LoopIteration.output` documentation states it "Contains concatenated stdout and stderr." However, in streaming mode, stderr is only sent to the callback (line 456) but not accumulated into `finalResult`. When `includeOutput=true`, the returned `output` will be incomplete. Consider accumulating stderr alongside the result events:

🐛 Proposed fix

```diff
+		let stderrOutput = '';
+
 		child.stderr?.on('data', (data: Buffer) => {
 			const stderrText = data.toString('utf-8');
 			callbacks?.onStderr?.(stderrText, iterationNum);
+			stderrOutput += stderrText;
 		});

 		// ... in close handler:
 		resolveOnce({
 			iteration: iterationNum,
 			status,
 			duration: Date.now() - startTime,
 			message,
-			...(includeOutput && { output: finalResult })
+			...(includeOutput && { output: finalResult + stderrOutput })
 		});
```

Also applies to: 496-503
```ts
// Capture final result for includeOutput feature
if (event.type === 'result') {
	finalResult = typeof event.result === 'string' ? event.result : '';
}
```
Streaming mode fails to detect loop completion markers
High Severity
In streaming mode, finalResult is only populated from event.type === 'result' events, but the <loop-complete> and <loop-blocked> markers that Claude outputs appear in assistant event text content. The handleStreamEvent function displays this text via callbacks but doesn't accumulate it. When parseCompletion(finalResult, exitCode) runs on close, the markers won't be found, causing the loop to continue running when it should have stopped at 'all_complete' or 'blocked' status.
Additional Locations (1)
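A hypothetical fix sketch for the issue above (names are illustrative, not the actual tm-core code): accumulate text from assistant events alongside the final result event, so markers emitted in streamed text survive into the string that parseCompletion scans.

```typescript
type AnyEvent = {
	type: string;
	result?: unknown;
	message?: { content?: Array<{ type: string; text?: string }> };
};

// Hypothetical accumulator: collects assistant text blocks (which may
// contain <loop-complete> / <loop-blocked>) as well as the result event.
function createOutputAccumulator(onText?: (t: string) => void) {
	let output = '';
	return {
		handle(event: AnyEvent) {
			if (event.type === 'assistant') {
				for (const block of event.message?.content ?? []) {
					if (block.type === 'text' && typeof block.text === 'string') {
						output += block.text; // keep markers for parseCompletion
						onText?.(block.text); // display only; no extra newlines
					}
				}
			} else if (event.type === 'result' && typeof event.result === 'string') {
				output += event.result;
			}
		},
		get(): string {
			return output;
		}
	};
}
```

With this shape, `parseCompletion(accumulator.get(), exitCode)` would see markers even when they only appeared in streamed assistant text.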
Cursor Bugbot has reviewed your changes and found 1 potential issue.
Bugbot Autofix is OFF. To automatically fix reported issues with Cloud Agents, enable Autofix in the Cursor dashboard.
```ts
			)
		);
		return;
	}
```
Spawned process not killed when stdout check fails
Medium Severity
In executeVerboseIteration, when child.stdout is unexpectedly null, the code returns early with an error but does not kill the spawned child process. The process was already started at line 403 via spawn(). Without calling child.kill() before returning, the Claude or Docker process continues running as an orphan, causing a resource leak.
Summary
- `--stream` flag for real-time output display using Claude's `stream-json` format
- `--no-output` flag to exclude full output from iteration results (saves memory)

Changes
New Features
- `task-master loop --stream` displays Claude's output in real-time as it generates
- `--no-output` prevents storing large output in iteration results

Bug Fixes & Improvements
- Fix race condition between `error` and `close` events (prevent multiple promise resolutions)
- Add null safety check for `child.stdout` before attaching listeners
- Proper stdout `end` event handling
- Reject incompatible `--stream` + `--sandbox` combination

Test plan
- `task-master loop --stream -n 1`
- `task-master loop --no-output -n 1`
- `task-master loop --stream --sandbox` (should error)

Note
Introduces real-time loop streaming and configurable output retention, plus robustness and UX improvements across CLI and core.
- Adds `-v, --verbose` to stream Claude output (thinking, tool calls) and `--no-output` to skip storing full output; defaults `includeOutput` to true; surfaces errors via callbacks; shows iteration progress and final error message when present; auto-includes auth `brief` in progress file
- Extends `LoopConfig`/`LoopResult` with `includeOutput`, `verbose`, `brief`, `callbacks`, and optional `errorMessage`; `LoopService` implements verbose streaming via `spawn` and `--output-format stream-json`, fixes race conditions, validates incompatible `verbose + sandbox`, centralizes error reporting, updates the progress header, and removes `tasks.json` from the context header
- Updates `apps/extension` dependency to `task-master-ai: "*"`

Written by Cursor Bugbot for commit b428fdb. This will update automatically on new commits.
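The up-front incompatibility check described above (fail once at the start of `run()`, not per iteration) can be sketched as a simple validator; field and function names here are illustrative, not the actual tm-core API:

```typescript
// Hypothetical sketch of the verbose/stream + sandbox validation.
// Streaming reads Claude's stdout directly, which the sandboxed (Docker)
// execution path cannot provide, so the combination is rejected early.
function validateLoopOptions(opts: {
	verbose?: boolean;
	sandbox?: boolean;
}): string | null {
	if (opts.verbose && opts.sandbox) {
		return 'The --stream/--verbose option cannot be combined with --sandbox';
	}
	return null; // null means the options are compatible
}
```

Returning an error message (rather than throwing) lets the CLI route it through the `onError` callback like any other loop error.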
Summary by CodeRabbit
New Features
Bug Fixes
Chores / Docs