feat: cluster detection and execution pipeline visualization #1628

Crunchyman-ralph merged 8 commits into next
Conversation
Implement comprehensive PR creation infrastructure that enables creating GitHub pull requests per execution cluster with full traceability and metadata.

Components added:
- GitHubPRService: core PR creation service using the gh CLI
- PRBodyFormatter: reusable PR body generator with run metadata
- ClusterPRIntegration: integration layer for workflow orchestration
- autopilot_create_pr MCP tool: API endpoint for PR creation

Features:
- Cluster-to-PR mapping for traceability
- Comprehensive PR bodies with subtasks, test results, and commits
- Dry-run mode for validation
- Auto-merge and draft PR support
- Activity logging for all PR operations
- Full test coverage for all components

This enables the workflow to automatically create PRs when clusters complete execution, linking them back to tasks and providing clear audit trails.

Co-Authored-By: Claude <noreply@anthropic.com>
…cution

Implement comprehensive cluster execution infrastructure enabling automatic detection and execution of tasks in parallel within clusters and sequentially across clusters. This implementation provides:

1. **Cluster Detection Engine (TAS-150)**:
   - Topological sort algorithm for dependency analysis
   - Automatic grouping of tasks into execution clusters by level
   - Circular dependency detection with clear error reporting
   - Cluster metadata generation with upstream/downstream relationships
   - Efficient caching with invalidation support
2. **Parallel Task Execution (TAS-153)**:
   - Worker pool pattern for concurrent task execution within clusters
   - Resource constraint management (max concurrent tasks, timeouts)
   - Task isolation with proper error boundaries
   - Real-time progress tracking with event emission
   - Graceful failure handling and task cancellation
3. **Sequential Cluster Execution (TAS-156)**:
   - Cluster-to-cluster transition orchestration
   - Dependency-aware cluster readiness validation
   - Automatic blocking of downstream clusters on failure
   - Retry logic with configurable backoff
   - Continue-on-failure option for resilient execution
4. **Real-time Progress Tracking (TAS-159)**:
   - Event-driven progress updates (non-blocking)
   - Task and cluster status tracking with timestamps
   - Execution timeline generation
   - Checkpoint persistence for resumability
   - Atomic state saves with temp-file-rename pattern
5. **Tag-level Execution Orchestration (TAS-160)**:
   - Tag context management with cluster sequencing
   - Tag dependency resolution and readiness validation
   - Checkpoint creation at tag boundaries
   - Resume support from any cluster checkpoint
6. **Project-level Execution Orchestration (TAS-161)**:
   - Multi-tag execution with topological sort
   - Project dependency graph management
   - Automatic blocking of downstream tags on failure
   - Comprehensive project execution metrics

Key Features:
- Deterministic execution order (same graph = same order)
- Handles edge cases: single-task clusters, deeply nested dependencies (100+ levels), wide clusters (50+ tasks), mixed tag/task dependencies
- Efficient progress computation with caching
- Full event system for UI integration
- Comprehensive unit tests for all services

Co-Authored-By: Claude <noreply@anthropic.com>
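For readers unfamiliar with level-based clustering, the detection step described above can be sketched roughly as follows. Type and function names here are illustrative, not the actual `ClusterDetectionService` API: each task's level is one more than the maximum level of its dependencies, same-level tasks form a cluster that may run in parallel, and a cycle is surfaced with its path instead of being silently swallowed.

```typescript
// Sketch of level-based cluster detection (illustrative, not the PR's API).
type Task = { id: string; dependencies: string[] };

function detectClusters(tasks: Task[]): Map<number, string[]> {
  const byId = new Map(tasks.map((t) => [t.id, t]));
  const levels = new Map<string, number>();
  const visiting = new Set<string>();

  const levelOf = (id: string, path: string[] = []): number => {
    const cached = levels.get(id);
    if (cached !== undefined) return cached;
    if (visiting.has(id)) {
      // Surface the cycle with its path rather than continuing silently
      throw new Error(`Circular dependency: ${[...path, id].join(' -> ')}`);
    }
    visiting.add(id);
    const deps = byId.get(id)?.dependencies ?? [];
    const level =
      deps.length === 0
        ? 0
        : 1 + Math.max(...deps.map((d) => levelOf(d, [...path, id])));
    visiting.delete(id);
    levels.set(id, level);
    return level;
  };

  // Group tasks by level: one cluster per level, levels run sequentially
  const clusters = new Map<number, string[]>();
  for (const t of tasks) {
    const level = levelOf(t.id);
    const group = clusters.get(level) ?? [];
    group.push(t.id);
    clusters.set(level, group);
  }
  return clusters;
}
```

Because levels are memoized, the same graph always yields the same grouping, matching the "same graph = same order" determinism claim above.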
…execution order

Users can define dependencies between tasks and tags, then visualize the complete dependency structure to understand execution order and parallelism opportunities.

Task-ID: affcce9e-8973-4138-9fea-b42d617177d9
Brief-ID: 50a9c80b-a71c-41fe-92ab-c1c043bbb997
… tests
Address code review findings across the cluster, git, and workflow modules:
- Add event listener failure tracking with escalation at 3 consecutive failures
- Apply immutable update patterns for execution context and progress state
- Add NaN guards, missing-reference warnings, and structured error codes
- Align status values ('completed' → 'done') with ClusterStatus type
- Rename unit test files from .test.ts to .spec.ts per project conventions
- Add 105 new tests for progress-tracker, project-orchestrator, and tag-orchestrator
- Fix Zod v4 z.record() call in MCP create-pr tool
- Remove unused imports and self-evident comments
Tags can now express dependencies on other tags via `dependsOn` stored in tag metadata. This enables dependency-aware clustering, execution ordering, and visualization through the existing `clusters` command.

- Add `dependsOn?: string[]` to TagInfo interface
- Add addTagDependency/removeTagDependency/getTagDependencies to IStorage
- Implement in FileStorage (atomic via modifyJson) and ApiStorage (stubs)
- TagService validates: both tags exist, no self-deps, circular dep DFS
- Expose on TasksDomain facade
- Replace hardcoded test deps in clusters command with real stored deps
- Add `tags add-dep`, `tags remove-dep`, `tags deps` CLI subcommands
- Integration tests covering add/remove/circular/persistence (19 tests)
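The "circular dep DFS" validation mentioned above can be sketched as a reachability check run before the new edge is stored; names are illustrative, not the TagService implementation. Adding `tag -> dependsOn` creates a cycle exactly when `tag` is already reachable from `dependsOn` through existing edges:

```typescript
// Sketch of the circular-dependency check for tag dependencies.
// `deps` maps a tag to the tags it depends on (illustrative shape).
function wouldCreateCycle(
  deps: Map<string, string[]>,
  tag: string,
  dependsOn: string
): boolean {
  if (tag === dependsOn) return true; // self-dependency
  const stack = [dependsOn];
  const seen = new Set<string>();
  while (stack.length > 0) {
    const current = stack.pop()!;
    if (current === tag) return true; // the new edge would close a cycle
    if (seen.has(current)) continue;
    seen.add(current);
    for (const next of deps.get(current) ?? []) stack.push(next);
  }
  return false;
}
```

Running the check before persisting keeps the stored graph acyclic, so later topological sorting never has to recover from a bad state.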
Surface cluster execution order directly in tm list as a visual boxen component with per-cluster progress bars, lane-based turn layout, and parallel group indicators.
🦋 Changeset detected

Latest commit: 79ca917

The changes in this PR will be included in the next version bump.
Note: Reviews paused. This branch is under active development, so CodeRabbit has automatically paused its review to avoid an influx of comments on new commits; this behavior is configurable.
📝 Walkthrough

Adds a cluster execution subsystem (detection, sequencing, parallel executor, progress tracker, orchestrators), inter-tag dependency storage and APIs, and CLI integration (a new `clusters` command).
Sequence Diagram(s)

sequenceDiagram
actor User
participant CLI as "tm clusters"
participant Storage as FileStorage
participant Detection as ClusterDetectionService
participant Renderer as "Mermaid/ASCII/JSON Renderer"
User->>CLI: run `tm clusters --tag=<tag>`
CLI->>Storage: getTagsWithStats / get tasks for tag
Storage-->>CLI: tasks[]
CLI->>Detection: detectClusters(tasks)
Detection->>Detection: build graph, topo-sort, detect cycles
alt circular dependency
Detection-->>CLI: CircularDependencyResult
CLI->>Renderer: render circular dependency guidance
Renderer-->>User: display
else success
Detection-->>CLI: ClusterDetectionResult
CLI->>Renderer: render pipeline / mermaid / JSON
Renderer-->>User: display visualization
end
sequenceDiagram
participant Orchestrator as ProjectOrchestratorService
participant Storage as FileStorage
participant TagOrch as TagOrchestratorService
participant Sequencer as ClusterSequencerService
participant Executor as ParallelExecutorService
participant Tracker as ProgressTrackerService
Orchestrator->>Storage: get tag dependencies
Storage-->>Orchestrator: tag deps
Orchestrator->>TagOrch: executeTag(tagA)
TagOrch->>Sequencer: executeClusters(detection, executor)
Sequencer->>Executor: executeCluster(clusterX)
Executor->>Executor: run tasks (bounded concurrency)
Executor-->>Sequencer: ClusterExecutionResult
Sequencer->>Tracker: emit task/cluster events
Tracker-->>Orchestrator: progress:updated
Sequencer-->>TagOrch: sequencer result
TagOrch-->>Orchestrator: TagExecutionResult
Orchestrator->>Orchestrator: update downstream readiness / continue
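The "run tasks (bounded concurrency)" step in the diagram above can be illustrated with a minimal worker-pool sketch. This is not the actual `ParallelExecutorService`; it only shows the core idea: at most `limit` tasks are in flight at once, and each lane pulls the next queued item as it frees up (`limit` is assumed to be at least 1):

```typescript
// Sketch of bounded-concurrency execution within a cluster (illustrative).
async function runWithConcurrency<T, R>(
  items: T[],
  limit: number,
  worker: (item: T) => Promise<R>
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;
  const lane = async (): Promise<void> => {
    while (next < items.length) {
      const index = next++; // claim the next unprocessed item
      results[index] = await worker(items[index]);
    }
  };
  // Spawn up to `limit` lanes and wait for the queue to drain
  const lanes = Array.from({ length: Math.min(limit, items.length) }, lane);
  await Promise.all(lanes);
  return results;
}
```

Results are written by index, so output order matches input order even though completion order varies.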
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
🚥 Pre-merge checks: ✅ 3 passed
packages/tm-core/src/modules/cluster/services/cluster-sequencer.service.ts
    levels.set(cluster.level, group);
  }
  return levels;
}
Duplicated groupClustersByLevel function across two files
Low Severity
groupClustersByLevel is implemented identically in both cluster-pipeline.component.ts (standalone function) and clusters.command.ts (private method). Both accept readonly ClusterMetadata[] and return the same Map<number, ClusterMetadata[]>. This duplication increases maintenance burden — a bug fix in one location could easily be missed in the other.
Additional Locations (1)
packages/tm-core/src/modules/cluster/services/progress-tracker.service.ts
packages/tm-core/src/modules/cluster/services/tag-orchestrator.service.ts
Actionable comments posted: 11
🤖 Fix all issues with AI agents
In `@apps/cli/src/ui/components/cluster-pipeline.component.ts`:
- Around line 47-55: If total can be zero, the current calculation doneBlocks =
Math.round((done / total) * displayCount) can produce NaN; guard against
division by zero by short-circuiting when total === 0 so doneBlocks and
remainingBlocks are set to 0 and bar becomes an empty/zero-state render (or a
suitable placeholder). Update the logic around
displayCount/doneBlocks/remainingBlocks/bar in cluster-pipeline.component.ts to
check total === 0 before computing (done / total) and compute doneBlocks = 0,
remainingBlocks = displayCount (or 0 if displayCount is 0) in that branch so no
division occurs and bar renders predictably.
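The guard described in this prompt can be sketched as a small helper; the function name, width, and glyphs are illustrative, not the component's actual code. The key point is short-circuiting on `total === 0` so `(done / total)` is never evaluated:

```typescript
// Sketch of the zero-guarded progress-bar math (illustrative names).
function renderBar(done: number, total: number, width = 10): string {
  // When total is 0 there is nothing to show; avoid NaN from 0/0
  const doneBlocks =
    total === 0 ? 0 : Math.min(width, Math.round((done / total) * width));
  const remainingBlocks = width - doneBlocks;
  return '█'.repeat(doneBlocks) + '░'.repeat(remainingBlocks);
}
```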
In `@apps/cli/src/utils/pager.ts`:
- Around line 44-55: The fallback when calling spawnSync('less', ...) currently
only checks result.status and misses the case where less is unavailable
(result.error set and result.status === null); update the condition around
spawnSync usage (reference: spawnSync and result variables) to treat
result.error truthy OR (result.status !== 0 && result.status !== null) as a
failure and invoke the fallback. Replace direct console.log calls in this module
with the central logging utility used elsewhere (avoid using console.log in the
blocks that print content and errors). Finally, add complete JSDoc for the
visibleLength function including `@param` and `@return` tags describing inputs and
the returned visible character length to satisfy documentation requirements.
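The failure predicate this prompt asks for can be sketched against the `status`/`error` shape Node's `spawnSync` result exposes; the helper name is hypothetical. A spawn failure (e.g. `less` not installed) sets `result.error` and leaves `result.status === null`, so checking status alone misses it:

```typescript
// Sketch of the corrected pager-fallback predicate (illustrative helper).
interface SpawnResultLike {
  status: number | null;
  error?: Error;
}

function pagerFailed(result: SpawnResultLike): boolean {
  // ENOENT and similar spawn errors leave status null and set error
  if (result.error) return true;
  // Otherwise fall back only on a real non-zero exit status
  return result.status !== null && result.status !== 0;
}
```

The caller would print the content directly whenever `pagerFailed(result)` is true.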
In `@package.json`:
- Around line 124-125: Remove the workspace-specific dependency
"beautiful-mermaid" from the root package.json's "dependencies" section and
ensure it remains declared only in the CLI workspace package.json; update the
root package.json by deleting the "beautiful-mermaid": "^0.1.3" entry from its
dependencies block and run your workspace install (or regenerate lockfile) so
the workspace state stays consistent.
In `@packages/tm-core/src/modules/cluster/services/cluster-sequencer.service.ts`:
- Around line 319-326: executeCluster currently maps a failed result to
'blocked' while executeClusters uses 'failed', causing inconsistent cluster
status handling; update the finalStatus determination in executeCluster (where
finalStatus is set from result.success) to use 'failed' instead of 'blocked' and
ensure the call to clusterDetector.updateClusterStatus(detection, clusterId,
finalStatus) then consistently reports failures as 'failed' to match
executeClusters.
In `@packages/tm-core/src/modules/cluster/services/parallel-executor.service.ts`:
- Around line 245-260: The timeout Promise created in
parallel-executor.service.ts leaks its setTimeout when executor(task) resolves
first; modify the race to track the timer id (or use an AbortController) so that
when the executor completes (the winning promise) you clearTimeout(timerId) (or
call abort()) to cancel the pending timeout; update the timeoutPromise creation
and the Promise.race handling around executor(task) and timeoutPromise to ensure
the timer is cleared in the success path and that the TaskMasterError reject
logic remains unchanged.
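The timer cleanup this prompt describes can be sketched as a small helper (the name is illustrative, not the service's code). The key point is clearing the timer in a `finally` block so it is cancelled on both the success path and the timeout path:

```typescript
// Sketch of a leak-free timeout race: the pending setTimeout is
// cleared once either promise settles (illustrative helper).
async function withTimeout<T>(work: Promise<T>, ms: number): Promise<T> {
  let timerId: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timerId = setTimeout(() => reject(new Error(`Timed out after ${ms}ms`)), ms);
  });
  try {
    return await Promise.race([work, timeout]);
  } finally {
    clearTimeout(timerId); // cancel the pending timer on every path
  }
}
```

Without the `clearTimeout`, a fast-completing task leaves the timer armed, which keeps the event loop alive and can fire a stray rejection later.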
In `@packages/tm-core/src/modules/cluster/services/progress-tracker.service.ts`:
- Around line 262-268: In progress-tracker.service.ts the failedTasks and
blockedTasks counters both filter taskProgress for t.status === 'blocked';
update the failedTasks calculation (in the method where failedTasks is computed)
to filter for the actual failure status (e.g., t.status === 'failed' or the
corresponding TaskStatus.FAILED enum used elsewhere) so failedTasks counts
failed tasks while blockedTasks remains t.status === 'blocked'.
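A single-pass tally keeps the 'failed' and 'blocked' predicates distinct by construction, which is the mix-up this prompt fixes; the type below is illustrative, not the tracker's actual `TaskProgress`:

```typescript
// Sketch of one-pass status counting (illustrative type).
type TaskProgress = {
  status: 'pending' | 'in-progress' | 'done' | 'failed' | 'blocked';
};

function countStatuses(
  tasks: TaskProgress[]
): Record<TaskProgress['status'], number> {
  const counts = { pending: 0, 'in-progress': 0, done: 0, failed: 0, blocked: 0 };
  for (const t of tasks) counts[t.status]++; // each status tallied exactly once
  return counts;
}
```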
In `@packages/tm-core/src/modules/cluster/services/tag-cluster.service.ts`:
- Around line 58-71: The cycle guard in getLevel silently returns 0 when
visited.has(tag), hiding dependency cycles; change this to surface the cycle
(consistent with ClusterDetectionService) by throwing an Error that includes the
cycle path (build the path from the visited set plus the current tag) or by
adding the cycle info into the result object so callers can handle it; update
getLevel, levelMap, depMap usage and any callers to expect a thrown Error or a
cycles field in the result accordingly.
- Around line 86-90: The dependsOn computation currently assumes contiguous
numeric levels; instead derive dependencies by inspecting actual tag-to-tag
dependencies: for each cluster (use TagCluster objects built from sortedLevels
and grouped), iterate its tags and collect the levels of all their dependency
tags from the original tag dependency structure (e.g., the tag->dependency list
or adjacency map used earlier), map those dependent tags to their cluster levels
as present in sortedLevels/grouped, dedupe and exclude the cluster's own level,
sort the resulting level list, and assign that array to dependsOn (so clusters
reflect real cross-level dependencies rather than just level-1).
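The derivation this prompt describes can be sketched as follows, assuming a tag-to-dependencies map and a tag-to-level map are already available; all names are illustrative. Each cluster's `dependsOn` is the set of levels its tags' real dependencies live in, deduped, excluding its own level, and sorted:

```typescript
// Sketch: derive a cluster's dependsOn levels from actual tag
// dependencies instead of assuming contiguous levels (illustrative).
function clusterDependsOn(
  clusterTags: string[],
  clusterLevel: number,
  tagDeps: Map<string, string[]>,
  levelOfTag: Map<string, number>
): number[] {
  const levels = new Set<number>();
  for (const tag of clusterTags) {
    for (const dep of tagDeps.get(tag) ?? []) {
      const level = levelOfTag.get(dep);
      // Collect real dependency levels, excluding the cluster's own
      if (level !== undefined && level !== clusterLevel) levels.add(level);
    }
  }
  return [...levels].sort((a, b) => a - b);
}
```

A cluster at level 3 whose tags depend on tags at levels 0 and 1 then reports `dependsOn: [0, 1]` rather than the hardcoded `[2]` a level-minus-one rule would give.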
In `@packages/tm-core/src/modules/cluster/services/tag-orchestrator.service.ts`:
- Around line 81-88: The constructor attaches an event listener to
this.progressTracker which becomes orphaned when executeTag replaces
this.progressTracker with a new ProgressTracker; update executeTag (or the code
path where this.progressTracker is reassigned) to re-wire the same listener to
the newly created progressTracker instance (i.e., call the same addEventListener
logic used in the constructor for the new this.progressTracker) so events from
the replacement progressTracker are emitted via emitEvent and handled by
progressTracker.handleEvent; reference the constructor wiring
(this.progressTracker.addEventListener(...)), the executeTag method where
this.progressTracker is reassigned, and the methods emitEvent and
progressTracker.handleEvent when making the fix.
In `@packages/tm-core/src/modules/git/services/github-pr.service.ts`:
- Around line 328-388: The createPRViaGhCLI function currently builds a shell
command string and calls execAsync, which is vulnerable to injection; replace
the string-building and execAsync call with a direct execFile-style invocation
that runs the "gh" binary with the args array (the same args constructed in this
method, including '--label' and joined labels, and conditional '--draft') and
pass { cwd: this.projectRoot } so the subprocess receives arguments directly (no
shell interpolation), then continue to parse stdout for the PR URL/number as
before; update error handling to surface the execFile error message the same way
the current catch block does.
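The execFile-style invocation this prompt asks for can be sketched like this; the argv builder's input shape is illustrative, not the PR's actual `PRClusterInput`. Because arguments go to the `gh` binary as an array, nothing is ever interpreted by a shell:

```typescript
// Sketch of an injection-safe `gh pr create` call (illustrative shapes).
import { execFile } from 'node:child_process';
import { promisify } from 'node:util';

const execFileAsync = promisify(execFile);

function buildGhPrArgs(input: {
  title: string;
  body: string;
  base: string;
  labels: string[];
  draft: boolean;
}): string[] {
  const args = [
    'pr', 'create',
    '--title', input.title,
    '--body', input.body,
    '--base', input.base
  ];
  if (input.labels.length > 0) args.push('--label', input.labels.join(','));
  if (input.draft) args.push('--draft');
  return args;
}

async function createPr(args: string[], cwd: string): Promise<string> {
  // argv is passed directly to the gh binary; no shell interpolation
  const { stdout } = await execFileAsync('gh', args, { cwd });
  return stdout.trim(); // gh prints the created PR's URL on stdout
}
```

A title such as `feat: x; rm -rf /` is delivered verbatim as a single argument instead of being split or executed.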
In `@packages/tm-core/src/modules/workflow/services/cluster-pr-integration.ts`:
- Around line 100-111: The code builds a PRClusterInput with branchName
defaulting to '' which later fails validation in createPR; add an explicit
precondition check where clusterInput is created (in cluster-pr-integration.ts)
to validate branchName and throw a clear error if neither branchName nor
workflowContext.branchName is provided (e.g., "branchName is required for
createPR"), or alternatively update the ClusterCompletionEvent type to require
branchName and adjust callers; reference the PRClusterInput construction and
createPR usage to locate where to add the validation/throw so downstream
validation never receives an empty string.
🧹 Nitpick comments (17)
packages/tm-core/src/modules/git/services/github-pr.service.spec.ts (1)
40-54: Tests lack assertions on the actual generated content. The test is named "should generate PR title with conventional commit format" but only asserts `result.success`. Consider verifying that `result.title` (or the equivalent field from `prResult`) actually contains the expected conventional commit format or task ID.

Similar concern applies to other title/body generation tests (lines 56-99, 198-246, 248-331) where the test names describe specific behavior but assertions only check success flags.
packages/tm-core/src/modules/workflow/services/cluster-pr-integration.ts (1)
217-240: Direct mutation of `context.metadata` may cause unintended side effects. The method mutates the passed `WorkflowContext` object directly. If the context is shared across multiple handlers or expected to be immutable, this could lead to race conditions or unexpected state changes.

Consider returning the updated metadata and letting the caller decide how to apply it, or documenting that this method intentionally mutates the context.
packages/tm-core/src/modules/workflow/services/cluster-pr-integration.spec.ts (1)
259-279: Accessing private properties via bracket notation couples tests to implementation details. Tests like `integration['options'].dryRun` bypass TypeScript's access modifiers. If the internal structure changes, these tests break even if the public API behavior remains correct.

Consider adding public getter methods for testable state, or testing through observable behavior (e.g., verifying PR creation behavior changes when options change).
apps/cli/src/command-registry.ts (1)
126-131: Consider grouping with other task management commands. The `clusters` command is placed between 'context' (auth category) and 'tags' (task category). For better discoverability in help output, consider moving it closer to other task analysis commands like 'list', 'show', or 'next' in lines 43-74.

packages/tm-core/src/modules/cluster/services/parallel-executor.service.spec.ts (2)
392-396: Weak assertion may mask test failures. The assertion `expect(count).toBeGreaterThanOrEqual(0)` will always pass since count cannot be negative. The comment on line 394 acknowledges the race condition, but this effectively makes the test a no-op.

Consider using a longer delay or restructuring to reliably capture the in-progress state:
💡 Suggested improvement
- // Wait a bit for execution to start
- await new Promise((resolve) => setTimeout(resolve, 10));
-
- // Should have 1 active execution (note: this might be 0 if task completes quickly)
- const count = service.getActiveExecutionCount();
- expect(count).toBeGreaterThanOrEqual(0);
+ // Wait for execution to start
+ await new Promise((resolve) => setTimeout(resolve, 20));
+
+ // Task has 100ms delay, so should still be active
+ const count = service.getActiveExecutionCount();
+ expect(count).toBe(1);
403-411: Event listener test doesn't verify listener was working before removal.The test only checks that a removed listener isn't called, but doesn't verify the listener would have been called if not removed. Consider adding an assertion that demonstrates the listener works when added.
💡 Suggested improvement
 describe('event listeners', () => {
-  it('should add and remove event listeners', () => {
+  it('should add and remove event listeners', async () => {
     const listener = vi.fn();
+    const tasks: Task[] = [{
+      id: '1',
+      title: 'Task 1',
+      description: '',
+      status: 'pending',
+      priority: 'medium',
+      dependencies: [],
+      details: '',
+      testStrategy: '',
+      subtasks: []
+    }];
+    const cluster: ClusterMetadata = {
+      clusterId: 'cluster-0',
+      level: 0,
+      taskIds: ['1'],
+      upstreamClusters: [],
+      downstreamClusters: [],
+      status: 'ready'
+    };
+    const executor = vi.fn(async () => ({
+      taskId: '1',
+      success: true,
+      startTime: new Date(),
+      endTime: new Date(),
+      duration: 10
+    }));
     service.addEventListener(listener);
+    await service.executeCluster(cluster, tasks, executor);
+    expect(listener).toHaveBeenCalled();
+
+    listener.mockClear();
     service.removeEventListener(listener);
-
-    // Listener should not be called after removal
-    expect(listener).not.toHaveBeenCalled();
+    await service.executeCluster(cluster, tasks, executor);
+    expect(listener).not.toHaveBeenCalled();
   });
 });

apps/mcp/src/tools/autopilot/create-pr.tool.ts (1)
155-156: Hardcoded activity log path reduces flexibility. The path `.taskmaster/state/activity.jsonl` is hardcoded. Consider making this configurable via the schema or deriving it from projectRoot:

💡 Suggested improvement

+const DEFAULT_ACTIVITY_LOG_PATH = '.taskmaster/state/activity.jsonl';
+
 const CreatePRSchema = z.object({
   // ... existing fields ...
+  activityLogPath: z
+    .string()
+    .optional()
+    .default(DEFAULT_ACTIVITY_LOG_PATH)
+    .describe('Path to activity log file (relative to project root)'),
   metadata: z

Then use `args.activityLogPath` in the integration config.

apps/cli/src/ui/components/cluster-pipeline.component.spec.ts (1)
122-145: Redundant cluster manipulation in test setup. The test creates clusters with `clusterId` overrides (lines 124-131), then immediately overwrites the same clusters again (lines 136-145). This is confusing and redundant.

♻️ Simplified test setup

 const detection = createDetection([
   createCluster(0, ['1']),
-  {
-    ...createCluster(1, ['2', '3'], { clusterId: 'cluster-1a' }),
-    clusterId: 'cluster-1a'
-  },
-  {
-    ...createCluster(1, ['4'], { clusterId: 'cluster-1b' }),
-    clusterId: 'cluster-1b'
-  },
-  createCluster(2, ['4'])
+  createCluster(1, ['2', '3'], { clusterId: 'cluster-1a', upstreamClusters: ['cluster-0'] }),
+  createCluster(1, ['4'], { clusterId: 'cluster-1b', upstreamClusters: ['cluster-0'] }),
+  createCluster(2, ['5'], { upstreamClusters: ['cluster-1a', 'cluster-1b'] })
 ]);
-
- // Ensure two clusters have the same level for parallel
- detection.clusters[1] = {
-   ...detection.clusters[1],
-   level: 1,
-   clusterId: 'cluster-1a'
- };
- detection.clusters[2] = {
-   ...detection.clusters[2],
-   level: 1,
-   clusterId: 'cluster-1b'
- };

packages/tm-core/src/modules/tasks/services/tag.service.ts (1)
350-361: Consider validating `dependsOn` tag existence before removal. The method validates that `tag` exists but doesn't check if `dependsOn` exists. While the storage layer might handle this gracefully, explicit validation would provide clearer error messages to users attempting to remove non-existent dependencies.

💡 Optional validation

 async removeTagDependency(tag: string, dependsOn: string): Promise<void> {
   const allTags = await this.storage.getAllTags();
   if (!allTags.includes(tag)) {
     throw new TaskMasterError(
       `Tag "${tag}" does not exist`,
       ERROR_CODES.NOT_FOUND,
       { tagName: tag }
     );
   }
+  if (!allTags.includes(dependsOn)) {
+    throw new TaskMasterError(
+      `Dependency tag "${dependsOn}" does not exist`,
+      ERROR_CODES.NOT_FOUND,
+      { tagName: dependsOn }
+    );
+  }
   await this.storage.removeTagDependency(tag, dependsOn);
 }

apps/cli/src/commands/clusters.command.ts (1)
106-109: Consider throwing errors instead of process.exit for testability. Using `process.exit(1)` directly (lines 109, 136) makes the command harder to test in isolation. Consider throwing an error that the CLI framework can catch and convert to an exit code, or use Commander's `error()` method.

♻️ Optional refactor for improved testability

 if (detection.hasCircularDependencies) {
   this.renderCircularDependencyError(detection);
-  process.exit(1);
+  throw new Error('Circular dependency detected');
 }

 } else if (options.diagram) {
   console.error(chalk.red(`Unsupported diagram type: ${options.diagram}`));
   console.error(chalk.gray('Supported types: mermaid, mermaid-raw'));
-  process.exit(1);
+  throw new Error(`Unsupported diagram type: ${options.diagram}`);
 } else {

The `executeCommand` wrapper already has a try/catch that calls `displayError`, so thrown errors will be handled appropriately.

Also applies to: 134-136
packages/tm-core/src/modules/cluster/services/cluster-detection.service.ts (2)
29-31: Consider adding a cache size limit or TTL to prevent unbounded memory growth. The `cache` Map can grow indefinitely in long-running processes. Consider adding a maximum size limit or TTL-based eviction strategy.

💡 Example: LRU-style cache with size limit

 export class ClusterDetectionService {
   private logger = getLogger('ClusterDetectionService');
-  private cache: Map<string, ClusterDetectionResult> = new Map();
+  private cache: Map<string, ClusterDetectionResult> = new Map();
+  private static readonly MAX_CACHE_SIZE = 100;
+
+  private addToCache(key: string, result: ClusterDetectionResult): void {
+    if (this.cache.size >= ClusterDetectionService.MAX_CACHE_SIZE) {
+      const firstKey = this.cache.keys().next().value;
+      if (firstKey) this.cache.delete(firstKey);
+    }
+    this.cache.set(key, result);
+  }
231-249: Consider optimizing the dependent lookup for large graphs. The inner loop at line 231 iterates over the entire graph for each dequeued node, resulting in O(V×E) complexity. For large task graphs, consider building an adjacency list of dependents upfront for O(V+E) overall complexity.
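The suggested precomputation can be sketched as a one-time pass that inverts the dependency map (names are illustrative): built once in O(V+E), it lets the dequeue step look up dependents directly instead of rescanning the whole graph per node.

```typescript
// Sketch: invert a node -> dependencies map into node -> dependents
// so topological dequeuing is O(V+E) overall (illustrative names).
function buildDependents(graph: Map<string, string[]>): Map<string, string[]> {
  const dependents = new Map<string, string[]>();
  for (const [node, deps] of graph) {
    for (const dep of deps) {
      const list = dependents.get(dep) ?? [];
      list.push(node); // `node` depends on `dep`, so it is a dependent of `dep`
      dependents.set(dep, list);
    }
  }
  return dependents;
}
```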
packages/tm-core/src/modules/cluster/types.ts (1)
162-186: Consider adding `blockedTasks` to TagExecutionContext for consistency. `TagExecutionContext` and `ProjectExecutionContext` use inline status unions (`'pending' | 'in-progress' | 'done' | 'failed'`) which differ from `ClusterStatus`. Consider reusing `ClusterStatus` or creating a shared execution status type for consistency.

packages/tm-core/src/modules/cluster/services/parallel-executor.service.ts (1)
320-330: Abort signal not wired to executor, so stopping tasks is a no-op. As documented in the `TaskExecutionContext` comment, calling `stopTask` triggers `abort()` but the signal isn't passed to the executor, so tasks won't actually stop. Consider either wiring the signal or documenting this limitation in the method's JSDoc.

📝 Add JSDoc clarification

 /**
  * Stop a running task
+ * `@remarks` Currently signals abort intent but the executor must check the signal.
+ * If the executor ignores abort signals, the task will run to completion.
  */
 async stopTask(taskId: string): Promise<void> {

packages/tm-core/src/modules/cluster/services/tag-orchestrator.service.ts (1)
262-267: Dead code: the `progressTracker` null check is always false. The constructor always initializes `this.progressTracker`, so the condition `if (!this.progressTracker)` at line 262 is always false. However, this becomes relevant if the class is refactored to lazy-initialize the tracker.

packages/tm-core/src/modules/cluster/services/project-orchestrator.service.ts (2)
145-153: Semantic mismatch: emitting `'cluster:blocked'` for tag-level blocking. When a tag is blocked due to unsatisfied dependencies, the code emits a `'cluster:blocked'` event. Consider adding a `'tag:blocked'` event type for clarity, or document that cluster events are reused for tag-level states.
240-246: Progress percentage may report 100% before completion. The percentage calculation `((i + 1) / sortedTags.length) * 100` reports progress based on loop iteration, not actual completion. If the last tag fails, the percentage still shows 100% in the event emitted before the final result.

💡 Consider using completedTags count instead

- percentage: sortedTags.length > 0 ? ((i + 1) / sortedTags.length) * 100 : 0
+ percentage: sortedTags.length > 0 ? (completedTags.size / sortedTags.length) * 100 : 0
  // -R: interpret ANSI colors
  // -S: don't wrap lines (enable horizontal scroll)
  // -F: quit immediately if content fits on one screen
  const result = spawnSync('less', ['-RSF'], {
    input: content + '\n',
    stdio: ['pipe', 'inherit', 'inherit']
  });

  // If less failed or isn't available, print directly
  if (result.status !== 0 && result.status !== null) {
    console.log(content);
  }
🧩 Analysis chain: verification scripts executed against repository eyaltoledano/claude-task-master
Fix spawn failure handling to ensure fallback output when less is unavailable.
When less is unavailable, spawnSync sets result.error and result.status === null. The current condition only checks status, which evaluates to false when null, so the fallback never executes. Check result.error in addition to status values.
Additionally, this utility module uses console.log directly (lines 40, 54) instead of the central logging utility, and the visibleLength function lacks complete JSDoc documentation (@param and @return tags required for utility functions).
Spawn failure fix
- // If less failed or isn't available, print directly
- if (result.status !== 0 && result.status !== null) {
+ // If less failed or isn't available, print directly
+ if (result.error || result.status === null || result.status !== 0) {
console.log(content);
}🤖 Prompt for AI Agents
In `@apps/cli/src/utils/pager.ts` around lines 44 - 55, The fallback when calling
spawnSync('less', ...) currently only checks result.status and misses the case
where less is unavailable (result.error set and result.status === null); update
the condition around spawnSync usage (reference: spawnSync and result variables)
to treat result.error truthy OR (result.status !== 0 && result.status !== null)
as a failure and invoke the fallback. Replace direct console.log calls in this
module with the central logging utility used elsewhere (avoid using console.log
in the blocks that print content and errors). Finally, add complete JSDoc for
the visibleLength function including `@param` and `@return` tags describing inputs
and the returned visible character length to satisfy documentation requirements.
packages/tm-core/src/modules/cluster/services/cluster-sequencer.service.ts
packages/tm-core/src/modules/cluster/services/parallel-executor.service.ts
packages/tm-core/src/modules/cluster/services/tag-cluster.service.ts
packages/tm-core/src/modules/cluster/services/tag-orchestrator.service.ts
packages/tm-core/src/modules/workflow/services/cluster-pr-integration.ts
packages/tm-core/src/modules/cluster/services/parallel-executor.service.ts
Actionable comments posted: 8
🤖 Fix all issues with AI agents
In `@apps/mcp/src/tools/autopilot/create-pr.tool.ts`:
- Around line 173-198: The success payload is being passed directly as result to
handleApiResult, but handleApiResult serializes result.data only; change the
call that currently does handleApiResult({ result: response, ... }) to instead
pass the payload wrapped under data (e.g., handleApiResult({ result: { success:
true, data: response }, log, projectRoot })), leaving the constructed response
object (variables: response, finalClusterId, finalBranchName, baseBranch,
workflowContext) unchanged.
In `@packages/tm-core/src/modules/cluster/services/cluster-detection.service.ts`:
- Around line 353-356: getClusterTasks() currently builds taskMap from allTasks
and then looks up cluster.taskIds, which misses subtask IDs like "1.2" and
silently drops them; update the lookup to resolve dotted subtask IDs to their
parent Task: when mapping cluster.taskIds, try taskMap.get(taskId) first, then
if undefined and taskId contains a dot, derive parentId = taskId.split('.')[0]
and use taskMap.get(parentId); if still undefined, throw or log an explicit
error. Reference: getClusterTasks(), taskMap, cluster.taskIds, allTasks, Task.
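The parent-ID fallback described in this prompt can be sketched as a standalone helper. The `Task` shape and the helper name are illustrative, not taken from the codebase:

```typescript
interface Task {
  id: string;
  title: string;
}

// Resolve a cluster task ID to its Task, falling back to the parent
// task when the ID is a dotted subtask ID like "1.2".
function resolveClusterTask(
  taskId: string,
  taskMap: Map<string, Task>
): Task {
  const direct = taskMap.get(taskId);
  if (direct) return direct;
  if (taskId.includes('.')) {
    const parentId = taskId.split('.')[0];
    const parent = taskMap.get(parentId);
    if (parent) return parent;
  }
  // Neither the ID nor its parent exists: fail loudly instead of
  // silently dropping the task from the cluster.
  throw new Error(`Unknown task ID in cluster: ${taskId}`);
}
```

The explicit throw in the final branch is the key change: the current implementation silently drops unresolvable IDs.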
In `@packages/tm-core/src/modules/cluster/services/parallel-executor.service.ts`:
- Around line 272-304: The code emits an unsupported status value 'failed' which
breaks downstream trackers; update the two emitEvent calls in this method (the
one that uses finalResult and the catch block) to use an allowed TaskStatus
instead—e.g., replace status: 'failed' with status: 'cancelled' (or 'blocked' if
you prefer blocked semantics) and ensure any surrounding logic that relies on
finalResult.success still sets error and timestamp as before; search for
emitEvent references and adjust the status mapping where finalResult.success ===
false or when catching an exception so only allowed statuses (pending,
in-progress, done, deferred, cancelled, blocked) are emitted.
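The status mapping the prompt asks for can be captured in one small function; choosing 'cancelled' over 'blocked' for failures is the prompt's own suggestion, not a codebase decision:

```typescript
type TaskStatus =
  | 'pending' | 'in-progress' | 'done'
  | 'deferred' | 'cancelled' | 'blocked';

// Map an execution outcome to a status the downstream trackers accept.
// 'failed' is not part of TaskStatus, so failures are reported as
// 'cancelled' (swap to 'blocked' if blocked semantics are preferred).
function toEmittedStatus(success: boolean): TaskStatus {
  return success ? 'done' : 'cancelled';
}
```

Both emitEvent call sites (the finalResult branch and the catch block) would route through this mapping instead of hardcoding 'failed'.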
In
`@packages/tm-core/src/modules/cluster/services/progress-tracker.service.spec.ts`:
- Around line 472-476: The test uses an unnecessary dynamic await import('fs')
inside the async beforeEach, making the hook async; replace this with a
synchronous top-level import of fs and make beforeEach synchronous: import fs at
module top, then in beforeEach assign fsMock = fs.promises as unknown as typeof
fsMock (and remove the eslint-disable and async keyword), keeping the existing
vi.mock('fs') and vi.clearAllMocks() calls and preserving the fsMock variable
and beforeEach hook names.
In
`@packages/tm-core/src/modules/cluster/services/project-orchestrator.service.spec.ts`:
- Around line 100-120: Replace the artificial mock created by
createMockTagOrchestrator and instead instantiate a real TagOrchestratorService
for the ProjectOrchestratorService tests (use TagOrchestratorService constructor
or its factory) so the test exercises internal logic; keep only external I/O
mocked/stubbed (e.g., network/fs interactions used by TagOrchestratorService)
via fixtures or vi.mock for Supabase/APIs/filesystem; remove or repurpose
noopExecutor and any vi.fn() placeholders for internal methods like
addEventListener, removeEventListener, executeTag, isTagReady, stopExecution so
they call the real service behavior while controlling external dependencies.
In
`@packages/tm-core/src/modules/cluster/services/tag-orchestrator.service.spec.ts`:
- Around line 96-124: The test currently replaces internal services with mocks
(createMockDetector, createMockSequencer, createMockProgressTracker) which
violates the tm-core guideline to use real internal services; replace these
factory mocks with actual instances of ClusterDetectionService,
ClusterSequencerService, and ProgressTrackerService (or their concrete classes
used in production) and only mock external I/O those services depend on
(Supabase, external APIs, filesystem). Ensure you construct the real services
with test doubles for external connectors (e.g., pass a mocked Supabase client
or file adapter) so the internal logic in methods like detectClusters,
executeClusters, getProgress, updateClusterStatus, createCheckpoint,
loadCheckpoint runs in tests while preventing real network/filesystem access.
In `@packages/tm-core/src/modules/cluster/services/tag-orchestrator.service.ts`:
- Around line 264-267: The constructor always creates this.progressTracker so
the current if (!this.progressTracker) guard in executeCluster never runs,
leaving the tracker uninitialized when executeCluster is called directly; update
executeCluster to ensure the ProgressTrackerService is initialized with the
passed detection by either (a) removing the existence guard and calling
this.progressTracker.initialize(detection) when needed, or (b) add an explicit
initialized check/flag on ProgressTrackerService and call
this.progressTracker.initialize(detection) when not yet initialized; reference
ProgressTrackerService.initialize, the this.progressTracker instance, and the
executeCluster/executeTag call paths to determine where to perform the
initialization so checkpoints and progress behave correctly.
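Option (b) from this prompt can be sketched as an explicit initialized flag plus an idempotent guard at the call site; the class and method names mirror the prompt but the shapes are assumptions:

```typescript
interface ClusterDetection {
  totalClusters: number;
}

// Minimal sketch: the tracker records whether initialize() has run.
class ProgressTrackerService {
  private initialized = false;
  private detection?: ClusterDetection;

  initialize(detection: ClusterDetection): void {
    this.detection = detection;
    this.initialized = true;
  }

  isInitialized(): boolean {
    return this.initialized;
  }
}

// In executeCluster: initialize with the passed detection when needed,
// regardless of whether the constructor already created the tracker.
function ensureTrackerReady(
  tracker: ProgressTrackerService,
  detection: ClusterDetection
): void {
  if (!tracker.isInitialized()) {
    tracker.initialize(detection);
  }
}
```

This keeps the guard meaningful even though the constructor always constructs the tracker instance.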
In `@packages/tm-core/src/modules/git/services/github-pr.service.ts`:
- Around line 205-228: handleDryRun currently inserts mappings with placeholder
values (prUrl: '' and prNumber: 0) into the ClusterPRMapping store which makes
getClusterPRMapping/getAllClusterPRMappings consumers misinterpret dry-run
entries; update handleDryRun to mark dry-run entries explicitly by either (a)
omitting storage entirely, (b) setting prUrl and prNumber to undefined instead
of empty/zero, or (c) adding a top-level dryRun: boolean to ClusterPRMapping and
setting it to true—pick one approach and apply it consistently (update the
ClusterPRMapping type if adding dryRun) and ensure
getClusterPRMapping/getAllClusterPRMappings behavior remains clear for callers.
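Approach (c) from this prompt, combined with (b), can be sketched as follows; the `ClusterPRMapping` fields are assumptions based on the names in the comment:

```typescript
interface ClusterPRMapping {
  clusterId: string;
  prUrl?: string;    // undefined (not '') for dry-run entries
  prNumber?: number; // undefined (not 0) for dry-run entries
  dryRun: boolean;   // explicit marker so consumers need not infer
}

// Record a dry-run entry without placeholder values, so
// getClusterPRMapping consumers can distinguish it from a real PR.
function recordDryRun(
  store: Map<string, ClusterPRMapping>,
  clusterId: string
): ClusterPRMapping {
  const mapping: ClusterPRMapping = { clusterId, dryRun: true };
  store.set(clusterId, mapping);
  return mapping;
}
```

Callers then branch on `mapping.dryRun` rather than testing for `prNumber === 0`.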
🧹 Nitpick comments (5)
packages/tm-core/src/modules/git/services/github-pr.service.spec.ts (1)
37-51: Tests lack assertions on generated content. Multiple tests for PR title generation (lines 37-51, 196-243) and PR body generation (lines 53-96, 246-328) only verify result.success === true without asserting the actual generated title or body content. This makes the tests pass even if the generation logic is broken. Consider asserting on the generated content. For example, if createPR returned the generated title/body in dry-run mode, you could verify:
- Title contains expected task ID, tag, or cluster ID
- Body includes subtasks, test results when provided

Example improvement for title verification
If the PRCreationResult included the generated title/body in dry-run mode, tests could verify:

it('should generate title with task ID', async () => {
  const cluster: PRClusterInput = {
    clusterId: 'cluster-1',
    branchName: 'feature/test',
    taskId: 'TAS-123',
    tag: 'auth'
  };
  const result = await service.createPR({ cluster, dryRun: true });
  expect(result.success).toBe(true);
  // Add content verification:
  // expect(result.generatedTitle).toContain('TAS-123');
  // expect(result.generatedTitle).toContain('auth');
});

Alternatively, test the title/body generation methods directly if they were exposed or extracted.
packages/tm-core/src/modules/git/services/pr-body-formatter.ts (3)
299-307: Locale-dependent timestamp formatting may produce inconsistent PR bodies. toLocaleString() at line 302 uses the runtime's locale and timezone settings. This means the same commit could show different timestamps depending on where the code runs (CI vs local dev machine, different regions). Consider using a fixed format like ISO 8601 or a consistent UTC-based format for reproducibility.
Example fix using ISO format

- const timestamp = commit.timestamp
-   ? ` (${new Date(commit.timestamp).toLocaleString()})`
-   : '';
+ const timestamp = commit.timestamp
+   ? ` (${new Date(commit.timestamp).toISOString()})`
+   : '';

Or for a more readable UTC format:

const timestamp = commit.timestamp
  ? ` (${new Date(commit.timestamp).toUTCString()})`
  : '';

337-339: Same locale-dependent formatting issue. Line 338 also uses toLocaleString() for the start time. Apply the same consistent formatting as suggested for commit timestamps.

51-53: Unused activityLog option. activityLog?: ActivityEvent[] is defined in PRBodyFormatterOptions but never used in the format() method. If this is reserved for future use, consider adding a comment. Otherwise, remove it to keep the interface clean.

packages/tm-core/src/modules/cluster/services/cluster-detection.service.ts (1)
111-119: Centralize subtask ID formatting instead of inline string concatenation.
${taskId}.${subtask.id} and dependency normalization are task ID formatting logic baked into the service. Please route this through a shared formatter to keep ID rules consistent and avoid drift across modules. As per coding guidelines, "Do not duplicate task ID formatting logic across modules - centralize formatting utilities".
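A shared formatter of the kind this nitpick asks for could look like the following; the module and function names are hypothetical, chosen only to illustrate centralizing the ID rules:

```typescript
// Hypothetical shared utility (e.g. task-id.utils.ts) so every module
// builds and parses dotted subtask IDs the same way.

// Build a subtask ID from a parent task ID and a subtask ID.
function formatSubtaskId(
  parentId: string,
  subtaskId: string | number
): string {
  return `${parentId}.${subtaskId}`;
}

// Extract the parent task ID from a (possibly dotted) task ID.
function parseParentId(taskId: string): string {
  return taskId.split('.')[0];
}
```

ClusterDetectionService (and any other module touching dotted IDs) would import these instead of concatenating inline.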
Resolved review threads:
packages/tm-core/src/modules/cluster/services/cluster-detection.service.ts (outdated)
packages/tm-core/src/modules/cluster/services/parallel-executor.service.ts
packages/tm-core/src/modules/cluster/services/progress-tracker.service.spec.ts (outdated)
packages/tm-core/src/modules/cluster/services/project-orchestrator.service.spec.ts (outdated)
packages/tm-core/src/modules/cluster/services/tag-orchestrator.service.spec.ts (outdated)
packages/tm-core/src/modules/cluster/services/tag-orchestrator.service.ts (outdated)
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@apps/cli/src/commands/clusters.command.ts`:
- Around line 99-125: Replace the direct console.log calls in this block with
the project's central logging/display utility (the same pipeline used by
displayCommandHeader) so output is routed through the standard CLI logger;
specifically, wherever console.log(...) prints the "No tasks found." message and
the Cluster Detection header (the calls after detection and
displayCommandHeader), call the central logger/display function instead,
preserving chalk formatting and message text, and keep existing control flow
around ClusterDetectionService/detection, renderCircularDependencyError, and
renderJson unchanged.
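The central display utility this prompt refers to is internal to the project; a minimal sketch of the pattern, with hypothetical names, shows why routing output through one function helps (tests and alternative frontends can swap the sink):

```typescript
// Hypothetical central display helper: all user-facing CLI output goes
// through display(), so formatting and output sinks stay uniform.
type DisplayFn = (message: string) => void;

let sink: DisplayFn = (msg) => console.log(msg);

function display(message: string): void {
  sink(message);
}

// Tests (and alternative frontends) can swap the sink.
function setDisplaySink(fn: DisplayFn): void {
  sink = fn;
}
```

The clusters command would then call display(chalk.yellow('\nNo tasks found.')) instead of console.log directly, keeping the chalk formatting while honoring the guideline.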
In `@packages/tm-core/src/modules/cluster/services/progress-tracker.service.ts`:
- Around line 362-386: In loadCheckpoint, after reading and JSON.parse'ing the
file into what is typed as ExecutionCheckpoint, validate the parsed shape
(ensure checkpoint.clusterStatuses and checkpoint.taskStatuses are objects) and
coerce any timestamp strings back to Date instances before applying to
clusterProgress/taskProgress; wrap the parsing/validation in try/catch to
surface JSON/file errors, and only iterate over checkpoint.clusterStatuses and
checkpoint.taskStatuses when they pass validation, converting fields like
status.timestamp (or similarly named timestamp properties) from string -> new
Date(...) for both cluster and task statuses prior to calling
this.clusterProgress.set and this.taskProgress.set.
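The shape validation and Date coercion this prompt describes can be sketched as a small reviver; the status field names are assumptions mirroring the prompt's wording:

```typescript
// Hypothetical minimal status shapes; field names mirror the prompt.
interface RawStatus {
  status: string;
  timestamp: string | Date;
}

interface ParsedStatus {
  status: string;
  timestamp: Date;
}

// Validate that a parsed checkpoint section is a plain object and
// coerce string timestamps (produced by JSON.stringify) back to Date.
function reviveStatuses(section: unknown): Record<string, ParsedStatus> {
  if (typeof section !== 'object' || section === null || Array.isArray(section)) {
    throw new Error('Invalid checkpoint section: expected an object');
  }
  const result: Record<string, ParsedStatus> = {};
  for (const [id, raw] of Object.entries(section as Record<string, RawStatus>)) {
    const ts = raw.timestamp instanceof Date ? raw.timestamp : new Date(raw.timestamp);
    if (Number.isNaN(ts.getTime())) {
      throw new Error(`Invalid timestamp for ${id}`);
    }
    result[id] = { status: raw.status, timestamp: ts };
  }
  return result;
}
```

loadCheckpoint would run both checkpoint.clusterStatuses and checkpoint.taskStatuses through this before touching the progress maps.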
In `@packages/tm-core/src/modules/tasks/services/tag.service.ts`:
- Around line 371-376: getTagDependencies currently returns [] for unknown tags;
add the same existence check used in removeTagDependency so it throws a
NOT_FOUND when the tag doesn't exist. Specifically, inside async
getTagDependencies(tag: string) call this.storage.getTag(tag) (or reuse the
existence check logic from removeTagDependency) and if falsy throw the same
NotFound error used elsewhere, then return this.storage.getTagDependencies(tag);
ensure you reference getTagDependencies, removeTagDependency,
this.storage.getTag and this.storage.getTagDependencies so callers in
tags.command.ts can distinguish "no deps" vs "tag not found".
🧹 Nitpick comments (9)
packages/tm-core/src/modules/git/services/github-pr.service.ts (2)
385-402: Consider using execFileAsync for enableAutoMerge for consistency. While prNumber is derived from trusted PR URL parsing and not user input, using execFileAsync with an args array (like createPRViaGhCLI does) would maintain consistency and defensive coding practices across the service.
♻️ Suggested refactor

private async enableAutoMerge(
  prNumber: number
): Promise<{ enabled: boolean; error?: string }> {
  try {
-   await execAsync(`gh pr merge ${prNumber} --auto --squash`, {
+   await execFileAsync('gh', ['pr', 'merge', String(prNumber), '--auto', '--squash'], {
      cwd: this.projectRoot
    });
    logger.info(`Enabled auto-merge for PR #${prNumber}`);
    return { enabled: true };

266-279: Consider using execFileAsync for validateGhCLI as well. For consistency with createPRViaGhCLI, consider using execFileAsync('gh', ['--version'], ...) instead of execAsync('gh --version', ...). This would allow removing the exec import entirely.

packages/tm-core/src/modules/workflow/services/cluster-pr-integration.spec.ts (1)
345-346: Private property access in tests couples to implementation details. Lines 345-346 use integration['options'] to directly inspect internal state. While this works, it couples the test to implementation details. Consider testing through observable behavior instead (e.g., creating a PR and verifying the labels appear in the result).
That said, for verifying runtime option updates where behavior isn't otherwise observable without mocking external systems, this approach is pragmatic.
packages/tm-core/src/modules/cluster/services/project-orchestrator.service.ts (4)
72-78: Anonymous listener cannot be removed. The event listener wired at Lines 75-77 is anonymous, preventing removal if needed. Consider storing the listener reference if cleanup becomes necessary in the future.

218-240: Progress metrics show partial information during execution. totalClusters is computed from tagResults (only processed tags), while totalTasks uses tagData (all tags). This means early progress events will show incomplete cluster totals until tags are executed. This is acceptable given cluster counts aren't known until detection runs, but consider documenting this asymmetry in the interface JSDoc.

310-318: Variable shadowing: tagData reused. Line 313 declares const tagData = tagMap.get(tag), which shadows the parameter tagData from Line 292. While functionally correct due to scoping, this is confusing. Rename to tagEntry or foundTag.
♻️ Suggested fix

inDegree.forEach((degree, tag) => {
  if (degree === 0) {
-   const tagData = tagMap.get(tag);
-   if (tagData) {
-     queue.push(tagData);
+   const tagEntry = tagMap.get(tag);
+   if (tagEntry) {
+     queue.push(tagEntry);
    }
  }
});

409-427: Failing listeners are not auto-removed after threshold. After 3 failures, the listener continues to be called (and fail), escalating to error-level logging each time. Consider auto-removing persistently failing listeners to prevent log flooding.
♻️ Suggested improvement

this.eventListeners.forEach((listener) => {
  try {
    listener(event);
    this.listenerFailureCounts.delete(listener);
  } catch (error) {
    const count = (this.listenerFailureCounts.get(listener) || 0) + 1;
    this.listenerFailureCounts.set(listener, count);
    if (count >= 3) {
      this.logger.error('Event listener is repeatedly failing', {
        failureCount: count,
        error
      });
+     // Auto-remove persistently failing listener
+     this.eventListeners.delete(listener);
+     this.listenerFailureCounts.delete(listener);
    } else {
      this.logger.warn('Error in event listener', { error });
    }
  }
});

packages/tm-core/src/modules/cluster/services/tag-orchestrator.service.ts (2)
73-91: Constructor listener wiring is superseded by executeTag. The constructor wires listeners (Lines 83-90), but executeTag replaces progressTracker and re-wires at Lines 127-129. The constructor setup only serves initial state before executeTag runs. This is functional, but the dual-setup pattern adds complexity.

389-407: Same listener failure pattern as ProjectOrchestratorService. Consistently tracks listener failures but doesn't auto-remove after threshold. Same recommendation applies: consider auto-removing persistently failing listeners.
  console.log(chalk.yellow('\nNo tasks found.'));
  return;
}

const detector = new ClusterDetectionService();
const detection = detector.detectClusters(result.tasks);

if (detection.hasCircularDependencies) {
  this.renderCircularDependencyError(detection);
  throw new Error('Circular dependency detected');
}

if (options.json) {
  this.renderJson(detection);
  return;
}

displayCommandHeader(this.tmCore!, {
  tag: options.tag || 'master',
  storageType
});

console.log(
  chalk.bold(
    `\nCluster Detection — ${detection.totalTasks} tasks → ${detection.totalClusters} clusters\n`
  )
);
Route output through the central logging/display utility instead of console.log.
This file introduces new direct console.log calls for user-facing output; guidelines require using the central log/display utility instead. Please align the new command with the standard output pipeline used elsewhere in the CLI.
As per coding guidelines: "Do not add direct console.log calls outside the logging utility - use the central log function instead."
async loadCheckpoint(): Promise<ExecutionCheckpoint | null> {
  if (!this.checkpointPath) return null;

  try {
    const content = await fs.readFile(this.checkpointPath, 'utf-8');
    const checkpoint = JSON.parse(content) as ExecutionCheckpoint;

    // Restore progress state immutably
    Object.entries(checkpoint.clusterStatuses).forEach(([id, status]) => {
      const cluster = this.clusterProgress.get(id);
      if (cluster) {
        this.clusterProgress.set(id, {
          ...cluster,
          status
        });
      }
    });

    Object.entries(checkpoint.taskStatuses).forEach(([id, status]) => {
      const task = this.taskProgress.get(id);
      if (task) {
        this.taskProgress.set(id, { ...task, status });
      }
    });
Coerce checkpoint timestamps back to Date and validate parsed shape.
JSON.parse converts timestamp to a string; returning it as ExecutionCheckpoint can break consumers expecting Date. Also consider basic shape validation before applying status maps.
🛠️ Possible fix
const content = await fs.readFile(this.checkpointPath, 'utf-8');
-const checkpoint = JSON.parse(content) as ExecutionCheckpoint;
+const checkpoint = JSON.parse(content) as ExecutionCheckpoint;
+const parsedTimestamp =
+ typeof checkpoint.timestamp === 'string'
+ ? new Date(checkpoint.timestamp)
+ : checkpoint.timestamp;
+if (!(parsedTimestamp instanceof Date) || Number.isNaN(parsedTimestamp.getTime())) {
+ throw new Error('Invalid checkpoint timestamp');
+}
+checkpoint.timestamp = parsedTimestamp;

As per coding guidelines: "Include error handling for JSON file operations and validate JSON structure after reading."
/**
 * Get all dependencies for a tag
 */
async getTagDependencies(tag: string): Promise<string[]> {
  return this.storage.getTagDependencies(tag);
}
Add tag existence validation to getTagDependencies for consistency with addTagDependency/removeTagDependency.
getTagDependencies currently returns [] for unknown tags instead of throwing a NOT_FOUND error. This creates an inconsistency: addTagDependency and removeTagDependency validate tag existence and throw errors, but getTagDependencies silently succeeds. The caller in tags.command.ts (line 627) cannot distinguish between "tag has no dependencies" and "tag does not exist," and the internal circular dependency check (line 405) would silently treat missing tags as valid with no dependencies.
Add the same validation present in removeTagDependency (line 351-358) to ensure consistent error handling across all tag dependency operations.
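The guarded version can be sketched as follows; the storage interface and error class are simplified stand-ins for the real tm-core types:

```typescript
// Simplified stand-in for the project's NOT_FOUND error.
class TagNotFoundError extends Error {
  code = 'NOT_FOUND';
  constructor(tag: string) {
    super(`Tag not found: ${tag}`);
  }
}

// Simplified stand-in for the storage adapter surface used here.
interface TagStorage {
  getTag(tag: string): { name: string } | undefined;
  getTagDependencies(tag: string): string[];
}

// Guarded version: throws for unknown tags instead of returning [],
// matching the behavior of addTagDependency/removeTagDependency.
function getTagDependencies(storage: TagStorage, tag: string): string[] {
  if (!storage.getTag(tag)) {
    throw new TagNotFoundError(tag);
  }
  return storage.getTagDependencies(tag);
}
```

With this guard, the caller in tags.command.ts can distinguish "tag has no dependencies" (empty array) from "tag does not exist" (thrown NOT_FOUND).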
Force-pushed from b04f2ec to 924dfc9 (Compare)
Actionable comments posted: 4
🤖 Fix all issues with AI agents
In
`@packages/tm-core/src/modules/cluster/services/project-orchestrator.service.spec.ts`:
- Around line 1-7: Run the Biome formatter on the test file to fix formatting
errors (e.g., execute `biome format .`) or apply equivalent formatting changes
to project-orchestrator.service.spec.ts; ensure the file header/comment and
surrounding whitespace conform to Biome rules so the ProjectOrchestratorService
test file passes the format check.
In
`@packages/tm-core/src/modules/cluster/services/tag-orchestrator.service.spec.ts`:
- Around line 1-7: CI failed Biome formatting for the test file containing
TagOrchestratorService tests; run the formatter or apply the formatting changes
so the file complies with Biome rules. Specifically, run `biome format .` (or
format this test file) to fix spacing/indentation and header comment formatting
in the TagOrchestratorService test suite, then stage the updated file and re-run
the pipeline.
In `@packages/tm-core/src/modules/cluster/services/tag-orchestrator.service.ts`:
- Around line 83-91: The constructor currently registers an anonymous listener
on clusterSequencer which forwards events to this.progressTracker.handleEvent
and this.emitEvent, causing duplicate handling after executeTag adds another
listener; store that constructor listener in a named property (e.g.
this.constructorSequencerListener) instead of an anonymous function, and in
executeTag call
clusterSequencer.removeEventListener(this.constructorSequencerListener) before
registering the new currentSequencerListener (and optionally re-register or
update the stored reference if you need to restore it later); update references
to addEventListener/removeEventListener and the methods clusterSequencer,
progressTracker.handleEvent, currentSequencerListener, and emitEvent
accordingly.
In `@packages/tm-core/src/modules/git/services/github-pr.service.ts`:
- Around line 348-350: The PR creation builds args by joining labels into a
single comma-separated string which relies on pflag parsing; update the call
site in github-pr.service.ts (inside the function that constructs the gh CLI
args where labels are handled) to push one '--label' per label instead of a
single '--label' with labels.join(',') so the args array uses
labels.forEach(label => args.push('--label', label)) semantics; this aligns with
gh pr create documentation and handles labels with commas/spaces correctly.
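The args-array change described here is small enough to isolate in a helper; only the function name is an assumption:

```typescript
// Build gh CLI args with one --label flag per label, so labels
// containing commas or spaces survive intact instead of relying on
// comma-splitting of a single joined value.
function buildLabelArgs(labels: string[]): string[] {
  const args: string[] = [];
  labels.forEach((label) => args.push('--label', label));
  return args;
}
```

The call site in github-pr.service.ts would spread this into the existing args array instead of pushing '--label' with labels.join(',').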
🧹 Nitpick comments (4)
packages/tm-core/src/modules/cluster/services/cluster-detection.service.ts (1)
330-341: Cache eviction is FIFO, not LRU. The addToCache method evicts the oldest entry by iteration order (this.cache.keys().next().value). JavaScript Map preserves insertion order, so this is FIFO eviction. If true LRU (least recently used) is desired, the cache hit in detectClusters (line 50) should re-insert the entry to refresh its position. Current behavior is acceptable for most use cases but worth documenting.

packages/tm-core/src/modules/workflow/services/cluster-pr-integration.ts (1)
326-328: updateOptions won't update baseBranch on the existing prService. The updateOptions method updates this.options, but the GitHubPRService was instantiated in the constructor with the original baseBranch. If callers expect updateOptions({ baseBranch: 'develop' }) to affect subsequent PR creation, they'll be surprised. Consider either recreating the service or documenting this limitation.
Possible fix if dynamic baseBranch updates are needed

updateOptions(options: Partial<ClusterPRIntegrationOptions>): void {
  this.options = { ...this.options, ...options };
+ if (options.baseBranch !== undefined) {
+   this.prService = new GitHubPRService(
+     this.options.projectRoot,
+     options.baseBranch
+   );
+ }
}

packages/tm-core/src/modules/cluster/types.ts (1)
155-166: Note: ExecutionCheckpoint.timestamp will serialize to string. When checkpoints are persisted via JSON.stringify, the timestamp: Date will become a string. Consumers loading checkpoints should coerce it back to Date. Consider documenting this in the interface or using a string type with ISO format convention.

packages/tm-core/src/modules/cluster/services/project-orchestrator.service.ts (1)
310-318: Variable shadowing: tagData shadows the function parameter. The local variable tagData on Line 313 shadows the function parameter of the same name, which reduces readability and could cause confusion during maintenance.
♻️ Suggested rename to avoid shadowing

inDegree.forEach((degree, tag) => {
  if (degree === 0) {
-   const tagData = tagMap.get(tag);
-   if (tagData) {
-     queue.push(tagData);
+   const tagEntry = tagMap.get(tag);
+   if (tagEntry) {
+     queue.push(tagEntry);
    }
  }
});
/**
 * @fileoverview Tests for ProjectOrchestratorService
 *
 * Uses a real TagOrchestratorService instance (with real internal sub-services)
 * and spies only on `executeTag` to control I/O-boundary results.
 * `isTagReady` and event wiring use real implementations.
 */
Fix Biome formatting to pass CI.
The pipeline indicates Biome format-check failed for this file. Run biome format . to apply the required formatting fixes.
🧰 Tools
🪛 GitHub Actions: CI
[error] 1-1: Biome format-check failed. The formatter would reformat this file. Run 'biome format .' to apply fixes.
🤖 Prompt for AI Agents
In
`@packages/tm-core/src/modules/cluster/services/project-orchestrator.service.spec.ts`
around lines 1 - 7, Run the Biome formatter on the test file to fix formatting
errors (e.g., execute `biome format .`) or apply equivalent formatting changes
to project-orchestrator.service.spec.ts; ensure the file header/comment and
surrounding whitespace conform to Biome rules so the ProjectOrchestratorService
test file passes the format check.
packages/tm-core/src/modules/cluster/services/tag-orchestrator.service.spec.ts
this.clusterSequencer.addEventListener((event) => {
	this.progressTracker.handleEvent(event);
	this.emitEvent(event);
});

this.progressTracker.addEventListener((event) => {
	this.emitEvent(event);
});
}
Duplicate event handling: constructor's anonymous listener is never removed.
The constructor (Lines 83-86) adds an anonymous listener to clusterSequencer that forwards events to both this.progressTracker.handleEvent and this.emitEvent. In executeTag, a new currentSequencerListener is added (Lines 135-138) that also calls this.progressTracker.handleEvent.
Since this.progressTracker is resolved at runtime, both listeners will call handleEvent on the same (new) progressTracker instance, causing duplicate event processing after executeTag is called.
🐛 Proposed fix: Store constructor's listener for removal
export class TagOrchestratorService {
private logger = getLogger('TagOrchestratorService');
private clusterDetector: ClusterDetectionService;
private clusterSequencer: ClusterSequencerService;
private progressTracker: ProgressTrackerService;
private eventListeners: Set<ProgressEventListener> = new Set();
private listenerFailureCounts: Map<ProgressEventListener, number> = new Map();
private currentSequencerListener?: (event: ProgressEventData) => void;
+ private constructorSequencerListener?: (event: ProgressEventData) => void;
private currentContext?: TagExecutionContext;
constructor(
clusterDetector?: ClusterDetectionService,
clusterSequencer?: ClusterSequencerService,
progressTracker?: ProgressTrackerService
) {
this.clusterDetector = clusterDetector || new ClusterDetectionService();
this.clusterSequencer =
clusterSequencer || new ClusterSequencerService(this.clusterDetector);
this.progressTracker = progressTracker || new ProgressTrackerService();
- this.clusterSequencer.addEventListener((event) => {
- this.progressTracker.handleEvent(event);
- this.emitEvent(event);
- });
+ this.constructorSequencerListener = (event) => {
+ this.progressTracker.handleEvent(event);
+ this.emitEvent(event);
+ };
+ this.clusterSequencer.addEventListener(this.constructorSequencerListener);
this.progressTracker.addEventListener((event) => {
this.emitEvent(event);
});
}
Then in executeTag, remove the constructor listener before adding the new pattern:
// Re-wire the progress tracker listener for the new instance
this.progressTracker.addEventListener((event) => {
this.emitEvent(event);
});
+ // Remove constructor's sequencer listener to avoid duplicate handling
+ if (this.constructorSequencerListener) {
+ this.clusterSequencer.removeEventListener(this.constructorSequencerListener);
+ this.constructorSequencerListener = undefined;
+ }
+
if (this.currentSequencerListener) {
this.clusterSequencer.removeEventListener(this.currentSequencerListener);
}Also applies to: 131-138
🤖 Prompt for AI Agents
In `@packages/tm-core/src/modules/cluster/services/tag-orchestrator.service.ts`
around lines 83 - 91, The constructor currently registers an anonymous listener
on clusterSequencer which forwards events to this.progressTracker.handleEvent
and this.emitEvent, causing duplicate handling after executeTag adds another
listener; store that constructor listener in a named property (e.g.
this.constructorSequencerListener) instead of an anonymous function, and in
executeTag call
clusterSequencer.removeEventListener(this.constructorSequencerListener) before
registering the new currentSequencerListener (and optionally re-register or
update the stored reference if you need to restore it later); update references
to addEventListener/removeEventListener and the methods clusterSequencer,
progressTracker.handleEvent, currentSequencerListener, and emitEvent
accordingly.
totalTasks: tasks.length,
taskToCluster,
hasCircularDependencies: false
};
totalTasks undercounts when tasks have subtasks
Medium Severity
totalTasks is set to tasks.length (top-level tasks only), but buildGraph adds both parent tasks and subtask nodes to the dependency graph. The taskToCluster map and cluster taskIds include subtask IDs, so totalTasks doesn't match the actual number of tracked items. The display text (e.g., "1 tasks → 2 clusters") becomes misleading when subtasks with inter-dependencies exist, and ProgressTrackerService tracks more items in taskProgress than totalTasks reflects.
Additional Locations (1)
Force-pushed from 924dfc9 to 9ac6f87
const result: ClusterDetectionResult = {
	clusters,
	totalClusters: clusters.length,
	totalTasks: tasks.length,
totalTasks undercounts when tasks have subtasks
High Severity
detectClusters sets totalTasks to tasks.length (top-level tasks only), but buildGraph adds both tasks and their subtasks as graph nodes, and all end up in cluster taskIds and taskToCluster. When ProgressTrackerService initializes from this result, it creates taskProgress entries for all IDs (including subtask IDs like 1.1, 1.2), but uses the too-small totalTasks as the denominator for percentage calculation. This makes percentage exceed 100% and estimatedTimeRemaining go negative when subtasks complete.
Additional Locations (1)
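The fix implied by this finding is to count every graph node rather than only top-level tasks. A self-contained sketch (the `Task` shape and helper name are illustrative, not the project's actual types):

```typescript
// Simplified task shape for illustration only.
interface Task {
	id: string | number;
	subtasks?: { id: string | number }[];
}

// Count every node that ends up in the dependency graph: parents plus
// subtask nodes (dot-notation IDs like "1.1"), matching what buildGraph
// is described as producing.
function countTrackedItems(tasks: Task[]): number {
	let count = 0;
	for (const task of tasks) {
		count += 1; // parent task node
		count += task.subtasks?.length ?? 0; // subtask nodes
	}
	return count;
}

const tasks: Task[] = [
	{ id: 1, subtasks: [{ id: '1.1' }, { id: '1.2' }] },
	{ id: 2 }
];
// tasks.length is 2, but 4 items are actually tracked in the graph,
// so using tasks.length as the percentage denominator overshoots 100%.
const total = countTrackedItems(tasks);
```

Setting `totalTasks` from this count keeps the ProgressTrackerService denominator consistent with the `taskToCluster` entries.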
/**
 * Register the autopilot_create_pr tool with the MCP server
 */
export function registerAutopilotCreatePRTool(server: FastMCP) {
New MCP tool not registered in tool registry
Medium Severity
registerAutopilotCreatePRTool is defined and exported from the autopilot module but never imported or added to the toolRegistry map in mcp-server/src/tools/tool-registry.js. All other autopilot tools (abort, commit, complete, finalize, next, resume, start, status) are registered there, but this new one is missing. The autopilot_create_pr MCP tool is completely inaccessible to clients.
Additional Locations (1)
}
}
});
}
Event listener management code triplicated across services
Low Severity
ParallelExecutorService, ClusterSequencerService, and ProgressTrackerService each have identical copies of emitEvent, addEventListener, removeEventListener, the eventListeners set, and the listenerFailureCounts map with its failure-counting logic. This is ~40 lines of non-trivial logic repeated three times. A shared base class or mixin would eliminate the duplication and ensure consistent behavior when the listener contract changes.
Additional Locations (2)
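One way the shared base class could look, sketched under the assumption that the listener contract is `(event) => void` with per-listener failure counting and eviction after repeated throws. All names and the failure threshold are illustrative, not the project's actual API.

```typescript
type ProgressEventData = { type: string; [key: string]: unknown };
type ProgressEventListener = (event: ProgressEventData) => void;

const MAX_LISTENER_FAILURES = 3; // assumed threshold, for illustration

// Shared base so ParallelExecutorService, ClusterSequencerService and
// ProgressTrackerService no longer each carry a private copy of this logic.
abstract class ProgressEventEmitter {
	private eventListeners = new Set<ProgressEventListener>();
	private listenerFailureCounts = new Map<ProgressEventListener, number>();

	addEventListener(listener: ProgressEventListener): void {
		this.eventListeners.add(listener);
	}

	removeEventListener(listener: ProgressEventListener): void {
		this.eventListeners.delete(listener);
		this.listenerFailureCounts.delete(listener);
	}

	protected emitEvent(event: ProgressEventData): void {
		for (const listener of this.eventListeners) {
			try {
				listener(event);
				this.listenerFailureCounts.delete(listener); // reset on success
			} catch {
				const failures = (this.listenerFailureCounts.get(listener) ?? 0) + 1;
				this.listenerFailureCounts.set(listener, failures);
				// Evict listeners that keep throwing so one bad consumer
				// cannot poison every future emit.
				if (failures >= MAX_LISTENER_FAILURES) {
					this.removeEventListener(listener);
				}
			}
		}
	}
}

// Demo subclass showing that a throwing listener is evicted while a
// well-behaved listener keeps receiving events.
class DemoService extends ProgressEventEmitter {
	trigger(type: string): void {
		this.emitEvent({ type });
	}
}

const svc = new DemoService();
const seen: string[] = [];
svc.addEventListener((e) => seen.push(e.type));
svc.addEventListener(() => {
	throw new Error('bad listener');
});
svc.trigger('a');
svc.trigger('b');
svc.trigger('c'); // third failure evicts the bad listener
svc.trigger('d'); // good listener still receives events
```

Each service would then `extend ProgressEventEmitter` instead of redeclaring the set, the map, and the three methods, so a change to the listener contract lands in exactly one place.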
Actionable comments posted: 5
🤖 Fix all issues with AI agents
In `@apps/cli/src/ui/components/cluster-pipeline.component.ts`:
- Around line 85-149: Replace the direct console.log call in
displayClusterPipeline with the project's central CLI logging/display utility:
build the boxed string exactly as before (using boxen(...) and getBoxWidth(...))
and pass that string to the central logger function instead of console.log;
locate the console.log call inside displayClusterPipeline and change it to call
the shared log/display function used across the CLI (import it at top if not
already), preserving formatting, padding and title options so behavior is
identical.
In `@packages/tm-core/src/modules/cluster/services/cluster-sequencer.service.ts`:
- Around line 111-122: When a cluster is skipped because
clusterDetector.isClusterReady(...) returns false, update its status to
'blocked' before continuing and ensure blockedClusters is incremented for these
cases; specifically, in the loop handling detection.clusters (around the
logger.warn call) set cluster.status = 'blocked' when dependencies are
unsatisfied and then increment blockedClusters (instead of only incrementing
when cluster.status === 'blocked') so skipped clusters are transitioned to a
terminal blocked state and counted correctly.
In `@packages/tm-core/src/modules/git/services/pr-body-formatter.ts`:
- Around line 296-304: The commit-list formatting currently calls new
Date(commit.timestamp).toISOString() directly inside commits.forEach, which will
throw on invalid timestamps; update the timestamp handling in
pr-body-formatter.ts (inside the commits.forEach block) to validate
commit.timestamp before converting: check that commit.timestamp is present and
that new Date(commit.timestamp) produces a valid date (e.g.,
!isNaN(date.getTime())) and only then set timestamp = `
(${date.toISOString()})`, otherwise fall back to an empty string or a safe
placeholder; apply the same guard to the other similar block referenced (lines
around the second occurrence).
In
`@packages/tm-core/src/modules/workflow/services/cluster-pr-integration.spec.ts`:
- Around line 258-347: Tests are flaky because they call out to the gh CLI via
GitHubPRService/child_process; stub GitHubPRService.prototype.createPR at the
top of the spec so it returns deterministic results (e.g., a resolved Promise
with an object containing the dryRun flag and a success indicator) instead of
invoking execFile; ensure the stub inspects the args it receives (or reads
integration.options.dryRun) to return { dryRun: boolean, success: boolean, ... }
so integration.handleClusterCompletion, integration.setDryRun and
integration.updateOptions behavior can be asserted reliably.
In `@packages/tm-core/src/modules/workflow/services/cluster-pr-integration.ts`:
- Around line 136-179: The catch block currently treats any error (including
activity logging failures) as a fatal failure for the whole cluster integration;
wrap calls to activity logging so they cannot override a successful PR flow:
ensure this.logPRCreation and this.logPRError (and any activity-log calls after
a successful prResult) are invoked inside their own try/catch blocks that only
log the logging failure (e.g., logger.error) and do not rethrow, and ensure the
final returned success value is based on prResult.success / updateRunStateWithPR
rather than whether logging succeeded.
export function displayClusterPipeline(
	detection: ClusterDetectionResult,
	tasks: Task[],
	tag?: string
): void {
	const levels = groupClustersByLevel(detection.clusters);
	const sortedLevels = [...levels.entries()].sort((a, b) => a[0] - b[0]);
	const parallelTurnCount = sortedLevels.filter(
		([, clusters]) => clusters.length > 1
	).length;

	const taskMap = new Map(tasks.map((t) => [String(t.id), t]));
	const content: string[] = [];

	for (let i = 0; i < sortedLevels.length; i++) {
		const [level, clusters] = sortedLevels[i];

		// Turn label (fixed-width for alignment)
		const turnLabel = chalk.dim(`Turn ${String(level).padStart(2)}`);

		// Build cluster chips for this turn
		const chips = clusters.map((c) => formatClusterChip(c, taskMap));

		// Parallel indicator for turns with multiple clusters
		const parallelNote = clusters.length > 1 ? chalk.green(' ⇢ parallel') : '';

		// Join clusters with a visual separator
		const separator = chalk.gray(' │ ');
		content.push(` ${turnLabel} ${chips.join(separator)}${parallelNote}`);

		// Blank separator between turns for visual breathing room (no arrows)
		if (i < sortedLevels.length - 1) {
			content.push('');
		}
	}

	content.push('');

	// Summary line
	const stats = [
		chalk.white(`${detection.totalClusters} clusters`),
		chalk.white(`${sortedLevels.length} turns`),
		parallelTurnCount > 0
			? chalk.green(`${parallelTurnCount} parallel`)
			: chalk.gray('all sequential')
	];
	content.push(` ${stats.join(chalk.gray(' · '))}`);

	// Call to action
	const tagArg = tag ? ` --tag ${tag}` : '';
	content.push(
		` ${chalk.gray('→')} ${chalk.yellow(`tm clusters${tagArg}`)} ${chalk.gray('for full breakdown')}`
	);

	console.log(
		boxen(content.join('\n'), {
			padding: 1,
			margin: { top: 1, bottom: 0 },
			borderStyle: 'round',
			borderColor: '#00CED1',
			title: chalk.hex('#00CED1')('⚡ EXECUTION PIPELINE ⚡'),
			titleAlignment: 'center',
			width: getBoxWidth(0.96)
		})
	);
Route output through the CLI logging utility instead of console.log.
Direct console.log calls in CLI code should go through the central log/display utility to keep output handling consistent.
As per coding guidelines: "Do not add direct console.log calls outside the logging utility - use the central log function instead".
🤖 Prompt for AI Agents
In `@apps/cli/src/ui/components/cluster-pipeline.component.ts` around lines 85 -
149, Replace the direct console.log call in displayClusterPipeline with the
project's central CLI logging/display utility: build the boxed string exactly as
before (using boxen(...) and getBoxWidth(...)) and pass that string to the
central logger function instead of console.log; locate the console.log call
inside displayClusterPipeline and change it to call the shared log/display
function used across the CLI (import it at top if not already), preserving
formatting, padding and title options so behavior is identical.
for (const cluster of detection.clusters) {
	if (!this.clusterDetector.isClusterReady(cluster, detection)) {
		this.logger.warn('Cluster not ready, skipping', {
			clusterId: cluster.clusterId,
			status: cluster.status
		});

		if (cluster.status === 'blocked') {
			blockedClusters++;
		}
		continue;
	}
Mark skipped clusters as blocked when dependencies are unsatisfied.
Currently, a non-ready cluster is skipped without a status update, which can leave it in a non-terminal state and undercount blockedClusters.
🔧 Suggested fix
if (!this.clusterDetector.isClusterReady(cluster, detection)) {
this.logger.warn('Cluster not ready, skipping', {
clusterId: cluster.clusterId,
status: cluster.status
});
- if (cluster.status === 'blocked') {
- blockedClusters++;
- }
+ if (cluster.status !== 'blocked' && cluster.status !== 'failed') {
+ this.clusterDetector.updateClusterStatus(
+ detection,
+ cluster.clusterId,
+ 'blocked'
+ );
+ }
+ blockedClusters++;
continue;
}
🤖 Prompt for AI Agents
In `@packages/tm-core/src/modules/cluster/services/cluster-sequencer.service.ts`
around lines 111 - 122, When a cluster is skipped because
clusterDetector.isClusterReady(...) returns false, update its status to
'blocked' before continuing and ensure blockedClusters is incremented for these
cases; specifically, in the loop handling detection.clusters (around the
logger.warn call) set cluster.status = 'blocked' when dependencies are
unsatisfied and then increment blockedClusters (instead of only incrementing
when cluster.status === 'blocked') so skipped clusters are transitioned to a
terminal blocked state and counted correctly.
commits.forEach((commit) => {
	const shortSha = commit.sha.substring(0, 7);
	const timestamp = commit.timestamp
		? ` (${new Date(commit.timestamp).toISOString()})`
		: '';
	const author = commit.author ? ` by ${commit.author}` : '';

	parts.push(`- \`${shortSha}\` ${commit.message}${author}${timestamp}`);
});
Guard against invalid timestamps to avoid formatter crashes.
new Date(...).toISOString() throws on invalid input. A small validity check keeps formatting resilient when timestamps are missing or malformed.
🛠️ Suggested guard for invalid timestamps
- const timestamp = commit.timestamp
- ? ` (${new Date(commit.timestamp).toISOString()})`
- : '';
+ const commitDate = commit.timestamp ? new Date(commit.timestamp) : null;
+ const timestamp =
+ commitDate && !Number.isNaN(commitDate.getTime())
+ ? ` (${commitDate.toISOString()})`
+ : '';
@@
- } else if (runStartTime) {
- parts.push(`- **Started**: ${new Date(runStartTime).toISOString()}`);
+ } else if (runStartTime) {
+ const started = new Date(runStartTime);
+ if (!Number.isNaN(started.getTime())) {
+ parts.push(`- **Started**: ${started.toISOString()}`);
+ }
}
Also applies to: 331-336
🤖 Prompt for AI Agents
In `@packages/tm-core/src/modules/git/services/pr-body-formatter.ts` around lines
296 - 304, The commit-list formatting currently calls new
Date(commit.timestamp).toISOString() directly inside commits.forEach, which will
throw on invalid timestamps; update the timestamp handling in
pr-body-formatter.ts (inside the commits.forEach block) to validate
commit.timestamp before converting: check that commit.timestamp is present and
that new Date(commit.timestamp) produces a valid date (e.g.,
!isNaN(date.getTime())) and only then set timestamp = `
(${date.toISOString()})`, otherwise fall back to an empty string or a safe
placeholder; apply the same guard to the other similar block referenced (lines
around the second occurrence).
describe('options management', () => {
	it('should update dry-run mode at runtime', async () => {
		const workflowContext: WorkflowContext = {
			taskId: 'TAS-dry',
			subtasks: [
				{
					id: 'sub-1',
					title: 'Test subtask',
					status: 'completed',
					attempts: 1
				}
			],
			currentSubtaskIndex: 0,
			branchName: 'feature/dry-run-toggle',
			errors: [],
			metadata: {}
		};

		const event: ClusterCompletionEvent = {
			clusterId: 'cluster-dry-toggle',
			workflowContext,
			branchName: 'feature/dry-run-toggle',
			commits: ['abc123']
		};

		// Initially dry-run is true, so PR creation should succeed with dryRun flag
		const dryResult = await integration.handleClusterCompletion(event);
		expect(dryResult.success).toBe(true);
		expect(dryResult.prResult?.dryRun).toBe(true);

		// After toggling dry-run off, gh CLI is unavailable in tests so it should fail
		integration.setDryRun(false);
		const liveResult = await integration.handleClusterCompletion({
			...event,
			clusterId: 'cluster-dry-toggle-2'
		});
		expect(liveResult.success).toBe(false);
		expect(liveResult.prResult?.dryRun).toBe(false);

		// Toggle back to dry-run, should succeed again
		integration.setDryRun(true);
		const dryAgainResult = await integration.handleClusterCompletion({
			...event,
			clusterId: 'cluster-dry-toggle-3'
		});
		expect(dryAgainResult.success).toBe(true);
		expect(dryAgainResult.prResult?.dryRun).toBe(true);
	});

	it('should update options at runtime', async () => {
		// Verify dryRun option change is observable through behavior
		const workflowContext: WorkflowContext = {
			taskId: 'TAS-opts',
			subtasks: [],
			currentSubtaskIndex: 0,
			branchName: 'feature/opts',
			errors: [],
			metadata: {}
		};

		const event: ClusterCompletionEvent = {
			clusterId: 'cluster-opts',
			workflowContext,
			branchName: 'feature/opts',
			commits: []
		};

		// Update dryRun to false via updateOptions - should fail since gh CLI is unavailable
		integration.updateOptions({ dryRun: false });
		const liveResult = await integration.handleClusterCompletion(event);
		expect(liveResult.success).toBe(false);

		// Restore dry-run and verify it works again
		integration.updateOptions({ dryRun: true });
		const dryResult = await integration.handleClusterCompletion({
			...event,
			clusterId: 'cluster-opts-2'
		});
		expect(dryResult.success).toBe(true);
		expect(dryResult.prResult?.dryRun).toBe(true);

		// Bracket notation: autoMerge and labels only affect non-dry-run gh CLI
		// calls, so their effect cannot be observed without mocking the GitHub API.
		integration.updateOptions({
			autoMerge: true,
			labels: ['hotfix', 'urgent']
		});
		expect(integration['options'].autoMerge).toBe(true);
		expect(integration['options'].labels).toEqual(['hotfix', 'urgent']);
	});
🧩 Analysis chain
🏁 Script executed:
# Check if GitHubPRService exists and examine its createPR method
fd "github-pr.service.ts" packages/tm-core/src --type f
🏁 Script executed:
# Examine the GitHubPRService for external CLI/API usage
rg -n "createPR|exec|spawn|child_process|gh\s" packages/tm-core/src -A 3 -B 1 --type ts | head -100
🏁 Script executed:
# Check the test file to understand current implementation and mocking
head -100 packages/tm-core/src/modules/workflow/services/cluster-pr-integration.spec.ts
🏁 Script executed:
# Look for setup/beforeEach in the test file to see current mocking strategy
rg -n "beforeEach|vi\.mock|vi\.spyOn|mock" packages/tm-core/src/modules/workflow/services/cluster-pr-integration.spec.ts | head -20
🏁 Script executed:
# Check the GitHubPRService implementation to see if it actually calls gh CLI
cat -n packages/tm-core/src/modules/git/services/github-pr.service.ts | head -80
🏁 Script executed:
# Check how GitHubPRService is instantiated and used in cluster-pr-integration
rg -n "GitHubPRService|createPR" packages/tm-core/src/modules/workflow/services/cluster-pr-integration.ts -B 2 -A 3
🏁 Script executed:
# Look at the problematic test section specifically (lines 258-347)
sed -n '258,347p' packages/tm-core/src/modules/workflow/services/cluster-pr-integration.spec.ts
🏁 Script executed:
# Check if there are any vi.mock or vi.spyOn calls throughout the test file
cat packages/tm-core/src/modules/workflow/services/cluster-pr-integration.spec.ts | grep -n "vi\." | head -20
🏁 Script executed:
# Check the createPR method implementation in GitHubPRService
sed -n '1,250p' packages/tm-core/src/modules/git/services/github-pr.service.ts | tail -100
🏁 Script executed:
# Find where execFile is actually used in GitHubPRService
rg -n "execFileAsync|execFile" packages/tm-core/src/modules/git/services/github-pr.service.ts -A 5 -B 2
🏁 Script executed:
# Check if there are any vitest setup files or mocks configured
find packages/tm-core -name "*.config.*" -o -name "vitest.setup.*" | head -10
🏁 Script executed:
# Check if GitHubPRService is mocked at module level anywhere in tests
rg -n "GitHubPRService" packages/tm-core/src -l | xargs grep -l "vi.mock" 2>/dev/null || echo "No mocks found for GitHubPRService"
🏁 Script executed:
# Verify the test actually invokes the real service - check full test file for imports and setup
wc -l packages/tm-core/src/modules/workflow/services/cluster-pr-integration.spec.ts
🏁 Script executed:
# Check full test file around line 1-50 for any setup/mocks
head -60 packages/tm-core/src/modules/workflow/services/cluster-pr-integration.spec.ts
🏁 Script executed:
# Verify the suggested diff approach - check if GitHubPRService.createPR signature matches
sed -n '100,150p' packages/tm-core/src/modules/git/services/github-pr.service.ts
🏁 Script executed:
# Check the full createPR method signature
rg -n "createPR.*{" packages/tm-core/src/modules/git/services/github-pr.service.ts -A 2
🏁 Script executed:
# Check what PRCreationResult looks like to verify mock values in suggested diff
rg -n "interface PRCreationResult" packages/tm-core/src/modules/git/services/github-pr.service.ts -A 15
🏁 Script executed:
# Verify CreatePROptions interface to ensure mock return values are compatible
rg -n "interface CreatePROptions" packages/tm-core/src/modules/git/services/github-pr.service.ts -A 15
🏁 Script executed:
# Double-check test expectations to ensure suggested mock aligns with what tests expect
sed -n '280,295p' packages/tm-core/src/modules/workflow/services/cluster-pr-integration.spec.ts
Mock GitHubPRService.createPR to eliminate environment dependency.
These tests rely on gh CLI being unavailable when dryRun is false, which is environment-dependent and makes tests flaky. Per the coding guideline for @tm/core unit tests, external I/O (child_process.execFile calls to gh CLI at lines 264, 353, 382) must be mocked. Mock GitHubPRService.prototype.createPR to return deterministic results instead.
🛠️ Example: stub GitHubPRService.createPR for deterministic outcomes
-import { describe, it, expect, beforeEach } from 'vitest';
+import { describe, it, expect, beforeEach, vi } from 'vitest';
+import { GitHubPRService } from '../../git/services/github-pr.service.js';
@@
it('should update dry-run mode at runtime', async () => {
+ const createPrSpy = vi
+ .spyOn(GitHubPRService.prototype, 'createPR')
+ .mockResolvedValueOnce({ success: true, dryRun: true })
+ .mockResolvedValueOnce({
+ success: false,
+ dryRun: false,
+ error: 'gh unavailable'
+ })
+ .mockResolvedValueOnce({ success: true, dryRun: true });
// Initially dry-run is true, so PR creation should succeed with dryRun flag
const dryResult = await integration.handleClusterCompletion(event);
@@
// Toggle back to dry-run, should succeed again
integration.setDryRun(true);
const dryAgainResult = await integration.handleClusterCompletion({
...event,
clusterId: 'cluster-dry-toggle-3'
});
🤖 Prompt for AI Agents
In
`@packages/tm-core/src/modules/workflow/services/cluster-pr-integration.spec.ts`
around lines 258 - 347, Tests are flaky because they call out to the gh CLI via
GitHubPRService/child_process; stub GitHubPRService.prototype.createPR at the
top of the spec so it returns deterministic results (e.g., a resolved Promise
with an object containing the dryRun flag and a success indicator) instead of
invoking execFile; ensure the stub inspects the args it receives (or reads
integration.options.dryRun) to return { dryRun: boolean, success: boolean, ... }
so integration.handleClusterCompletion, integration.setDryRun and
integration.updateOptions behavior can be asserted reliably.
// Log activity
if (this.options.activityLogPath) {
	await this.logPRCreation(prResult, clusterId);
}

// Check if PR creation succeeded
if (!prResult.success) {
	return {
		success: false,
		prResult,
		error: prResult.error,
		clusterId
	};
}

// Update run state if successful
if (prResult.prUrl) {
	this.updateRunStateWithPR(workflowContext, clusterId, prResult);
}

logger.info(
	`Successfully handled cluster ${clusterId}, PR: ${prResult.prUrl || 'dry-run'}`
);

return {
	success: true,
	prResult,
	clusterId
};
} catch (error) {
	const errorMessage =
		error instanceof Error ? error.message : String(error);
	logger.error(`Failed to handle cluster ${clusterId}:`, error);

	// Log error activity
	if (this.options.activityLogPath) {
		try {
			await this.logPRError(clusterId, errorMessage);
		} catch (logError) {
			logger.error('Failed to log PR error activity', {
				logError,
				originalError: error
			});
		}
Avoid failing the whole operation when activity logging fails.
If the PR is created successfully but activity logging fails, the catch block marks the whole integration as failed. Consider isolating logging failures so they don't override PR success.
🛠️ Suggested guard for activity logging
- if (this.options.activityLogPath) {
- await this.logPRCreation(prResult, clusterId);
- }
+ if (this.options.activityLogPath) {
+ try {
+ await this.logPRCreation(prResult, clusterId);
+ } catch (logError) {
+ logger.warn('Failed to log PR creation activity', {
+ logError,
+ clusterId
+ });
+ }
+ }🤖 Prompt for AI Agents
In `@packages/tm-core/src/modules/workflow/services/cluster-pr-integration.ts`
around lines 136 - 179, The catch block currently treats any error (including
activity logging failures) as a fatal failure for the whole cluster integration;
wrap calls to activity logging so they cannot override a successful PR flow:
ensure this.logPRCreation and this.logPRError (and any activity-log calls after
a successful prResult) are invoked inside their own try/catch blocks that only
log the logging failure (e.g., logger.error) and do not rethrow, and ensure the
final returned success value is based on prResult.success / updateRunStateWithPR
rather than whether logging succeeded.
Force-pushed from 9ac6f87 to 79ca917
Cursor Bugbot has reviewed your changes and found 4 potential issues.
Bugbot Autofix is OFF. To automatically fix reported issues with Cloud Agents, enable Autofix in the Cursor dashboard.
if (detection.hasCircularDependencies) {
	this.renderCircularDependencyError(detection);
	throw new Error('Circular dependency detected');
}
Circular dependency renders error then re-displays via throw
Medium Severity
When circular dependencies are detected, renderCircularDependencyError prints detailed error info to stderr, then a throw new Error('Circular dependency detected') is raised. The catch in executeCommand passes this to displayError, which prints a redundant generic "Error: Circular dependency detected" message and calls process.exit(1). The user sees the error printed twice — once with useful detail, once as a bare message.
Additional Locations (1)
```typescript
const fromNode = edge.from.replace('-', '_');
const toNode = edge.to.replace('-', '_');
lines.push(`	${fromNode} -.-> ${toNode}`);
}
```
Skip-edge computation is dead code producing empty results
Low Severity
In buildMermaidSyntax, the spineReachable set (Steps 3–5) computes all O(n²) ordered pairs from the sorted clusters. Since all downstream edges in a DAG go from lower to higher topological order, every edge is already in spineReachable. The skipEdges array is therefore always empty, making Steps 3, 4, 5, and 7 dead code with unnecessary O(n²) time and space complexity.
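The reviewer's claim can be checked with toy data: after a topological sort, every DAG edge runs from a lower index to a higher one, so a spine set built from all ordered pairs already covers every edge. A minimal sketch of the observation (not the real `buildMermaidSyntax`):

```typescript
// Toy illustration of the review point. The spine set contains every ordered
// pair (i, j) with i < j over the sorted clusters; since all DAG edges go
// from a lower to a higher topological index, no edge ever survives the
// filter, so the skip-edge list is always empty.
type Edge = { from: number; to: number };

function computeSkipEdges(sortedCount: number, edges: Edge[]): Edge[] {
	const spineReachable = new Set<string>();
	for (let i = 0; i < sortedCount; i++) {
		for (let j = i + 1; j < sortedCount; j++) {
			spineReachable.add(`${i}->${j}`);
		}
	}
	// Any edge not covered by the spine would be a "skip" edge.
	return edges.filter((e) => !spineReachable.has(`${e.from}->${e.to}`));
}

// Topologically sorted DAG: every edge satisfies from < to.
const skip = computeSkipEdges(4, [
	{ from: 0, to: 1 },
	{ from: 0, to: 3 },
	{ from: 1, to: 2 }
]);
// skip is always empty here, matching the dead-code observation
```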
```typescript
	levels.set(cluster.level, group);
}
return levels;
}
```
Duplicated groupClustersByLevel in two files
Low Severity
groupClustersByLevel is implemented identically as a private method in ClustersCommand and as a standalone function in cluster-pipeline.component.ts. This duplication means a bug fix in one location could be missed in the other. Extracting to a shared utility would reduce maintenance burden.
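The extracted shared utility might look like the following sketch; the `Cluster` shape here is inferred from the snippet above and may differ from the real `@tm/core` type:

```typescript
// Sketch of the deduplicated helper. Both call sites (ClustersCommand and
// cluster-pipeline.component.ts) would import this single definition.
interface Cluster {
	id: string;
	level: number;
}

function groupClustersByLevel(clusters: Cluster[]): Map<number, Cluster[]> {
	const levels = new Map<number, Cluster[]>();
	for (const cluster of clusters) {
		const group = levels.get(cluster.level) ?? [];
		group.push(cluster);
		levels.set(cluster.level, group);
	}
	return levels;
}
```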
Additional Locations (1)
```typescript
downstream.error = `Blocked by upstream cluster: ${clusterId}`;
this.blockDownstreamClusters(result, downstreamId);
}
});
```
Recursive block propagation misattributes error to intermediate cluster
Medium Severity
blockDownstreamClusters recursively blocks downstream clusters, but on each recursive call it passes the intermediate blocked cluster's ID as clusterId. This means deeply nested downstream clusters get error messages like "Blocked by upstream cluster: cluster-1" when cluster-1 is itself blocked — the real root cause (e.g., cluster-0 that failed) is lost. The original clusterId that triggered the cascade needs to be propagated through the recursion.
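A sketch of the suggested fix, threading the original failing cluster's ID through the recursion. The surrounding types and the adjacency map are assumptions, since only a fragment of the real method is shown:

```typescript
// Hypothetical standalone version of blockDownstreamClusters that carries the
// root-cause cluster ID through the recursion instead of the intermediate one.
type ClusterResult = { error?: string };

function blockDownstreamClusters(
	results: Map<string, ClusterResult>,
	downstreamOf: Map<string, string[]>,
	clusterId: string,
	rootCauseId: string = clusterId // defaults to the first failing cluster
): void {
	for (const downstreamId of downstreamOf.get(clusterId) ?? []) {
		const downstream = results.get(downstreamId);
		if (!downstream || downstream.error) continue; // already blocked
		downstream.error = `Blocked by upstream cluster: ${rootCauseId}`;
		// Recurse with the ORIGINAL root cause, not the intermediate cluster.
		blockDownstreamClusters(results, downstreamOf, downstreamId, rootCauseId);
	}
}
```

With this change, a chain cluster-0 → cluster-1 → cluster-2 reports cluster-0 as the blocker at every depth.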


Summary
- `task-master clusters` command for viewing tag-level and task-level execution clusters with Mermaid diagram output
- `task-master list` showing per-cluster progress with lane-based visualization

Test plan
- `tm list` output with tagged tasks
- `tm clusters --tag <tag>` output

🤖 Generated with Claude Code
Note
Medium Risk
Touches core dependency analysis/execution primitives and expands tag metadata/storage interfaces, which could impact task ordering and persistence across storages. Most changes are additive and covered by extensive new unit tests, but correctness in edge cases (deps/subtasks/paging/less availability) is key.
Overview
Introduces a new cluster/execution-phase layer in `@tm/core` that detects dependency-based clusters (including subtasks), executes them with bounded parallelism, and tracks progress/events with optional checkpoint persistence.

Extends the CLI with `tm clusters` (tag-level and per-tag task-level views), including table/tree/JSON outputs and Mermaid (pretty/raw) diagrams with paging, and augments `tm list` to show an Execution Pipeline box summarizing per-cluster progress.

Adds inter-tag dependency support (`dependsOn` metadata) with new `tm tags add-dep`, `remove-dep`, and `deps` commands plus circular-dependency validation; also exports new core APIs and adds an MCP tool (`autopilot_create_pr`) to create GitHub PRs per cluster.

Written by Cursor Bugbot for commit 79ca917. This will update automatically on new commits. Configure here.
Summary by CodeRabbit
- `task-master clusters` command with tree, Mermaid (pretty/raw), and JSON outputs to detect and visualize execution clusters.
- `task-master list` now shows an Execution Pipeline box with per-cluster progress and parallel lane visualization.
- `task-master tags add-dep`, `remove-dep`, and `deps` (list).