
feat: AI-powered inter-tag cluster generation #1632

Merged: Crunchyman-ralph merged 5 commits into next from ralph/feat/clustergeneration on Feb 14, 2026

Conversation


Crunchyman-ralph (Collaborator) commented Feb 14, 2026

Summary

  • AI cluster generation (tm clusters generate): Analyzes tags semantically with AI, synthesizes inter-tag dependencies, and suggests a topological execution order with reasoning
  • Interactive cluster editor: TUI for reviewing, editing, and accepting/rejecting AI-suggested dependencies before persisting
  • Auto-prompt on empty clusters: When tm clusters detects no inter-tag dependencies, it offers to run AI generation inline
  • Caching layer: TagAnalysisCache avoids redundant AI calls for previously analyzed tags
  • Stable AI service loading: loadGenerateObjectService() in @tm/core replaces fragile relative import paths
  • Auto-mode safety: --auto flag now warns when replacing existing inter-tag dependencies

Architecture

Follows domain-driven design in @tm/core:

  • cluster/generation/ClusterGenerationService, semantic analyzer, dependency synthesizer, cache
  • ai/PromptBuilder, BridgedStructuredGenerator, loadGenerateObjectService (legacy bridge)
  • CLI is a thin presentation layer calling @tm/core facades

New files (32 changed, +2311 lines)

| Area | Key files |
| --- | --- |
| Core service | `cluster-generation.service.ts`, `tag-semantic-analyzer.ts`, `tag-dependency-synthesizer.ts` |
| Caching | `tag-analysis-cache.ts` |
| AI infra | `legacy-ai-loader.ts`, `PromptBuilder`, `BridgedStructuredGenerator` |
| CLI | `cluster-generate.command.ts`, `cluster-editor.component.ts`, `cluster-layout-renderer.ts` |
| Tests | `cluster-generation.service.spec.ts`, `tag-analysis-cache.spec.ts` |

Test plan

  • Run tm clusters generate with 2+ tags — verify AI analysis and interactive editor
  • Run tm clusters generate --auto with existing deps — verify yellow warning appears
  • Run tm clusters generate --json — verify JSON output
  • Run tm clusters generate --no-cache — verify fresh analysis
  • Run tm clusters with no deps — verify auto-prompt to generate
  • Run unit tests: npm run test -w @tm/core
  • Verify typecheck: npm run turbo:typecheck

🤖 Generated with Claude Code


Note

Medium Risk
Adds AI-driven dependency generation and new write-paths that can replace existing tag dependencies; correctness depends on AI output and rollback logic during persistence.

Overview
Adds AI-powered inter-tag cluster generation: new tm clusters generate analyzes tag/task content, synthesizes dependsOn suggestions, and can persist them either via --auto or an interactive in-terminal editor (with non-interactive --json output).

Enhances tm clusters so that when no inter-tag dependencies exist it can prompt (or auto-run via --auto) the same generation flow inline, then re-renders clusters after saving.

Introduces @tm/core infrastructure and domain code to support this: cluster/generation pipeline (ClusterGenerationService, semantic analyzer, dependency synthesizer) with an analysis cache (TagAnalysisCache persisted under .taskmaster/cache/cluster-analysis.json), plus a centralized loadGenerateObjectService() bridge for the legacy AI module and supporting prompt/structured-generation utilities and tests.

Updates repo metadata/docs: adds a changeset for a major release, documents module-organization rules in CLAUDE.md, and extends .taskmaster/tasks/tasks.json tag metadata with dependsOn relationships.

Written by Cursor Bugbot for commit 3acc113. This will update automatically on new commits. Configure here.

Summary by CodeRabbit

  • New Features

    • AI-powered "tm clusters generate" command with progress, --auto non‑interactive mode, --json output, and an in‑terminal interactive editor to inspect/reorder/accept suggestions
    • Inline AI generation integrated into cluster listing with one‑step persistence and refreshed rendering
    • Improved CLI cluster layout rendering with confidence badges and clearer reasoning
  • Bug Fixes

    • Faster repeat runs via analysis caching to skip re‑analyzed tags
  • Documentation

    • Added module organization guidelines and AI cluster generation docs

@changeset-bot

changeset-bot bot commented Feb 14, 2026

🦋 Changeset detected

Latest commit: 3acc113

The changes in this PR will be included in the next version bump.


@coderabbitai

coderabbitai bot commented Feb 14, 2026

Caution

Review failed

The head commit changed during the review from aa77bd5 to 3acc113.

📝 Walkthrough

Adds AI-powered inter-tag cluster generation: new CLI command, interactive editor/renderer, AI bridge and prompt builder, semantic analysis + dependency synthesizer, caching, core ClusterGenerationService, tests, and tm-core public exports.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| **Release & Docs**<br>`.changeset/ai-cluster-generation.md`, `CLAUDE.md` | Version bump and added module-organization docs (duplicate block in CLAUDE.md). |
| **CLI: New Command**<br>`apps/cli/src/commands/cluster-generate.command.ts` | New `tm clusters generate` command, `runClusterGeneration` workflow, cache helpers, persistence with rollback, and options `--auto`, `--json`, `--no-cache`, `--project`. |
| **CLI: Integration**<br>`apps/cli/src/commands/clusters.command.ts` | Registers `ClusterGenerateCommand`, adds `--auto` flag, inline AI generation flow, interactive editor integration, and re-render/persist cycles. |
| **CLI: UI Components**<br>`apps/cli/src/ui/components/cluster-editor.component.ts`, `apps/cli/src/ui/components/cluster-layout-renderer.ts` | Interactive in-terminal cluster editor (move/reset/accept/cancel) and pretty ASCII renderer with dependency confidence and reasoning. |
| **CLI: Types**<br>`apps/cli/src/types/ai-services-unified.d.ts` | Adds declaration for the legacy `generateObjectService` bridge used by CLI AI integrations. |
| **Core: AI infra**<br>`packages/tm-core/src/modules/ai/...`, `packages/tm-core/src/modules/ai/legacy-ai-loader.ts` | `PromptBuilder`, AI primitive types, structured-generation interfaces, `BridgedStructuredGenerator`, and a loader to lazily import the legacy object service; reorganized ai barrel exports. |
| **Core: Cluster generation service & tests**<br>`packages/tm-core/src/modules/cluster/generation/cluster-generation.service.ts`, `*.spec.ts` | New `ClusterGenerationService` (analyze → synthesize → cluster) with concurrency, progress callbacks, cache integration, and extensive unit tests. |
| **Core: Semantic analyzer**<br>`packages/tm-core/src/modules/cluster/generation/tag-semantic-analyzer.*` | `ITagSemanticAnalyzer` interface, `BridgedTagSemanticAnalyzer` implementation, prompt template, and `SemanticAnalysis` type. |
| **Core: Dependency synthesizer**<br>`packages/tm-core/src/modules/cluster/generation/tag-dependency-synthesizer.*` | `ITagDependencySynthesizer` interface, `BridgedTagDependencySynthesizer`, prompt template, and `DependencySuggestion` type. |
| **Core: Tag analysis cache & tests**<br>`packages/tm-core/src/modules/cluster/generation/tag-analysis-cache.ts`, `*.spec.ts` | `CacheStorage` abstraction, deterministic `computeHash`, get/set semantics, and tests for cache correctness and hashing. |
| **Core: Exports / Barrels**<br>`packages/tm-core/src/index.ts`, `packages/tm-core/src/modules/cluster/index.ts`, `packages/tm-core/src/modules/cluster/generation/index.ts`, `packages/tm-core/src/modules/ai/index.ts` | Expanded public API to expose AI primitives, structured generator bridge, analyzers, synthesizers, cache types, and the cluster-generation service. |
| **Tests: Coverage**<br>`packages/tm-core/src/modules/cluster/generation/*.spec.ts` | Extensive tests for generation service and cache covering concurrency, cache hits/misses, filtering, and progress reporting. |
| **Misc: Config / Cache**<br>`.superset/config.json`, `.taskmaster/cache/cluster-analysis.json` | Adds a small superset config and an AI analysis cache JSON with precomputed entries. |

Sequence Diagram

```mermaid
sequenceDiagram
    actor User
    participant CLI as "CLI Command"
    participant Core as "TmCore"
    participant Service as "ClusterGenerationService"
    participant Cache as "TagAnalysisCache"
    participant Analyzer as "TagSemanticAnalyzer"
    participant Synth as "TagDependencySynthesizer"
    participant Editor as "ClusterEditor"
    participant Storage as "Cache Storage"

    User->>CLI: tm clusters generate [--auto] [--json]
    CLI->>Core: initialize & fetch tags
    CLI->>Service: generate(tags, onProgress)

    Service->>Cache: load cached entries
    Cache->>Storage: load()
    Storage-->>Cache: cache file

    Service->>Analyzer: analyze uncached tag (concurrent batches)
    Analyzer-->>Service: SemanticAnalysis
    Service->>Cache: set(new analyses)
    Cache->>Storage: save(updated)

    Service->>Synth: synthesize(all analyses)
    Synth-->>Service: DependencySuggestion[]

    Service->>Service: compute clusters & reasoning
    Service-->>CLI: ClusterSuggestion

    alt --json
        CLI-->>User: JSON output
    else --auto
        CLI->>Core: persistClusterDependencies()
        Core-->>CLI: persisted
        CLI-->>User: rendered layout
    else interactive
        CLI->>Editor: editClusters(clusters, dependencies)
        Editor-->>User: display & accept/move/reset/cancel
        Editor-->>CLI: ClusterEditorResult
        alt accepted
            CLI->>Core: persistClusterDependencies()
            Core-->>CLI: persisted
            CLI-->>User: updated layout
        else cancelled
            CLI-->>User: aborted
        end
    end
```

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes


Suggested reviewers

  • eyaltoledano
🚥 Pre-merge checks: ✅ 3 passed | ❌ 1 failed

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 17.65%, below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |

✅ Passed checks (3 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped: CodeRabbit's high-level summary is enabled. |
| Title Check | ✅ Passed | The title 'feat: AI-powered inter-tag cluster generation' clearly summarizes the main change: adding AI-driven generation of inter-tag dependencies with clustering. |
| Merge Conflict Detection | ✅ Passed | No merge conflicts detected when merging into next. |


coderabbitai bot left a comment (Contributor)

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@apps/cli/src/commands/cluster-generate.command.ts`:
- Around line 104-136: The rollback in persistClusterDependencies currently
rethrows the original add error but ignores failures that occur while restoring
snapshot dependencies; update the catch block in persistClusterDependencies to
catch and surface restore failures: when catching the initial error from
tmCore.tasks.addTagDependency, attempt to restore each snapshot entry but wrap
each restore call (tmCore.tasks.addTagDependency and
tmCore.tasks.removeTagDependency where applicable) in its own try/catch, collect
any restore errors (including which tagName/dep failed) and log them via the
same logger or attach them to a new AggregateError, then throw a combined error
that includes both the original add error and any restore errors so callers know
the full failure set.
🧹 Nitpick comments (5)
.changeset/ai-cluster-generation.md (1)

1-6: Expand the changeset with user-facing detail + examples.
This is a major new CLI command; the current single-line summary undersells the feature and lacks usage examples. Consider adding a short “Highlights” list plus a few command examples.

✍️ Suggested changeset expansion

```diff
 Add `tm clusters generate` command that uses AI to analyze tags in parallel, suggest inter-tag dependencies, and present an interactive terminal UI to review and re-order the cluster layout before persisting. Supports `--auto` for non-interactive mode and `--json` for machine-readable output.
+
+Highlights:
+- AI analyzes tag semantics and proposes inter-tag dependencies with reasoning.
+- Interactive TUI to edit, reorder, accept/reject dependencies before saving.
+- Cache prevents redundant AI calls and speeds up repeat runs.
+
+Examples:
+- `tm clusters generate`
+- `tm clusters generate --auto`
+- `tm clusters generate --json`
```

Based on learnings: For major feature additions like new CLI commands, eyaltoledano prefers detailed changesets with comprehensive descriptions, usage examples, and feature explanations rather than minimal single-line summaries.

packages/tm-core/src/modules/ai/legacy-ai-loader.ts (1)

23-24: Well-implemented temporary bridge—add path resolution test.

The hardcoded 5-level relative path correctly resolves to the target file and is properly documented as a temporary measure. The implementation correctly uses import.meta.url and path.resolve() for ESM path resolution, includes a webpackIgnore comment, and validates that the imported function exists with comprehensive error handling.

Add a test verifying the path resolution at runtime to catch breakage early if the module structure changes.
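To make such a test concrete, a small sketch of the ESM-relative resolution technique the loader uses; the relative depth and the target filename below are hypothetical placeholders, not the real repository layout:

```typescript
import { fileURLToPath } from "node:url";
import path from "node:path";

// Resolve a module relative to the current file (via its import.meta.url)
// rather than process.cwd(), mirroring the loader's approach.
// The "../../../../.." depth and target path are illustrative only.
function resolveLegacyService(moduleUrl: string): string {
	const here = path.dirname(fileURLToPath(moduleUrl));
	return path.resolve(
		here,
		"../../../../..",
		"scripts/modules/ai-services-unified.js"
	);
}
```

A runtime test would call this with `import.meta.url` and assert the resolved file actually exists, so a moved module fails fast instead of at first AI call.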

packages/tm-core/src/modules/cluster/generation/tag-analysis-cache.ts (1)

40-54: Potential race condition in concurrent cache writes.

The set method uses a read-modify-write pattern (load → merge → save) without locking. If multiple analyses complete concurrently, one write could overwrite another's entry. In practice, the current batched analysis flow in ClusterGenerationService (with MAX_CONCURRENCY = 5) means multiple tags could finish simultaneously.

Consider either:

  1. Making cache writes sequential (queue them)
  2. Using atomic file operations with read-modify-write in storage
  3. Accepting this as a minor edge case (cache miss just triggers re-analysis)

Given that a cache miss only costs an extra AI call rather than data loss, this is acceptable for now but worth noting.
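Option 1 above can be sketched as follows; the `CacheFile`/`CacheEntry` shapes are illustrative (following the version/entries structure discussed in this review), not the real @tm/core types:

```typescript
// Serialize every read-modify-write through a promise chain so overlapping
// set() calls from one concurrent batch cannot clobber each other's entries.
type CacheEntry = { hash: string; analysis: unknown };
type CacheFile = { version: number; entries: Record<string, CacheEntry> };

class SerializedCacheWriter {
	private queue: Promise<void> = Promise.resolve();

	constructor(
		private load: () => Promise<CacheFile | null>,
		private save: (file: CacheFile) => Promise<void>
	) {}

	set(tagName: string, hash: string, analysis: unknown): Promise<void> {
		// Each write waits for the previous one, so load -> merge -> save is
		// atomic relative to other writes in this process.
		this.queue = this.queue.then(async () => {
			const file = (await this.load()) ?? { version: 1, entries: {} };
			file.entries[tagName] = { hash, analysis };
			await this.save(file);
		});
		return this.queue;
	}
}
```

Note this only serializes writes within a single process; cross-process safety would still need atomic file operations (option 2).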

packages/tm-core/src/modules/cluster/generation/cluster-generation.service.ts (1)

69-135: LGTM with minor optimization opportunity.

The batched analysis flow with MAX_CONCURRENCY = 5 is well-designed. Cache hits are correctly identified upfront, and progress reporting provides good UX.

One minor optimization: the cache lookup loop (lines 78-91) is sequential. For many tags, parallel lookups could be faster:

```typescript
const lookupResults = await Promise.all(
	tags.map(async (tag) => {
		const hash = TagAnalysisCache.computeHash(tag);
		const hit = this.cache ? await this.cache.get(tag.name, hash) : null;
		return { tag, hash, hit };
	})
);
```

This is optional since cache I/O is typically fast.

apps/cli/src/commands/cluster-generate.command.ts (1)

138-155: Cache file parsing lacks validation.

JSON.parse(raw) as CacheFile (line 145) trusts that the file content matches the expected schema. A corrupted or manually-edited cache file could cause runtime errors downstream.

Consider adding basic validation or using a try-catch with a fallback to null:

🛡️ Suggested defensive parsing

```diff
 		load: async (): Promise<CacheFile | null> => {
 			try {
 				const raw = await fs.readFile(cachePath, 'utf-8');
-				return JSON.parse(raw) as CacheFile;
+				const parsed = JSON.parse(raw);
+				// Basic shape validation
+				if (parsed?.version === 1 && typeof parsed?.entries === 'object') {
+					return parsed as CacheFile;
+				}
+				return null;
 			} catch {
 				return null;
 			}
 		},
```

Comment on lines 104 to 136

```typescript
export async function persistClusterDependencies(
	tmCore: TmCore,
	allTagNames: readonly string[],
	dependencies: readonly DependencySuggestion[]
): Promise<void> {
	// Snapshot existing dependencies before removal so we can restore on failure
	const snapshot = new Map<string, readonly string[]>();
	for (const tagName of allTagNames) {
		const existingDeps = await tmCore.tasks.getTagDependencies(tagName);
		snapshot.set(tagName, existingDeps);
	}

	// Remove all existing deps, then add new ones
	for (const tagName of allTagNames) {
		for (const dep of snapshot.get(tagName) ?? []) {
			await tmCore.tasks.removeTagDependency(tagName, dep);
		}
	}

	try {
		for (const dep of dependencies) {
			await tmCore.tasks.addTagDependency(dep.from, dep.to);
		}
	} catch (error) {
		// Restore original dependencies from snapshot
		for (const [tagName, deps] of snapshot) {
			for (const dep of deps) {
				await tmCore.tasks.addTagDependency(tagName, dep);
			}
		}
		throw error;
	}
}
```

⚠️ Potential issue | 🟡 Minor

Restore failure in rollback could leave inconsistent state.

The rollback logic (lines 129-133) attempts to restore dependencies if adding new ones fails, but if the restore itself fails, the error is swallowed and the original add error is re-thrown. This could leave the system in an inconsistent state (some deps removed, new ones partially added, restore failed).

Consider logging restore failures or wrapping the entire operation differently:

🛡️ Suggested improvement

```diff
 	} catch (error) {
 		// Restore original dependencies from snapshot
+		const restoreErrors: Error[] = [];
 		for (const [tagName, deps] of snapshot) {
 			for (const dep of deps) {
-				await tmCore.tasks.addTagDependency(tagName, dep);
+				try {
+					await tmCore.tasks.addTagDependency(tagName, dep);
+				} catch (restoreError) {
+					restoreErrors.push(restoreError as Error);
+				}
 			}
 		}
+		if (restoreErrors.length > 0) {
+			console.error(chalk.red(`Warning: Failed to restore ${restoreErrors.length} dependencies during rollback`));
+		}
 		throw error;
 	}
```

coderabbitai bot left a comment (Contributor)

Actionable comments posted: 3

🤖 Fix all issues with AI agents
In `@apps/cli/src/commands/cluster-generate.command.ts`:
- Around line 118-137: The rollback currently only restores the original
snapshot and doesn't remove any partially-added new dependencies, so if some
tmCore.tasks.addTagDependency calls succeed before a failure you end up with
mixed dependencies; modify the add loop to track successful additions (e.g.,
push {from, to} into a succeededAdds array when
tmCore.tasks.addTagDependency(dep.from, dep.to) resolves), and in the catch
block first remove all succeededAdds via tmCore.tasks.removeTagDependency(from,
to) before restoring the original snapshot, and ensure any errors during restore
are propagated (don't swallow them) by rethrowing the original error or
aggregating errors as needed; reference snapshot, dependencies, allTagNames,
tmCore.tasks.addTagDependency, and tmCore.tasks.removeTagDependency when
applying the fix.

In `@apps/cli/src/ui/components/cluster-editor.component.ts`:
- Around line 100-146: The loop uses direct console calls (console.clear,
console.log) which must be replaced with the project's central logging utility;
update the block that renders renderTagClusterLayout(state.clusters,
state.dependencies, reasoning) and subsequent blank line to call the central
log/info method instead, replace console.clear() with the logging utility's
terminal-clear or equivalent method, and change the warning print inside the
'move' branch (console.log(chalk.yellow(...))) to the logging utility's
warning/error method; keep existing behavior and strings, and reference the same
symbols: renderTagClusterLayout, state, reasoning, and the inquirer.prompt flow.

In
`@packages/tm-core/src/modules/ai/structured-generation/structured-generator.ts`:
- Around line 42-67: In generate
(packages/tm-core/src/modules/ai/structured-generation/structured-generator.ts)
you currently cast result.mainResult to T without validating it; add a guard
after the await this.generateObjectService(...) call to check that result and
result.mainResult are present and non-undefined, and if not throw a clear, early
error (include context such as objectName/commandName/modelId/providerName) so
the failure is explicit instead of returning undefined; ensure the validated
value is then used for data: (not cast blindly) and preserve the existing
telemetry and fallback model/provider behavior.
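The guard described above can be sketched as a small helper; `GenerateObjectResult` and the context fields are illustrative names for this example, not the real @tm/core signatures:

```typescript
// Fail fast with context when the AI service returns no mainResult, instead
// of blindly casting undefined to T. Shapes here are illustrative only.
interface GenerateObjectResult {
	mainResult?: unknown;
}

function requireMainResult<T>(
	result: GenerateObjectResult | undefined,
	context: { objectName: string; commandName: string }
): T {
	if (!result || result.mainResult === undefined) {
		throw new Error(
			`AI service returned no result for '${context.objectName}' ` +
				`(command: ${context.commandName})`
		);
	}
	return result.mainResult as T;
}
```

The explicit error surfaces the failing object and command immediately, rather than letting `undefined` propagate into downstream clustering logic.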
🧹 Nitpick comments (3)
packages/tm-core/src/modules/cluster/generation/tag-dependency-synthesizer.ts (1)

41-55: Skip AI calls when there’s nothing to analyze.

If analyses is empty, we can return immediately and avoid an unnecessary AI call.

♻️ Proposed refactor

```diff
 	async synthesize(
 		analyses: ReadonlyArray<{ label: string; analysis: SemanticAnalysis }>,
 		options: AIPrimitiveOptions
 	): Promise<AIPrimitiveResult<readonly DependencySuggestion[]>> {
-		const startTime = Date.now();
+		if (analyses.length === 0) {
+			return {
+				data: [],
+				usage: {
+					inputTokens: 0,
+					outputTokens: 0,
+					model: 'unknown',
+					provider: 'unknown',
+					duration: 0
+				}
+			};
+		}
+
+		const startTime = Date.now();
```
packages/tm-core/src/modules/cluster/generation/cluster-generation.service.ts (1)

125-128: Cache write is fire-and-forget; failures are silently ignored.

If this.cache.set() throws, the error propagates and fails the entire batch. Consider wrapping the cache write in a try-catch to make caching best-effort, preventing cache failures from breaking analysis:

🛡️ Optional defensive improvement

```diff
 					if (this.cache) {
 						const hash = hashByTag.get(tag.name)!;
-						await this.cache.set(tag.name, hash, analysis);
+						try {
+							await this.cache.set(tag.name, hash, analysis);
+						} catch {
+							// Cache write failure is non-fatal; analysis proceeds
+						}
 					}
```
apps/cli/src/commands/cluster-generate.command.ts (1)

148-156: Consider using readJSON utility for JSON file operations.

Per coding guidelines, JSON file operations should use readJSON/writeJSON utilities with proper error handling and validation. The current implementation has no schema validation for the parsed cache file.

🛡️ Suggested improvement for robustness

```diff
 		load: async (): Promise<CacheFile | null> => {
 			try {
 				const raw = await fs.readFile(cachePath, 'utf-8');
-				return JSON.parse(raw) as CacheFile;
+				const parsed = JSON.parse(raw);
+				// Basic structural validation
+				if (parsed?.version !== 1 || typeof parsed?.entries !== 'object') {
+					return null;
+				}
+				return parsed as CacheFile;
 			} catch {
 				return null;
 			}
 		},
```

As per coding guidelines: "Include error handling for JSON file operations and validate JSON structure after reading."

Keep both ClusterExecutionDomain (from next) and ClusterGenerationService
(from feature branch). Restore legacy-ai-loader.ts and its export lost
during merge. Register both start and generate subcommands.
…ependencies

Extract reusable `runClusterGeneration()` and `persistClusterDependencies()`
helpers from ClusterGenerateCommand so both `tm clusters generate` and the
new inline prompt in `tm clusters` can share the same generation logic.

When `tm clusters` detects zero inter-tag dependencies in TTY mode, it now
prompts "Generate cluster ordering with AI?" instead of just printing a hint.
Non-interactive modes (--json, --tree, --diagram, piped) bypass the prompt.
Adds --auto flag to skip the prompt and auto-generate.
Move generateObjectService loading from fragile relative import path
into @tm/core via loadGenerateObjectService(). Add warning log when
--auto mode replaces existing inter-tag dependencies.
@Crunchyman-ralph Crunchyman-ralph force-pushed the ralph/feat/clustergeneration branch 4 times, most recently from 7d8dbcd to 8f05526 Compare February 14, 2026 20:40
coderabbitai bot left a comment (Contributor)

Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
apps/cli/src/commands/clusters.command.ts (1)

206-317: ⚠️ Potential issue | 🟡 Minor

Route new console output through the logging utility.
The added inline‑generation flow introduces multiple console.log calls; please switch these to the central logger so silent mode and log routing are consistent.

As per coding guidelines, "Do not add direct console.log calls outside the logging utility - use the central log function instead."

🤖 Fix all issues with AI agents
In `@apps/cli/src/commands/clusters.command.ts`:
- Around line 194-223: The --auto flag is currently only honoured inside the
isInteractive branch, so CI/non‑TTY runs never trigger generation; modify the
logic around options.auto and isInteractive in the block that checks
allIndependent so that options.auto can trigger runInlineGenerate even when
process.stdin.isTTY is false. Specifically, when detection.clusters indicates
all independent and tagsResult.tags.length >= 2, allow either options.auto OR
(isInteractive && await this.promptForGeneration()) to decide generation; call
this.runInlineGenerate(options, tagsResult.tags) when that combined condition is
true (keep the existing console messages for the non‑generation path).
🧹 Nitpick comments (2)
apps/cli/src/ui/components/cluster-editor.component.ts (1)

54-77: Cartesian product may generate excessive dependencies.

The recalculateDependencies function creates a dependency from every tag in level N to every tag in level N-1. For levels with many tags, this could produce an unexpectedly large number of dependencies (e.g., 5 tags × 5 tags = 25 dependencies per level transition).

Consider whether this is the intended behavior or if a simpler "level depends on previous level" relationship would suffice.
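The growth described above can be made concrete with a small sketch; `crossLevelDeps` is a hypothetical stand-in for the cartesian step inside `recalculateDependencies`, not the actual implementation:

```typescript
// Every tag in level N gains a dependency on every tag in level N-1, so two
// adjacent levels of 5 tags each already produce 5 x 5 = 25 dependencies.
function crossLevelDeps(
	levels: readonly string[][]
): Array<{ from: string; to: string }> {
	const deps: Array<{ from: string; to: string }> = [];
	for (let i = 1; i < levels.length; i++) {
		for (const from of levels[i]) {
			for (const to of levels[i - 1]) {
				deps.push({ from, to });
			}
		}
	}
	return deps;
}
```

A "level depends on previous level" model would instead record one edge per level transition and keep the persisted dependency count linear in the number of tags.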

packages/tm-core/src/modules/cluster/generation/cluster-generation.service.ts (1)

100-135: Consider error handling for individual tag analysis failures.

If this.analyzer.analyze() throws for one tag in a batch, Promise.all will reject the entire batch, losing partial progress. Depending on desired behavior, you may want to use Promise.allSettled and handle individual failures gracefully, or at minimum wrap with try/catch to provide better error context.

💡 Optional: Use allSettled for partial failure tolerance

```diff
-			const batchResults = await Promise.all(
-				batch.map(async (tag, batchIndex) => {
+			const batchSettled = await Promise.allSettled(
+				batch.map(async (tag, batchIndex) => {
 					// ... existing logic ...
 				})
 			);
-
-			results.push(...batchResults);
+
+			for (const result of batchSettled) {
+				if (result.status === 'fulfilled') {
+					results.push(result.value);
+				} else {
+					// Log or handle the failure for this tag
+					throw new Error(`Analysis failed: ${result.reason}`);
+				}
+			}
```

@Crunchyman-ralph Crunchyman-ralph force-pushed the ralph/feat/clustergeneration branch from 8f05526 to aa77bd5 Compare February 14, 2026 20:49
Track successful dependency additions during persist so rollback
removes partial adds before restoring the snapshot. Add validation
guard in BridgedStructuredGenerator to throw descriptive errors
when AI service returns null/undefined instead of failing silently.
@Crunchyman-ralph Crunchyman-ralph force-pushed the ralph/feat/clustergeneration branch from aa77bd5 to 3acc113 Compare February 14, 2026 20:51
@Crunchyman-ralph Crunchyman-ralph merged commit 3d84023 into next Feb 14, 2026
9 checks passed
cursor bot left a comment

Cursor Bugbot has reviewed your changes and found 3 potential issues.


```json
{
	"setup": [],
	"teardown": []
}
```

Unreferenced .superset/config.json committed to repository

Low Severity

The .superset/config.json file is added in this commit but is not referenced anywhere in the codebase — no imports, no code reads from this path, and no documentation mentions it. This appears to be an unrelated config file that was accidentally staged and committed alongside the cluster generation feature.



```typescript
if (shouldGenerate) {
	await this.runInlineGenerate(options, tagsResult.tags);
	return;
```

Inline generation ignores --json flag when --auto is set

Medium Severity

In renderTagClusters, the --json early return at line 181 only fires when there ARE existing dependencies. When all tags are independent, options.json is checked at line 181 but renderTagJson returns the empty detection. The shouldGenerate check at line 208 evaluates options.auto first, bypassing the isInteractive guard that excludes --json. Inside runInlineGenerate, the --auto path always renders visual output, unlike ClusterGenerateCommand which correctly checks --json before --auto.


```typescript
if (this.cache) {
	const hash = hashByTag.get(tag.name)!;
	await this.cache.set(tag.name, hash, analysis);
}
```

Concurrent cache writes cause silent entry loss

Medium Severity

Within each batch of up to MAX_CONCURRENCY (5) concurrent AI calls, every completed analysis writes to the cache via cache.set(). The set method performs a non-atomic read-modify-write: it loads the full file, merges one entry, and saves the full file. When multiple set calls from the same batch overlap, later writes overwrite earlier ones, silently discarding cached entries. This defeats the purpose of the caching layer — subsequent runs re-analyze tags that were already successfully analyzed, wasting AI API calls and time.

