
ralph/feat.add.workflow#1197

Closed
Crunchyman-ralph wants to merge 4 commits into next from ralph/feat.add.workflow

Conversation

@Crunchyman-ralph
Collaborator

@Crunchyman-ralph Crunchyman-ralph commented Sep 11, 2025

What type of PR is this?

  • 🐛 Bug fix
  • ✨ Feature
  • 🔌 Integration
  • 📝 Docs
  • 🧹 Refactor
  • Other:

Description

Related Issues

How to Test This

# Example commands or steps

Expected result:

Contributor Checklist

  • Created changeset: npm run changeset
  • Tests pass: npm test
  • Format check passes: npm run format-check (or npm run format to fix)
  • Addressed CodeRabbit comments (if any)
  • Linked related issues (if any)
  • Manually tested the changes

Changelog Entry


For Maintainers

  • PR title follows conventional commits
  • Target branch correct
  • Labels added
  • Milestone assigned (if applicable)

Summary by CodeRabbit

  • New Features

    • New @tm/cli with commands: tasks list, auth (login/logout/status/refresh), and workflow (start/list/status/stop).
    • OAuth-based authentication with token refresh and status.
    • Workflow engine integration for running tasks in isolated worktrees.
    • Core library (@tm/core) with config management and file/API storage.
  • Improvements

    • Dist-based executables and streamlined startup (no sync FS reads).
    • Consistent CLI UI with colored output and multiple formats (text/json/compact).
  • Documentation

    • Migration roadmap, CLI commander pattern, tm-core architecture, and package READMEs.
  • Chores

    • Monorepo expansion, build scripts, TS/tsup configs, and VS Code formatter settings.

@changeset-bot

changeset-bot bot commented Sep 11, 2025

⚠️ No Changeset found

Latest commit: d4826e0

Merging this PR will not cause a version bump for any packages. If these changes should not result in a new version, you're good to go. If these changes should result in a version bump, you need to add a changeset.

Click here to learn what changesets are, and how to add one.

Click here if you're a maintainer who wants to add a changeset to this PR
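For reference, a changeset is just a small markdown file under `.changeset/` that names the affected package and the bump type; a minimal one for this PR could look like the following (the package name and bump type here are assumptions, not taken from the repo):

```md
---
"task-master-ai": minor
---

Add @tm/cli, @tm/core, and workflow-engine packages with new CLI workflow commands.
```

Running `npm run changeset` generates a file of this shape interactively.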

@coderabbitai
Contributor

coderabbitai bot commented Sep 11, 2025

Walkthrough

Introduces a monorepo architecture with new @tm/core and @tm/workflow-engine packages and a new @tm/cli app. Adds core domain/types, config/auth/storage subsystems, TaskMasterCore facade, workflow orchestration, and multiple CLI commands (list, auth, workflow). Refactors scripts to consume new packages, updates packaging/build (tsup), and adds docs/reports.

Changes

Cohort / File(s) / Summary

  • Monorepo setup & packaging (package.json, .vscode/settings.json, packages/*/tsconfig.json, packages/*/tsup.config.ts, packages/*/vitest.config.ts, apps/cli/*): Adds workspace entries, dist-based bins, build/dev scripts, uses tsup; configures TypeScript/formatters; creates @tm/cli package with build/test/lint setup and entrypoints.
  • Documentation & planning artifacts (.taskmaster/docs/MIGRATION-ROADMAP.md, docs/CLI-COMMANDER-PATTERN.md, .taskmaster/reports/*, packages/tm-core/{README.md,CHANGELOG.md,POC-STATUS.md}, packages/workflow-engine/README.md): Adds migration roadmap, CLI pattern doc, complexity reports, and package READMEs/CHANGELOG; no runtime code.
  • Project state & tasks metadata (.taskmaster/state.json, .taskmaster/tasks/tasks.json): Updates timestamp; adds tm-core phase-1 tasks block with nested tasks/subtasks.
  • CLI commands & utilities (apps/cli/src/index.ts, apps/cli/src/commands/*, apps/cli/src/commands/workflow/*, apps/cli/src/utils/ui.ts): Implements Commander-based commands: list tasks, auth (login/logout/status/refresh), workflow (start/list/status/stop); central registrar; UI helpers for tables, banners, colors.
  • Core: public API & facade (packages/tm-core/src/index.ts, packages/tm-core/src/task-master-core.ts, packages/tm-core/src/constants/index.ts, packages/tm-core/src/entities/task.entity.ts): Adds TaskMasterCore facade, consolidated exports, task domain entity, and constants (statuses, priorities, formats).
  • Core: types & interfaces (packages/tm-core/src/types/*, packages/tm-core/src/interfaces/*): Introduces task models, filters, guards; configuration contracts and defaults; AI provider interfaces/base class; storage contracts/base class.
  • Core: config subsystem (packages/tm-core/src/config/*, packages/tm-core/src/config/services/*, tests *.spec.ts): Adds ConfigManager and services (loader, merger, persistence, runtime state, env provider) with unit/integration tests.
  • Core: auth subsystem (packages/tm-core/src/auth/*, tests auth-*.test.ts): Adds AuthManager singleton, CredentialStore (file-backed), OAuthService (local callback), types/config, Supabase client wrapper; tests for manager/store.
  • Core: storage subsystem (packages/tm-core/src/storage/*, packages/tm-core/src/storage/file-storage/*): Implements FileStorage (single tasks.json with format handler), ApiStorage (REST with retries/timeouts), StorageFactory (auto/file/api selection), utilities (atomic ops, paths).
  • Core: services & providers (packages/tm-core/src/services/*, packages/tm-core/src/providers/*, tests tests/unit/*.test.ts, tests/integration/list-tasks.test.ts): Adds TaskService (business logic for lists/stats/filters), AI base provider and test mock; smoke and base-provider tests; list-tasks integration test.
  • Core: logger, errors, parser & utils (packages/tm-core/src/logger/*, packages/tm-core/src/errors/*, packages/tm-core/src/parser/index.ts, packages/tm-core/src/utils/*, packages/tm-core/src/clients/*): Adds structured error system, logger with env-driven levels, placeholder parser, ID generators, Supabase client, and public barrels.
  • Workflow engine package (packages/workflow-engine/src/**/*, packages/workflow-engine/package.json): Adds TaskExecutionManager orchestrating WorktreeManager, ProcessSandbox, WorkflowStateManager; types, errors, barrels; package config/build.
  • Core–workflow bridge (packages/tm-core/src/workflow/*): Exposes WorkflowService that wraps the workflow engine for core consumers.
  • MCP server adjustments (mcp-server/src/index.js, mcp-server/src/tools/utils.js): Switches package.json reads to JSON module imports; removes sync FS reads for version info.
  • Scripts refactor & assets (scripts/modules/commands.js, scripts/modules/config-manager.js, scripts/modules/prompt-manager.js, scripts/modules/update-config-tokens.js, scripts/init.js, src/utils/asset-resolver.js, src/utils/rule-transformer.js, tests/integration/profiles/*): Moves command registration to @tm/cli; replaces FS JSON reads with JSON imports; reworks prompt loading/validation to in-memory; introduces asset resolver; adapts rule transformer and tests to new assets/dist layout.
  • Extension formatting-only (apps/extension/package.json): Reformats arrays to single-line; no behavior changes.
  • tm-core package scaffolding (packages/tm-core/.gitignore, packages/tm-core/tsup.config.ts): Adds ignore rules; build config with env injection of TM_PUBLIC_* variables.
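The scaffolding row above mentions build-time injection of TM_PUBLIC_* environment variables via tsup. A hedged sketch of what such a config might look like, using tsup's `define` option (the entry point and variable-collection logic here are illustrative, not the actual file):

```ts
import { defineConfig } from 'tsup';

// Collect TM_PUBLIC_* variables from the build environment and inline them
// into the bundle via esbuild's `define` mechanism, so the published dist
// carries the values without reading process.env at runtime.
const publicEnv = Object.fromEntries(
	Object.entries(process.env)
		.filter(([key]) => key.startsWith('TM_PUBLIC_'))
		.map(([key, value]) => [`process.env.${key}`, JSON.stringify(value ?? '')])
);

export default defineConfig({
	entry: ['src/index.ts'],
	format: ['esm'],
	dts: true,
	define: publicEnv
});
```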

Sequence Diagram(s)

sequenceDiagram
  autonumber
  actor User
  participant CLI as @tm/cli (ListTasksCommand)
  participant Core as TaskMasterCore
  participant Service as TaskService
  participant Factory as StorageFactory
  participant Storage as FileStorage/ApiStorage

  User->>CLI: tm list [--status --tag ...]
  CLI->>Core: getTaskList(options)
  Core->>Service: getTaskList(options)
  Service->>Factory: create(config, projectPath)
  Factory-->>Storage: IStorage instance
  Service->>Storage: loadTasks(tag)
  Storage-->>Service: Task[]
  Service-->>Core: {tasks,total,filtered,tag,storageType}
  Core-->>CLI: ListTasksResult
  CLI->>User: Render table/json
sequenceDiagram
  autonumber
  actor User
  participant CLI as @tm/cli (AuthCommand)
  participant Auth as AuthManager
  participant OAuth as OAuthService
  participant Supa as SupabaseAuthClient
  participant Store as CredentialStore

  User->>CLI: tm auth login
  CLI->>Auth: authenticateWithOAuth({openBrowser})
  Auth->>OAuth: authenticate(options)
  OAuth->>OAuth: start local callback server
  OAuth-->>CLI: onAuthUrl(url)
  CLI->>User: Opens browser to URL
  Supa-->>OAuth: Callback with token
  OAuth->>Supa: getUser(token)
  OAuth->>Store: saveCredentials({token,refreshToken,expiresAt,...})
  OAuth-->>Auth: AuthCredentials
  Auth-->>CLI: AuthCredentials
  CLI->>User: Success + status
sequenceDiagram
  autonumber
  actor User
  participant CLI as @tm/cli (WorkflowStartCommand)
  participant Core as TaskMasterCore
  participant Wf as WorkflowService
  participant Eng as TaskExecutionManager
  participant WT as WorktreeManager
  participant PS as ProcessSandbox
  participant WS as WorkflowStateManager

  User->>CLI: tm workflow start <taskId>
  CLI->>Core: workflow.start(taskId, opts)
  Core->>Wf: start(taskId, opts)
  Wf->>Eng: startTaskExecution(Task, opts)
  Eng->>WT: createWorktree(taskId, branch)
  WT-->>Eng: WorktreeInfo
  Eng->>WS: registerWorkflow(ctx)
  Eng->>PS: startProcess(workflowId, taskId, prompt, opts)
  PS-->>Eng: pid
  Eng-->>Wf: workflowId
  Wf-->>Core: workflowId
  Core-->>CLI: workflowId
  CLI->>User: Show run info

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120–180 minutes

Possibly related PRs

Pre-merge checks (3 passed)

✅ Passed checks (3 passed)
  • Description Check: ✅ Passed. Check skipped because CodeRabbit’s high-level summary is enabled.
  • Title Check: ✅ Passed. The title "ralph/feat.add.workflow" is related to the changeset (the PR adds workflow-related packages and CLI workflow commands) but is a branch-style identifier rather than a concise, human-readable single-sentence summary; it therefore only partially meets the guideline for a clear PR title.
  • Docstring Coverage: ✅ Passed. Docstring coverage is 100.00%, which is sufficient; the required threshold is 80.00%.

Tip

👮 Agentic pre-merge checks are now available in preview!

Pro plan users can now enable pre-merge checks in their settings to enforce checklists before merging PRs.

  • Built-in checks – Quickly apply ready-made checks to enforce title conventions, require pull request descriptions that follow templates, validate linked issues for compliance, and more.
  • Custom agentic checks – Define your own rules using CodeRabbit’s advanced agentic capabilities to enforce organization-specific policies and workflows. For example, you can instruct CodeRabbit’s agent to verify that API documentation is updated whenever API schema files are modified in a PR. Note: up to 5 custom checks are currently allowed during the preview period. Pricing for this feature will be announced in a few weeks.

Please see the documentation for more information.

Example:

reviews:
  pre_merge_checks:
    custom_checks:
      - name: "Undocumented Breaking Changes"
        mode: "warning"
        instructions: |
          Pass/fail criteria: All breaking changes to public APIs, CLI flags, environment variables, configuration keys, database schemas, or HTTP/GraphQL endpoints must be documented in the "Breaking Change" section of the PR description and in CHANGELOG.md. Exclude purely internal or private changes (e.g., code not exported from package entry points or explicitly marked as internal).

Please share your feedback with us on this Discord post.


Comment @coderabbitai help to get the list of available commands and usage tips.

@Crunchyman-ralph Crunchyman-ralph changed the base branch from main to next September 11, 2025 17:51
@Crunchyman-ralph
Collaborator Author

duplicate of #1198

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 281

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (21)
scripts/init.js (1)

620-637: Inconsistent asset loading for .gitignore; use resolver to avoid path breakage.

This still relies on __dirname/../assets/gitignore. In bundled/npm installs this path may not exist; use readAsset('gitignore') for consistency.

-  try {
-    const gitignoreTemplatePath = path.join(
-      __dirname,
-      '..',
-      'assets',
-      'gitignore'
-    );
-    const templateContent = fs.readFileSync(gitignoreTemplatePath, 'utf8');
+  try {
+    const templateContent = readAsset('gitignore', 'utf8');
     manageGitignoreFile(
       path.join(targetDir, GITIGNORE_FILE),
       templateContent,
       storeTasksInGit,
       log
     );
src/utils/asset-resolver.js (1)

1-93: Add tests to satisfy utils coverage policy.

Per guidelines, utils need ≥90% coverage. Add unit tests for: candidate resolution order, find-up fallback, traversal guard, and Buffer vs utf8 reads.

I can add a Jest/Vitest suite hitting these branches and a fixture assets tree; want me to open a follow-up PR?

scripts/modules/prompt-manager.js (4)

82-123: Ensure loadPrompt is made async if implementing lazy loading.

If you implement the suggested lazy loading approach, this method needs to become async to handle dynamic imports.

If implementing lazy loading:

-loadPrompt(promptId, variables = {}, variantKey = null) {
+async loadPrompt(promptId, variables = {}, variantKey = null) {
	try {
		// ... cache check ...
		
		// Load template
-		const template = this.loadTemplate(promptId);
+		const template = await this.loadTemplate(promptId);

273-286: ⚠️ Security concern: Using new Function for condition evaluation is risky.

The use of new Function to evaluate conditions poses security risks as it could execute arbitrary code if the condition strings are not properly sanitized. This is especially concerning if conditions could come from user input or external sources.

Replace the unsafe evaluation with a safer approach:

evaluateCondition(condition, variables) {
	try {
-		// Create a safe evaluation context
-		const context = { ...variables };
-		
-		// Simple condition evaluation (can be enhanced)
-		// For now, supports basic comparisons
-		const func = new Function(...Object.keys(context), `return ${condition}`);
-		return func(...Object.values(context));
+		// Parse and evaluate conditions safely
+		// Support common patterns like: varName === "value", varName > 5, etc.
+		const comparisonRegex = /^(\w+(?:\.\w+)*)\s*(===?|!==?|>=?|<=?)\s*(.+)$/;
+		const match = condition.trim().match(comparisonRegex);
+		
+		if (match) {
+			const [, path, operator, rawValue] = match;
+			const leftValue = this.getNestedValue(variables, path);
+			
+			// Parse the right value (handle strings, numbers, booleans)
+			let rightValue = rawValue.trim();
+			if (rightValue.startsWith('"') && rightValue.endsWith('"')) {
+				rightValue = rightValue.slice(1, -1);
+			} else if (rightValue === 'true') {
+				rightValue = true;
+			} else if (rightValue === 'false') {
+				rightValue = false;
+			} else if (!isNaN(rightValue)) {
+				rightValue = parseFloat(rightValue);
+			}
+			
+			// Evaluate based on operator
+			switch (operator) {
+				case '===':
+				case '==':
+					return leftValue === rightValue;
+				case '!==':
+				case '!=':
+					return leftValue !== rightValue;
+				case '>':
+					return leftValue > rightValue;
+				case '>=':
+					return leftValue >= rightValue;
+				case '<':
+					return leftValue < rightValue;
+				case '<=':
+					return leftValue <= rightValue;
+				default:
+					return false;
+			}
+		}
+		
+		// Handle simple boolean checks
+		const value = this.getNestedValue(variables, condition);
+		return !!value;
	} catch (error) {
		log('warn', `Failed to evaluate condition: ${condition}`);
		return false;
	}
}
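As a self-contained sketch of the safer evaluator suggested above (the `getNestedValue` helper and exact parsing rules are illustrative, simplified from the review suggestion rather than the project's actual code):

```javascript
// Resolve a dotted path like "task.status" against a variables object.
function getNestedValue(obj, path) {
	return path.split('.').reduce((o, k) => (o == null ? undefined : o[k]), obj);
}

// Evaluate a restricted comparison expression without new Function/eval.
function evaluateCondition(condition, variables) {
	const comparisonRegex = /^(\w+(?:\.\w+)*)\s*(===?|!==?|>=?|<=?)\s*(.+)$/;
	const match = condition.trim().match(comparisonRegex);
	if (match) {
		const [, path, operator, rawValue] = match;
		const left = getNestedValue(variables, path);
		// Coerce the right-hand side: quoted string, boolean, or number.
		let right = rawValue.trim();
		if (right.startsWith('"') && right.endsWith('"')) right = right.slice(1, -1);
		else if (right === 'true') right = true;
		else if (right === 'false') right = false;
		else if (!isNaN(right)) right = parseFloat(right);
		switch (operator) {
			case '===':
			case '==':
				return left === right;
			case '!==':
			case '!=':
				return left !== right;
			case '>':
				return left > right;
			case '>=':
				return left >= right;
			case '<':
				return left < right;
			case '<=':
				return left <= right;
			default:
				return false;
		}
	}
	// Bare identifier: fall back to a truthiness check.
	return !!getNestedValue(variables, condition);
}

console.log(evaluateCondition('count > 5', { count: 10 })); // true
console.log(evaluateCondition('task.status === "done"', { task: { status: 'done' } })); // true
console.log(evaluateCondition('missing', {})); // false
```

The trade-off of this approach is that it only supports single comparisons and truthiness checks, so any templates using compound conditions (`&&`, `||`) would need extending the grammar.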

409-434: Consider making validateAllPrompts async for lazy loading compatibility.

If implementing lazy loading, this method needs to handle async template loading.

-validateAllPrompts() {
+async validateAllPrompts() {
	const results = { total: 0, errors: [], valid: [] };
	
-	// Iterate through all imported prompts
-	for (const [promptId, template] of this.prompts.entries()) {
+	// Validate all available prompts (load them if using lazy loading)
+	const promptIds = this.promptPaths ? Object.keys(this.promptPaths) : Array.from(this.prompts.keys());
+	
+	for (const promptId of promptIds) {
		results.total++;
		
		try {
+			// Load template if using lazy loading
+			const template = this.promptPaths 
+				? await this.loadTemplate(promptId)
+				: this.prompts.get(promptId);
+			
			// Validate the template
			if (this.validatePrompt && this.validatePrompt !== true) {

529-534: Consider async initialization for the singleton if implementing lazy loading.

If adopting the lazy loading approach, the singleton getter might need to handle async initialization.

-export function getPromptManager() {
+export async function getPromptManager() {
	if (!promptManager) {
		promptManager = new PromptManager();
+		// Optionally preload critical prompts
+		// await promptManager.preloadCriticalPrompts();
	}
	return promptManager;
}
scripts/modules/config-manager.js (6)

686-706: Normalize provider name and unify “no API key” handling in one place

Avoid case-sensitivity pitfalls and the split special-case for claude-code. Also reuse a single early-return path.

Apply:

 function isApiKeySet(providerName, session = null, projectRoot = null) {
-  // Define the expected environment variable name for each provider
-
-  // Providers that don't require API keys for authentication
-  const providersWithoutApiKeys = [
-    CUSTOM_PROVIDERS.OLLAMA,
-    CUSTOM_PROVIDERS.BEDROCK,
-    CUSTOM_PROVIDERS.MCP,
-    CUSTOM_PROVIDERS.GEMINI_CLI
-  ];
-
-  if (providersWithoutApiKeys.includes(providerName?.toLowerCase())) {
-    return true; // Indicate key status is effectively "OK"
-  }
-
-  // Claude Code doesn't require an API key
-  if (providerName?.toLowerCase() === 'claude-code') {
-    return true; // No API key needed
-  }
+  const normalized = providerName?.toLowerCase();
+  // Providers that don't require API keys for authentication
+  const providersWithoutApiKeys = [
+    CUSTOM_PROVIDERS.OLLAMA,
+    CUSTOM_PROVIDERS.BEDROCK,
+    CUSTOM_PROVIDERS.MCP,
+    CUSTOM_PROVIDERS.GEMINI_CLI,
+    'claude-code'
+  ];
+  if (providersWithoutApiKeys.includes(normalized)) {
+    return true; // Indicate key status is effectively "OK"
+  }

   const keyMap = {
@@
-  const providerKey = providerName?.toLowerCase();
+  const providerKey = normalized;
   if (!providerKey || !keyMap[providerKey]) {
     log('warn', `Unknown provider name: ${providerName} in isApiKeySet check.`);
     return false;
   }

Also applies to: 723-739


748-758: Early-return for MCP providers without keys + case normalization

Mirror isApiKeySet semantics and make the switch-case robust to casing.

Apply:

-function getMcpApiKeyStatus(providerName, projectRoot = null) {
+function getMcpApiKeyStatus(providerName, projectRoot = null) {
+  const provider = providerName?.toLowerCase();
+  // Providers that effectively don't need API keys checked in MCP config
+  if (
+    provider === 'claude-code' ||
+    provider === CUSTOM_PROVIDERS.OLLAMA ||
+    provider === CUSTOM_PROVIDERS.GEMINI_CLI ||
+    provider === CUSTOM_PROVIDERS.MCP ||
+    provider === CUSTOM_PROVIDERS.BEDROCK
+  ) {
+    return true;
+  }
@@
-    switch (providerName) {
+    switch (provider) {
       case 'anthropic':
         apiKeyToCheck = mcpEnv.ANTHROPIC_API_KEY;
         placeholderValue = 'YOUR_ANTHROPIC_API_KEY_HERE';
         break;

Also applies to: 777-829


148-177: Also validate model IDs against MODEL_MAP (non-strict, warn only)

You validate providers but not provider/model pairs. Add a warning to help users catch typos and deprecated IDs earlier.

Apply:

       if (
         config.models.fallback?.provider &&
         !validateProvider(config.models.fallback.provider)
       ) {
@@
         config.models.fallback.modelId = undefined;
       }
+      // Warn on unknown model IDs for known providers (non-strict)
+      const rolesToCheck = ['main', 'research', 'fallback'];
+      for (const roleName of rolesToCheck) {
+        const roleCfg = config.models[roleName];
+        if (
+          roleCfg?.provider &&
+          roleCfg?.modelId &&
+          !validateProviderModelCombination(roleCfg.provider, roleCfg.modelId)
+        ) {
+          console.warn(
+            chalk.yellow(
+              `Warning: Unknown modelId "${roleCfg.modelId}" for provider "${roleCfg.provider}" in ${roleName} config.`
+            )
+          );
+        }
+      }

41-46: Fix misleading comment about fallback defaults

The code sets a concrete fallback by default; the comment says otherwise.

Apply:

-      // No default fallback provider/model initially
+      // Default fallback provider/model

233-255: Cache default-loaded config to avoid repeated disk reads

Currently, calling getConfig() without explicitRoot reloads every time. Safe to cache when first loaded.

Apply:

   if (needsLoad) {
     const newConfig = _loadAndValidateConfig(explicitRoot); // _load handles null explicitRoot
@@
-    if (forceReload || explicitRoot) {
+    if (forceReload || explicitRoot || !loadedConfig) {
       loadedConfig = newConfig;
       loadedConfigRoot = explicitRoot; // Store the root used for this loaded config
     }
     return newConfig; // Return the newly loaded/default config
   }

967-968: Use consistent log level token

Elsewhere you use 'warn'; here it's 'warning'.

Apply:

-      log(
-        'warning',
+      log(
+        'warn',
         'Failed to write updated configuration with new userId. Please let the developers know.'
       );
scripts/modules/update-config-tokens.js (1)

27-35: Type-safe check and cap maxTokens to model limit (don’t raise it)

Match getParametersForRole semantics: use the model limit as an upper bound, not an unconditional overwrite.

Apply:

-          if (modelData && modelData.max_tokens) {
-            config.models[role].maxTokens = modelData.max_tokens;
-          }
+          const maxTokens = Number(modelData?.max_tokens);
+          if (Number.isFinite(maxTokens) && maxTokens > 0) {
+            const current = Number(config.models[role].maxTokens);
+            config.models[role].maxTokens = Number.isFinite(current)
+              ? Math.min(current, maxTokens)
+              : maxTokens;
+          }
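Extracted as a standalone sketch, the "cap, don't raise" rule proposed in the diff above behaves like this (the function name and argument shapes are illustrative, not the project's API):

```javascript
// Clamp a configured maxTokens to the model's limit: never raise a user's
// lower setting, only cap values that exceed what the model supports.
function capMaxTokens(currentMaxTokens, modelMaxTokens) {
	const limit = Number(modelMaxTokens);
	// No usable model limit: leave the configured value untouched.
	if (!Number.isFinite(limit) || limit <= 0) return currentMaxTokens;
	const current = Number(currentMaxTokens);
	// Missing/invalid configured value: fall back to the model limit.
	return Number.isFinite(current) ? Math.min(current, limit) : limit;
}

console.log(capMaxTokens(200000, 64000)); // 64000 — capped down to the model limit
console.log(capMaxTokens(32000, 64000)); // 32000 — lower user setting preserved
console.log(capMaxTokens(undefined, 64000)); // 64000 — fallback when unset
```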
tests/integration/profiles/roo-files-inclusion.test.js (1)

72-77: Reduce brittleness of string-based source assertions.

Asserting on literal log strings in source is fragile. Prefer importing the module and validating exported functions/behavior.

tests/integration/profiles/rules-files-inclusion.test.js (1)

77-83: Import path assertion is acceptable but brittle.

Consider validating behavior (e.g., exported ROO_MODES usage) instead of source string matching.

package.json (1)

103-105: Bump engines.node to >=22.0.0 and align CI to Node 22+

Repo uses "import ... with { type: 'json' }" (requires Node ≥22); engines remain >=18 and GitHub Actions use Node 20 — update to avoid CI/runtime failures.

  • Replace in package.json files: package.json (root), apps/cli/package.json, packages/tm-core/package.json — change "node": ">=18.0.0" → "node": ">=22.0.0".
  • Update all .github/workflows/* entries that set node-version: 20 to node-version: 22 (examples: .github/workflows/ci.yml, extension-ci.yml, release.yml, auto-close-duplicates.yml, backfill-duplicate-comments.yml, update-models-md.yml).
  • No .nvmrc or .tool-versions found — verify local/dev manager configs if used.
  • Evidence (JSON import usage): packages/tm-core/src/auth/oauth-service.ts, scripts/modules/update-config-tokens.js, scripts/modules/config-manager.js, mcp-server/src/index.js, mcp-server/src/tools/utils.js.

Diff (apply to each package.json engines block):

-  "engines": {
-    "node": ">=18.0.0"
-  },
+  "engines": {
+    "node": ">=22.0.0"
+  },
mcp-server/src/tools/utils.js (2)

287-325: Include telemetry in standard MCP responses.

Guidelines require telemetryData to reach clients from tools. handleApiResult currently drops it. Add passthrough when present.

 async function handleApiResult(
   result,
   log,
   errorPrefix = 'API error',
   processFunction = processMCPResponseData,
   projectRoot = null
 ) {
   // Get version info for every response
   const versionInfo = getVersionInfo();

   // Get tag info if project root is provided
   const tagInfo = projectRoot ? getTagInfo(projectRoot, log) : null;

   if (!result.success) {
     const errorMsg = result.error?.message || `Unknown ${errorPrefix}`;
     log.error(`${errorPrefix}: ${errorMsg}`);
     return createErrorResponse(errorMsg, versionInfo, tagInfo);
   }

   // Process the result data if needed
   const processedData = processFunction
     ? processFunction(result.data)
     : result.data;

   log.info('Successfully completed operation');

   // Create the response payload including version info and tag info
   const responsePayload = {
     data: processedData,
     version: versionInfo
   };
+  // Preserve telemetry data when provided by direct functions/tools
+  if (result.telemetryData) {
+    responsePayload.telemetryData = result.telemetryData;
+  }

   // Add tag information if available
   if (tagInfo) {
     responsePayload.tag = tagInfo;
   }

   return createContentResponse(responsePayload);
 }

178-192: Overly verbose session logging risks leaking sensitive paths; downgrade/sanitize.

Log only structure/flags at debug level, avoid raw URIs.

-  log.info(
-    `Session object: ${JSON.stringify({
+  log.debug(
+    `Session meta: ${JSON.stringify({
       hasSession: !!session,
       hasRoots: !!session?.roots,
       rootsType: typeof session?.roots,
       isRootsArray: Array.isArray(session?.roots),
       rootsLength: session?.roots?.length,
-      firstRoot: session?.roots?.[0],
       hasRootsRoots: !!session?.roots?.roots,
       rootsRootsType: typeof session?.roots?.roots,
       isRootsRootsArray: Array.isArray(session?.roots?.roots),
       rootsRootsLength: session?.roots?.roots?.length,
-      firstRootsRoot: session?.roots?.roots?.[0]
+      // Do not log raw URIs; redact potential PII
+      hasFirstRootUri: !!session?.roots?.[0]?.uri,
+      hasAltFirstRootUri: !!session?.roots?.roots?.[0]?.uri
     })}`
   );
mcp-server/src/index.js (1)

118-126: Bug: dereferencing session in the else branch when session is falsy.

This will throw when no session exists.

-    } else {
-      session.server.sendLoggingMessage({
-        data: {
-          context: session.context,
-          message: `No MCP sessions available, providers not registered`
-        },
-        level: 'warn'
-      });
-    }
+    } else {
+      this.logger?.warn('No MCP sessions available, providers not registered');
+    }
packages/tm-core/src/logger/factory.ts (1)

1-60: Remove "./logger" subpath export — restrict logger API to the package root

  • Delete the "./logger" entry from packages/tm-core/package.json exports (it currently exposes src/logger/index.ts / dist/logger/index.js).
  • Keep the re-export in packages/tm-core/src/index.ts (export { getLogger, createLogger, setGlobalLogger } from './logger/index.js';) so consumers access logger only via '@tm/core'.
packages/tm-core/src/interfaces/configuration.interface.ts (1)

404-445: Update default model IDs and MAX_PROMPT_LENGTH

File: packages/tm-core/src/interfaces/configuration.interface.ts (lines 404–445)

  • Anthropic: replace/deprecate "claude-3-5-sonnet-20241022" — Anthropic announced retirement (Aug 13, 2025); migrate to "claude-sonnet-4-20250514" or make the model configurable.
  • OpenAI: "gpt-4o-mini" is superseded by the GPT‑4.1 family (released Apr 14, 2025). gpt-4o-mini context ≈128K tokens; GPT‑4.1 variants support up to 1,000,000 tokens — update default to a current recommended OpenAI model.
  • MAX_PROMPT_LENGTH (100000) may not match provider token caps or units (tokens vs characters) — store and enforce provider-specific token limits and validate inputs at runtime.
  • Add inline links to provider docs and a "last-verified" date (e.g., 2025-09-11).
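One way to act on the token-limit point above is to store per-provider caps and validate at runtime instead of a single character-based constant. A hedged sketch, with placeholder limits that are illustrative only and not verified provider numbers:

```javascript
// Hypothetical provider-specific token caps replacing a single
// character-based MAX_PROMPT_LENGTH; these numbers are placeholders.
const PROVIDER_TOKEN_LIMITS = {
	anthropic: 200_000,
	openai: 128_000
};

// Fail fast when a prompt's estimated token count exceeds the provider cap.
function assertPromptWithinLimit(provider, estimatedTokens) {
	const limit = PROVIDER_TOKEN_LIMITS[provider];
	if (limit === undefined) {
		throw new Error(`No token limit configured for provider "${provider}"`);
	}
	if (estimatedTokens > limit) {
		throw new Error(
			`Prompt (~${estimatedTokens} tokens) exceeds ${provider} limit of ${limit}`
		);
	}
}

assertPromptWithinLimit('anthropic', 50_000); // within limit, no throw
console.log('within limit');
```

Keeping limits in a table also gives a natural place to attach the suggested doc links and last-verified dates per provider.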

Comment on lines +1 to +77
{
"meta": {
"generatedAt": "2025-08-06T12:39:03.250Z",
"tasksAnalyzed": 8,
"totalTasks": 11,
"analysisCount": 8,
"thresholdScore": 5,
"projectName": "Taskmaster",
"usedResearch": false
},
"complexityAnalysis": [
{
"taskId": 118,
"taskTitle": "Create AI Provider Base Architecture",
"complexityScore": 7,
"recommendedSubtasks": 5,
"expansionPrompt": "Break down the implementation of BaseProvider abstract TypeScript class into subtasks focusing on: 1) Converting existing JavaScript base-provider.js to TypeScript with proper interface definitions, 2) Implementing the Template Method pattern with abstract methods, 3) Adding comprehensive error handling and retry logic with exponential backoff, 4) Creating proper TypeScript types for all method signatures and options, 5) Setting up comprehensive unit tests with MockProvider. Consider that the existing codebase uses JavaScript ES modules and Vercel AI SDK, so the TypeScript implementation needs to maintain compatibility while adding type safety.",
"reasoning": "This task requires significant architectural work including converting existing JavaScript code to TypeScript, creating new interfaces, implementing design patterns, and ensuring backward compatibility. The existing base-provider.js already implements a sophisticated provider pattern using Vercel AI SDK, so the TypeScript conversion needs careful consideration of type definitions and maintaining existing functionality."
},
{
"taskId": 119,
"taskTitle": "Implement Provider Factory with Dynamic Imports",
"complexityScore": 5,
"recommendedSubtasks": 5,
"expansionPrompt": "Break down the Provider Factory implementation into: 1) Creating the ProviderFactory class structure with proper TypeScript typing, 2) Implementing the switch statement for provider selection logic, 3) Adding dynamic imports for each provider to enable tree-shaking, 4) Handling provider instantiation with configuration passing, 5) Implementing comprehensive error handling for module loading failures. Note that the existing codebase already has a provider selection mechanism in the JavaScript files, so ensure the factory pattern integrates smoothly with existing infrastructure.",
"reasoning": "This is a moderate complexity task that involves creating a factory pattern with dynamic imports. The existing codebase already has provider management logic, so the main complexity is in creating a clean TypeScript implementation with proper dynamic imports while maintaining compatibility with the existing JavaScript module system."
},
{
"taskId": 120,
"taskTitle": "Implement Anthropic Provider",
"complexityScore": 6,
"recommendedSubtasks": 5,
"expansionPrompt": "Implement the AnthropicProvider class in stages: 1) Set up the class structure extending BaseProvider with proper TypeScript imports and type definitions, 2) Implement constructor with Anthropic SDK client initialization and configuration handling, 3) Implement generateCompletion method with proper message format transformation and error handling, 4) Add token calculation methods and utility functions (getName, getModel, getDefaultModel), 5) Implement comprehensive error handling with custom error wrapping and type exports. The existing anthropic.js provider can serve as a reference but needs to be reimplemented to extend the new TypeScript BaseProvider.",
"reasoning": "This task involves integrating with an external SDK (@anthropic-ai/sdk) and implementing all abstract methods from BaseProvider. The existing JavaScript implementation provides a good reference, but the TypeScript version needs proper type definitions, error handling, and must work with the new abstract base class architecture."
},
{
"taskId": 121,
"taskTitle": "Create Prompt Builder and Task Parser",
"complexityScore": 8,
"recommendedSubtasks": 5,
"expansionPrompt": "Implement PromptBuilder and TaskParser with focus on: 1) Creating PromptBuilder class with template methods for building structured prompts with JSON format instructions, 2) Implementing TaskParser class structure with dependency injection of IAIProvider and IConfiguration, 3) Implementing parsePRD method with file reading, prompt generation, and AI provider integration, 4) Adding task enrichment logic with metadata, validation, and structure verification, 5) Implementing comprehensive error handling for all failure scenarios including file I/O, AI provider errors, and JSON parsing. The existing parse-prd.js provides complex logic that needs to be reimplemented with proper TypeScript types and cleaner architecture.",
"reasoning": "This is a complex task that involves multiple components working together: file I/O, AI provider integration, JSON parsing, and data validation. The existing parse-prd.js implementation is quite sophisticated with Zod schemas and complex task processing logic that needs to be reimplemented in TypeScript with proper separation of concerns."
},
{
"taskId": 122,
"taskTitle": "Implement Configuration Management",
"complexityScore": 6,
"recommendedSubtasks": 5,
"expansionPrompt": "Create ConfigManager implementation focusing on: 1) Setting up Zod validation schema that matches the IConfiguration interface structure, 2) Implementing ConfigManager constructor with default values merging and storage initialization, 3) Creating validate method with Zod schema parsing and user-friendly error transformation, 4) Implementing type-safe get method using TypeScript generics and keyof operator, 5) Adding getAll method and ensuring proper immutability and module exports. The existing config-manager.js has complex configuration loading logic that can inform the TypeScript implementation but needs cleaner architecture.",
"reasoning": "This task involves creating a configuration management system with validation using Zod. The existing JavaScript config-manager.js is quite complex with multiple configuration sources, defaults, and validation logic. The TypeScript version needs to provide a cleaner API while maintaining the flexibility of the current system."
},
{
"taskId": 123,
"taskTitle": "Create Utility Functions and Error Handling",
"complexityScore": 4,
"recommendedSubtasks": 5,
"expansionPrompt": "Implement utilities and error handling in stages: 1) Create ID generation module with generateTaskId and generateSubtaskId functions using proper random generation, 2) Implement base TaskMasterError class extending Error with proper TypeScript typing, 3) Add error sanitization methods to prevent sensitive data exposure in production, 4) Implement development-only logging with environment detection, 5) Create specialized error subclasses (FileNotFoundError, ParseError, ValidationError, APIError) with appropriate error codes and formatting.",
"reasoning": "This is a relatively straightforward task involving utility functions and error class hierarchies. The main complexity is in ensuring proper error sanitization for production use and creating a well-structured error hierarchy that can be used throughout the application."
},
{
"taskId": 124,
"taskTitle": "Implement TaskMasterCore Facade",
"complexityScore": 7,
"recommendedSubtasks": 5,
"expansionPrompt": "Build TaskMasterCore facade implementation: 1) Create class structure with proper TypeScript imports and type definitions for all subsystem interfaces, 2) Implement initialize method for lazy loading AI provider and parser instances based on configuration, 3) Create parsePRD method that coordinates parser, AI provider, and storage subsystems, 4) Implement getTasks and other facade methods for task retrieval and management, 5) Create createTaskMaster factory function and set up all module exports including type re-exports. Ensure proper ESM compatibility with .js extensions in imports.",
"reasoning": "This is a complex integration task that brings together all the other components into a cohesive facade. It requires understanding of the facade pattern, proper dependency management, lazy initialization, and careful module export structure for the public API."
},
{
"taskId": 125,
"taskTitle": "Create Placeholder Providers and Complete Testing",
"complexityScore": 5,
"recommendedSubtasks": 5,
"expansionPrompt": "Complete the implementation with placeholders and testing: 1) Create OpenAIProvider placeholder class extending BaseProvider with 'not yet implemented' errors, 2) Create GoogleProvider placeholder class with similar structure, 3) Implement MockProvider in tests/mocks directory with configurable responses and behavior simulation, 4) Write comprehensive unit tests for TaskParser covering all methods and edge cases, 5) Create integration tests for the complete parse-prd workflow ensuring 80% code coverage. Follow kebab-case naming convention for test files.",
"reasoning": "This task involves creating placeholder implementations and a comprehensive test suite. While the placeholder providers are simple, creating a good MockProvider and comprehensive tests requires understanding the entire system architecture and ensuring all edge cases are covered."
}
]
}
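Several of the expanded tasks above (notably 123, "Create Utility Functions and Error Handling") describe an error-class hierarchy with production-safe sanitization. A minimal TypeScript sketch of that idea follows; the class names and `toSanitized` method are assumptions for illustration, not the actual tm-core API:

```typescript
// Illustrative sketch of the error hierarchy outlined in task 123.
// Class names and fields here are assumptions, not the actual tm-core API.
class TaskMasterError extends Error {
  constructor(
    message: string,
    public readonly code: string
  ) {
    super(message);
    this.name = new.target.name;
  }

  // Redact the message outside development so sensitive paths/keys don't leak.
  toSanitized(isDev: boolean): { name: string; code: string; message: string } {
    return {
      name: this.name,
      code: this.code,
      message: isDev ? this.message : 'An internal error occurred'
    };
  }
}

class FileNotFoundError extends TaskMasterError {
  constructor(path: string) {
    super(`File not found: ${path}`, 'FILE_NOT_FOUND');
  }
}

class ValidationError extends TaskMasterError {
  constructor(message: string) {
    super(message, 'VALIDATION_ERROR');
  }
}
```

Using `new.target.name` in the base constructor keeps subclass names accurate without per-class boilerplate.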
Contributor
🧹 Nitpick (assertive)

Generated report — don’t hand-edit

Same note: treat this as generated; keep changes flowing from task-master tooling. If this replaces/duplicates tm-core-complexity.json, consider retaining only one canonical artifact to reduce drift.

🤖 Prompt for AI Agents
.taskmaster/reports/task-complexity-report_tm-core-phase-1.json lines 1-77: This
is a generated artifact that should not be hand-edited and appears to
duplicate/replace tm-core-complexity.json; to fix, remove this generated file
from the PR (undo manual edits), ensure only one canonical complexity report is
committed (either tm-core-complexity.json or this file), and update the
build/tooling or .gitignore to prevent generated reports from being committed in
future so the canonical artifact is produced and tracked by the tooling instead
of being checked in manually.
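Following the suggestion above, a minimal `.gitignore` entry could keep generated reports out of future commits (this assumes the reports live under `.taskmaster/reports/`; adjust the path to wherever the tooling writes them):

```gitignore
# Generated by task-master tooling; keep out of version control
.taskmaster/reports/
```

Note that files already tracked by git would still need `git rm --cached` once before the ignore rule takes effect.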

Comment on lines +7300 to +8057
},
"tm-core-phase-1": {
"tasks": [
{
"id": 115,
"title": "Initialize tm-core Package Structure",
"description": "Create the initial package structure for tm-core with all required directories and configuration files",
"details": "Create the packages/tm-core directory structure with all subdirectories as specified: src/, tests/, and all nested folders. Set up package.json with proper ESM/CJS configuration, tsconfig.json with strict TypeScript settings, tsup.config.js for dual format builds, and jest.config.js for testing. Ensure all barrel export files (index.ts) are created in each directory for clean imports.",
"testStrategy": "Verify directory structure matches specification exactly, ensure all configuration files are valid JSON/JS, run 'npm install' to verify package.json is correct, run 'tsc --noEmit' to verify TypeScript configuration",
"priority": "high",
"dependencies": [],
"status": "done",
"subtasks": [
{
"id": 1,
"title": "Create tm-core directory structure and base configuration files",
"description": "Set up the packages/tm-core directory with all required subdirectories and initialize core configuration files",
"dependencies": [],
"details": "Create packages/tm-core directory with subdirectories: src/, src/types/, src/interfaces/, src/providers/, src/parsers/, src/builders/, src/utils/, src/errors/, tests/, tests/unit/, tests/integration/, tests/mocks/. Create package.json with name '@task-master/tm-core', version '1.0.0', type 'module', main/module/types fields for dual ESM/CJS support, and necessary dependencies (typescript, tsup, jest, @types/node). Set up tsconfig.json with strict mode, ES2022 target, module resolution, and proper include/exclude patterns.\n<info added on 2025-08-06T10:49:59.891Z>\nImplementation completed as specified. Directory structure verified with all paths created correctly. Package.json configured with dual ESM/CJS support using tsup build tool, exports field properly set for both formats. TypeScript configuration established with strict mode enabled, ES2022 target for modern JavaScript features, and path mappings configured for clean imports like '@/types' and '@/utils'. All configuration files are valid and ready for development.\n</info added on 2025-08-06T10:49:59.891Z>",
"status": "done",
"testStrategy": "Verify directory structure with fs.existsSync() checks, validate package.json structure with JSON.parse(), ensure tsconfig.json compiles without errors"
},
{
"id": 2,
"title": "Configure build and test infrastructure",
"description": "Set up tsup build configuration for dual format support and Jest testing configuration",
"dependencies": [
"115.1"
],
"details": "Create tsup.config.js with dual format configuration (ESM and CJS), entry points from src/index.ts, declaration files generation, and sourcemaps. Configure jest.config.js with TypeScript preset, ESM support, proper module name mapping, coverage thresholds (80%), and test environment setup. Create .gitignore for node_modules, dist, and coverage directories. Add npm scripts in package.json for build, test, test:watch, and test:coverage commands.\n<info added on 2025-08-06T10:50:49.396Z>\nBuild process successfully configured with tsup.config.ts (TypeScript configuration file instead of JavaScript) supporting dual format output and multiple entry points including submodules. Jest configuration established with comprehensive ESM support and path alias mapping. Created tests/setup.ts for centralized test environment configuration. Added ES2022 compilation target for modern JavaScript features. Enhanced .gitignore to exclude additional development-specific files beyond the basic directories.\n</info added on 2025-08-06T10:50:49.396Z>",
"status": "done",
"testStrategy": "Run 'npm run build' to verify tsup configuration works, execute 'npm test' with a simple test file to confirm Jest setup, check that both .mjs and .cjs files are generated in dist/"
},
{
"id": 3,
"title": "Create barrel export files for all directories",
"description": "Implement index.ts files in each directory to enable clean imports throughout the package",
"dependencies": [
"115.1"
],
"details": "Create index.ts in src/ that exports from all subdirectories. Create index.ts in each subdirectory (types/, interfaces/, providers/, parsers/, builders/, utils/, errors/) with appropriate exports. For now, add placeholder comments indicating what will be exported from each module. Ensure proper export syntax for TypeScript types and interfaces using 'export type' where appropriate. Structure exports to allow consumers to import like '@task-master/tm-core/types' or from the main entry point.\n<info added on 2025-08-06T10:51:56.837Z>\nImplementation complete. All barrel export files have been created successfully with:\n\n- Main src/index.ts exporting from all subdirectories with proper TypeScript syntax\n- Individual index.ts files in types/, providers/, storage/, parser/, utils/, and errors/ directories\n- Proper ES module syntax with .js extensions for TypeScript compatibility\n- Placeholder exports with @deprecated JSDoc tags to indicate future implementation\n- Clean module structure supporting both root imports and submodule imports like '@task-master/tm-core/types'\n- All files include appropriate documentation comments explaining their purpose\n</info added on 2025-08-06T10:51:56.837Z>",
"status": "done",
"testStrategy": "Compile with TypeScript to ensure all index.ts files are valid, verify no circular dependencies exist, check that imports from package root work correctly"
},
{
"id": 4,
"title": "Add development tooling and documentation",
"description": "Set up development tools, linting, and initial documentation structure",
"dependencies": [
"115.1",
"115.2"
],
"details": "Create .eslintrc.js with TypeScript plugin and recommended rules for consistent code style. Add prettier configuration for code formatting. Create README.md with package overview, installation instructions, and usage examples (marked as 'coming soon'). Add CHANGELOG.md to track version changes. Create npm scripts for linting and formatting. Add pre-commit hooks configuration if needed. Document the dual ESM/CJS support in README.\n<info added on 2025-08-06T10:53:45.056Z>\nI'll analyze the user's request and the context to determine what new information should be added to the subtask's details.Successfully completed development tooling and documentation setup. Created .eslintrc.js with TypeScript plugin and comprehensive rules including no-explicit-any, consistent-type-imports, and proper TypeScript checks. Added .prettierrc.json with sensible defaults for consistent code formatting. Created comprehensive README.md with package overview, installation instructions, usage examples for both ESM and CommonJS, modular imports, architecture description, development setup, and detailed roadmap for tasks 116-125. Added CHANGELOG.md following Keep a Changelog format with current package status and planned features. All development tooling is configured and ready for use.\n</info added on 2025-08-06T10:53:45.056Z>",
"status": "done",
"testStrategy": "Run eslint on sample TypeScript files, verify prettier formats code consistently, ensure all npm scripts execute without errors"
},
{
"id": 5,
"title": "Validate package structure and prepare for development",
"description": "Perform final validation of the package structure and ensure it's ready for implementation",
"dependencies": [
"115.1",
"115.2",
"115.3",
"115.4"
],
"details": "Run 'npm install' to ensure all dependencies are properly resolved. Execute 'tsc --noEmit' to verify TypeScript configuration is correct. Create a simple smoke test in tests/ that imports from the package to verify module resolution works. Ensure the package can be linked locally for testing in other projects. Verify that both CommonJS and ESM imports work correctly. Create a checklist in README for remaining implementation tasks based on tasks 116-125.\n<info added on 2025-08-06T11:02:21.457Z>\nSuccessfully validated package structure with comprehensive testing. All validations passed: npm install resolved dependencies without issues, TypeScript compilation (tsc --noEmit) showed no errors, and dual-format build (npm run build) successfully generated both ESM and CJS outputs with proper TypeScript declarations. Created and executed comprehensive smoke test suite covering all module imports, placeholder functionality, and type definitions - all 8 tests passing. Code quality tools (ESLint, Prettier) are properly configured and show no issues. Package is confirmed ready for local linking and supports both CommonJS and ESM import patterns. README updated with implementation checklist marking Task 115 as complete and clearly outlining remaining implementation tasks 116-125. Package structure validation is complete and development environment is fully prepared for core implementation phase.\n</info added on 2025-08-06T11:02:21.457Z>",
"status": "done",
"testStrategy": "Successfully run all build and test commands, verify package can be imported in both ESM and CJS test files, ensure TypeScript compilation produces no errors, confirm all directories contain appropriate index.ts files"
}
]
},
{
"id": 116,
"title": "Define Core TypeScript Types and Interfaces",
"description": "Create all TypeScript type definitions and interfaces for the tm-core package",
"details": "Create types/index.ts with Task, Subtask, TaskMetadata interfaces and type literals (TaskStatus, TaskPriority, TaskComplexity). Create all interface files: storage.interface.ts with IStorage methods, ai-provider.interface.ts with IAIProvider and AIOptions, configuration.interface.ts with IConfiguration. Use strict typing throughout, no 'any' types allowed. Follow naming conventions: interfaces prefixed with 'I', type literals in PascalCase.",
"testStrategy": "Compile with TypeScript to ensure no type errors, create mock implementations to verify interfaces are complete, use type checking in IDE to confirm all required properties are defined",
"priority": "high",
"dependencies": [
115
],
"status": "done",
"subtasks": [
{
"id": 1,
"title": "Create Core Task and Subtask Type Definitions",
"description": "Create types/index.ts with fundamental Task and Subtask interfaces including all required properties",
"dependencies": [],
"details": "Create types/index.ts file. Define Task interface with properties: id (string), title (string), description (string), status (TaskStatus), priority (TaskPriority), dependencies (string[]), details (string), testStrategy (string), subtasks (Subtask[]). Define Subtask interface extending Task but with numeric id. Define TaskMetadata interface with version (string), lastModified (string), taskCount (number), completedCount (number). Export all interfaces.\n<info added on 2025-08-06T11:03:44.220Z>\nImplementation completed with comprehensive type system. Created all required interfaces with strict typing, added optional properties for enhanced functionality (createdAt, updatedAt, effort, actualEffort, tags). Implemented utility types for create/update operations with proper type constraints. Added filter interfaces for advanced querying. Included runtime type guards for safe type narrowing. Successfully compiled without TypeScript errors, ready for integration with storage and AI provider implementations.\n</info added on 2025-08-06T11:03:44.220Z>",
"status": "done",
"testStrategy": "Compile TypeScript files to ensure no type errors, create sample objects conforming to interfaces to verify completeness"
},
{
"id": 2,
"title": "Define Type Literals and Enums",
"description": "Create all type literal definitions for TaskStatus, TaskPriority, and TaskComplexity in the types file",
"dependencies": [
"116.1"
],
"details": "In types/index.ts, define type literals: TaskStatus = 'pending' | 'in-progress' | 'done' | 'deferred' | 'cancelled' | 'blocked'; TaskPriority = 'low' | 'medium' | 'high' | 'critical'; TaskComplexity = 'simple' | 'moderate' | 'complex' | 'very-complex'. Consider using const assertions for better type inference. Export all type literals.\n<info added on 2025-08-06T11:04:04.675Z>\nType literals were already implemented in subtask 116.1 as part of the comprehensive type system. The types/index.ts file includes all required type literals: TaskStatus with values 'pending' | 'in-progress' | 'done' | 'deferred' | 'cancelled' | 'blocked' | 'review', TaskPriority with values 'low' | 'medium' | 'high' | 'critical', and TaskComplexity with values 'simple' | 'moderate' | 'complex' | 'very-complex'. All type literals are properly exported and include comprehensive JSDoc documentation. TypeScript compilation verified the types work correctly.\n</info added on 2025-08-06T11:04:04.675Z>",
"status": "done",
"testStrategy": "Use TypeScript compiler to verify type literals work correctly, test with invalid values to ensure type checking catches errors"
},
{
"id": 3,
"title": "Create Storage Interface Definition",
"description": "Create storage.interface.ts with IStorage interface defining all storage operation methods",
"dependencies": [
"116.1"
],
"details": "Create interfaces/storage.interface.ts file. Define IStorage interface with methods: loadTasks(tag?: string): Promise<Task[]>; saveTasks(tasks: Task[], tag?: string): Promise<void>; appendTasks(tasks: Task[], tag?: string): Promise<void>; updateTask(taskId: string, updates: Partial<Task>, tag?: string): Promise<void>; deleteTask(taskId: string, tag?: string): Promise<void>; exists(tag?: string): Promise<boolean>. Import Task type from types/index.ts.\n<info added on 2025-08-06T11:05:00.573Z>\nImplementation completed successfully. Extended IStorage interface beyond original specification to include metadata operations (loadMetadata, saveMetadata), tag management (getAllTags, deleteTag, renameTag, copyTag), and lifecycle methods (initialize, close, getStats). Added StorageStats interface for monitoring storage metrics and StorageConfig interface for configuration options. Implemented BaseStorage abstract class that provides common functionality including task validation using validateTask method, tag sanitization with sanitizeTag to ensure valid filenames, and backup path generation through getBackupPath for data safety. The abstract class serves as a foundation for concrete storage implementations, reducing code duplication and ensuring consistent behavior across different storage backends. All methods properly typed with async/await patterns and comprehensive error handling considerations.\n</info added on 2025-08-06T11:05:00.573Z>",
"status": "done",
"testStrategy": "Create mock implementation of IStorage to verify all methods are properly typed, ensure Promise return types are correct"
},
{
"id": 4,
"title": "Create AI Provider Interface Definition",
"description": "Create ai-provider.interface.ts with IAIProvider interface and AIOptions type",
"dependencies": [
"116.1"
],
"details": "Create interfaces/ai-provider.interface.ts file. Define AIOptions interface with properties: temperature (number), maxTokens (number), stream (boolean), topP (number), frequencyPenalty (number). Define IAIProvider interface with methods: generateCompletion(prompt: string, options?: AIOptions): Promise<string>; calculateTokens(text: string): number; getName(): string; getModel(): string; getDefaultModel(): string; isAvailable(): Promise<boolean>.\n<info added on 2025-08-06T11:06:15.795Z>\nFile successfully updated with expanded interface implementation details including comprehensive method signatures and supporting interfaces: AIOptions with full parameter set (temperature, maxTokens, stream, model, topP, topK, frequencyPenalty, presencePenalty, stopSequences, systemPrompt), AIResponse structure with content and usage metadata, AIModel interface for model information, ProviderInfo for capability tracking, ProviderUsageStats for usage monitoring, AIProviderConfig for initialization, and additional interfaces for streaming support. Documented BaseAIProvider abstract class implementation with validation, usage tracking, and common utility methods. All interfaces properly typed with strict TypeScript patterns and async/await support. No compilation errors.\n</info added on 2025-08-06T11:06:15.795Z>",
"status": "done",
"testStrategy": "Create stub implementation to verify interface completeness, test optional parameters work correctly"
},
{
"id": 5,
"title": "Create Configuration Interface Definition",
"description": "Create configuration.interface.ts with IConfiguration interface for all config options",
"dependencies": [
"116.1",
"116.2"
],
"details": "Create interfaces/configuration.interface.ts file. Define IConfiguration interface with properties: projectPath (string), aiProvider (string), apiKeys (Record<string, string>), models (object with main, research, fallback as strings), enableTags (boolean), defaultTag (string), maxConcurrentTasks (number), retryAttempts (number), retryDelay (number). Import necessary types from types/index.ts. Ensure all properties have appropriate types with no 'any' usage.\n<info added on 2025-08-06T11:07:43.367Z>\nImplementation completed successfully. Created comprehensive configuration system with:\n\n- Core IConfiguration interface with all required properties: projectPath, aiProvider, apiKeys, models configuration, providers settings, tasks management, tags configuration, storage options, retry behavior, logging preferences, and security settings\n- Supporting interfaces for each configuration section: ModelConfig for AI model selection, ProviderConfig for API provider settings, TaskSettings for task management options, TagSettings for tag-based organization, StorageSettings for persistence configuration, RetrySettings for error handling, LoggingSettings for debugging options, SecuritySettings for API key management\n- Configuration management system with IConfigurationFactory for creating configs from various sources (file, environment, defaults) and IConfigurationManager for runtime config operations including loading, saving, validation, watching for changes, and merging configurations\n- Validation system with ConfigValidationResult interface for detailed error reporting, ConfigSchema for JSON schema validation, and EnvironmentConfig for environment variable mapping\n- DEFAULT_CONFIG_VALUES constant providing sensible defaults for all configuration options\n- All interfaces properly typed with strict TypeScript typing, no 'any' usage, proper imports from types/index\n- Successfully exported all interfaces through main index.ts for package consumers\n- TypeScript compilation confirmed passing without any type errors\n</info added on 2025-08-06T11:07:43.367Z>",
"status": "done",
"testStrategy": "Create sample configuration objects to verify interface covers all needed options, test with partial configs to ensure optional properties work"
}
]
},
{
"id": 117,
"title": "Implement Storage Layer with Repository Pattern",
"description": "Create FileStorage class implementing IStorage interface for task persistence",
"details": "Implement FileStorage class in storage/file-storage.ts following Repository pattern. Constructor accepts projectPath, private basePath property set to {projectPath}/.taskmaster. Implement all IStorage methods: loadTasks, saveTasks, appendTasks, updateTask, deleteTask, exists. Handle file operations with proper error handling (ENOENT returns empty arrays). Use JSON format with tasks array and metadata object containing version and lastModified. Create getTasksPath method to handle tag-based file paths.",
"testStrategy": "Unit test all FileStorage methods with mock file system, test error scenarios (missing files, invalid JSON), verify tag-based path generation, test concurrent operations, ensure proper directory creation",
"priority": "high",
"dependencies": [
116
],
"status": "done",
"subtasks": [
{
"id": 1,
"title": "Create FileStorage class structure and constructor",
"description": "Set up the FileStorage class skeleton with proper TypeScript typing and implement the constructor that accepts projectPath parameter",
"dependencies": [],
"details": "Create storage/file-storage.ts file. Import necessary Node.js modules (fs/promises, path). Import IStorage interface and Task type from types. Define FileStorage class implementing IStorage. Create constructor accepting projectPath string parameter. Initialize private basePath property as `${projectPath}/.taskmaster`. Add private property for managing file locks if needed for concurrent operations.",
"status": "done",
"testStrategy": "Unit test constructor initialization, verify basePath is correctly set, test with various projectPath inputs including edge cases"
},
{
"id": 2,
"title": "Implement file path management and helper methods",
"description": "Create internal helper methods for managing file paths and ensuring directory structure exists",
"dependencies": [
"117.1"
],
"details": "Implement private getTasksPath(tag?: string) method that returns path to tasks file based on optional tag parameter. If tag provided, return `${basePath}/tasks/${tag}.json`, otherwise `${basePath}/tasks/tasks.json`. Create private ensureDirectoryExists() method that creates .taskmaster and tasks directories if they don't exist using fs.mkdir with recursive option. Add private method for safe JSON parsing with error handling.",
"status": "done",
"testStrategy": "Test getTasksPath with and without tags, verify directory creation works recursively, test JSON parsing with valid and invalid data"
},
{
"id": 3,
"title": "Implement read operations: loadTasks and exists",
"description": "Implement methods for reading tasks from the file system with proper error handling",
"dependencies": [
"117.2"
],
"details": "Implement loadTasks(tag?: string) method: use getTasksPath to get file path, read file using fs.readFile, parse JSON content, return tasks array from parsed data. Handle ENOENT error by returning empty array. Handle JSON parse errors appropriately. Implement exists(tag?: string) method: use fs.access to check if file exists at getTasksPath location, return boolean result.",
"status": "done",
"testStrategy": "Test loadTasks with existing files, missing files (ENOENT), corrupted JSON files. Test exists method with present and absent files"
},
{
"id": 4,
"title": "Implement write operations: saveTasks and appendTasks",
"description": "Implement methods for persisting tasks to the file system with metadata",
"dependencies": [
"117.3"
],
"details": "Implement saveTasks(tasks: Task[], tag?: string) method: ensure directory exists, create data object with tasks array and metadata object containing version (e.g., '1.0.0') and lastModified (ISO timestamp). Write to file using fs.writeFile with JSON.stringify and proper formatting. Implement appendTasks(tasks: Task[], tag?: string) method: load existing tasks, merge with new tasks (avoiding duplicates by ID), call saveTasks with merged array.",
"status": "done",
"testStrategy": "Test saveTasks creates files with correct structure, verify metadata is included, test appendTasks merges correctly without duplicates"
},
{
"id": 5,
"title": "Implement update and delete operations",
"description": "Implement methods for modifying and removing individual tasks with atomic operations",
"dependencies": [
"117.4"
],
"details": "Implement updateTask(taskId: string, updates: Partial<Task>, tag?: string) method: load tasks, find task by ID, merge updates using object spread, save updated tasks array. Return boolean indicating success. Implement deleteTask(taskId: string, tag?: string) method: load tasks, filter out task with matching ID, save filtered array. Return boolean indicating if task was found and deleted. Ensure both operations are atomic using temporary files if needed.",
"status": "done",
"testStrategy": "Test updateTask with existing and non-existing tasks, verify partial updates work correctly. Test deleteTask removes correct task, handles missing tasks gracefully"
}
]
},
{
"id": 118,
"title": "Create AI Provider Base Architecture",
"description": "Implement abstract BaseProvider class and provider interfaces using Template Method pattern",
"details": "Convert existing base-provider.js to TypeScript abstract class BaseProvider implementing IAIProvider. Add protected properties for apiKey and model. Create abstract methods: generateCompletion, calculateTokens, getName, getModel, getDefaultModel. Apply Template Method pattern for common provider logic like error handling and retry logic. Ensure proper type safety throughout.",
"testStrategy": "Create MockProvider extending BaseProvider to test abstract class functionality, verify all abstract methods are properly defined, test error handling and common logic",
"priority": "high",
"dependencies": [
116
],
"status": "done",
"subtasks": [
{
"id": 1,
"title": "Convert base-provider.js to TypeScript and define IAIProvider interface",
"description": "Create the IAIProvider interface with all required method signatures and convert the existing base-provider.js file to a TypeScript file with proper type definitions",
"dependencies": [],
"details": "Create src/types/providers.ts with IAIProvider interface containing methods: generateCompletion(prompt: string, options?: CompletionOptions): Promise<CompletionResult>, calculateTokens(text: string): number, getName(): string, getModel(): string, getDefaultModel(): string. Move base-provider.js to src/providers/base-provider.ts and add initial TypeScript types.\n<info added on 2025-08-06T12:16:45.893Z>\nSince the IAIProvider interface already exists in src/interfaces/ai-provider.interface.ts with all required methods and type definitions, update the subtask to focus on converting base-provider.js to TypeScript and implementing the BaseAIProvider abstract class. The conversion should extend the existing BaseAIProvider from src/interfaces/ai-provider.interface.ts rather than creating duplicate interfaces. Ensure the implementation aligns with the comprehensive interface that includes AIOptions, AIResponse, AIModel, ProviderInfo types and methods for streaming, validation, and usage tracking.\n</info added on 2025-08-06T12:16:45.893Z>",
"status": "done",
"testStrategy": "Verify that the interface is properly defined and that TypeScript compilation succeeds without errors"
},
{
"id": 2,
"title": "Implement BaseProvider abstract class with core properties",
"description": "Create the abstract BaseProvider class implementing IAIProvider with protected properties for apiKey and model configuration",
"dependencies": [
"118.1"
],
"details": "In base-provider.ts, define abstract class BaseProvider implements IAIProvider with protected properties: apiKey: string, model: string, maxRetries: number = 3, retryDelay: number = 1000. Add constructor that accepts BaseProviderConfig interface with apiKey and optional model. Implement getModel() method to return current model.\n<info added on 2025-08-06T12:28:45.485Z>\nI've reviewed the existing BaseAIProvider interface in the interfaces file. The task requires creating a separate BaseProvider abstract class in base-provider.ts that implements the IAIProvider interface, with specific protected properties and configuration. This appears to be a deliberate architectural decision to have a more concrete base class with built-in retry logic and configuration management that all provider implementations will extend.\n</info added on 2025-08-06T12:28:45.485Z>\n<info added on 2025-08-06T13:14:24.539Z>\nSuccessfully implemented BaseProvider abstract class:\n\nIMPLEMENTED FILES:\n✅ packages/tm-core/src/providers/base-provider.ts - Created new BaseProvider abstract class\n✅ packages/tm-core/src/providers/index.ts - Updated to export BaseProvider\n\nIMPLEMENTATION DETAILS:\n- Created BaseProviderConfig interface with required apiKey and optional model\n- BaseProvider abstract class implements IAIProvider interface\n- Protected properties implemented as specified:\n - apiKey: string \n - model: string\n - maxRetries: number = 3\n - retryDelay: number = 1000\n- Constructor accepts BaseProviderConfig and sets apiKey and model (using getDefaultModel() if not provided)\n- Implemented getModel() method that returns current model\n- All IAIProvider methods declared as abstract (to be implemented by concrete providers)\n- Uses .js extension for ESM import compatibility\n- TypeScript compilation verified successful\n\nThe BaseProvider provides the foundation for concrete provider implementations with shared retry logic properties and standardized configuration.\n</info 
added on 2025-08-06T13:14:24.539Z>\n<info added on 2025-08-20T17:16:14.037Z>\nREFACTORING REQUIRED: The BaseProvider implementation needs to be relocated from packages/tm-core/src/providers/base-provider.ts to packages/tm-core/src/providers/ai/base-provider.ts following the new directory structure. The class must implement the Template Method pattern with the following structure:\n\n1. Keep constructor concise (under 10 lines) - only initialize apiKey and model properties\n2. Remove maxRetries and retryDelay from constructor - these should be class-level constants or configurable separately\n3. Implement all abstract methods from IAIProvider: generateCompletion, calculateTokens, getName, getModel, getDefaultModel\n4. Add protected template methods for extensibility:\n - validateInput(input: string): void - for input validation with early returns\n - prepareRequest(input: string, options?: any): any - for request preparation\n - handleResponse(response: any): string - for response processing\n - handleError(error: any): never - for consistent error handling\n5. Apply clean code principles: extract complex logic into small focused methods, use early returns to reduce nesting, ensure each method has single responsibility\n\nThe refactored BaseProvider will serve as a robust foundation using Template Method pattern, allowing concrete providers to override specific behaviors while maintaining consistent structure and error handling across all AI provider implementations.\n</info added on 2025-08-20T17:16:14.037Z>\n<info added on 2025-08-21T15:57:30.467Z>\nREFACTORING UPDATE: The BaseProvider implementation in packages/tm-core/src/providers/base-provider.ts is now affected by the core/ folder removal and needs its import paths updated. Since base-provider.ts imports from '../interfaces/provider.interface.js', this import remains valid as both providers/ and interfaces/ are at the same level. No changes needed to BaseProvider imports due to the flattening. 
The file structure reorganization maintains the relative path relationship between providers/ and interfaces/ directories.\n</info added on 2025-08-21T15:57:30.467Z>",
"status": "done",
"testStrategy": "Create a test file that attempts to instantiate BaseProvider directly (should fail) and verify that protected properties are accessible in child classes"
},
{
"id": 3,
"title": "Define abstract methods and implement Template Method pattern",
"description": "Add all abstract methods to BaseProvider and implement the Template Method pattern for common provider operations",
"dependencies": [
"118.2"
],
"details": "Add abstract methods: protected abstract generateCompletionInternal(prompt: string, options?: CompletionOptions): Promise<CompletionResult>, abstract calculateTokens(text: string): number, abstract getName(): string, abstract getDefaultModel(): string. Implement public generateCompletion() as template method that calls generateCompletionInternal() with error handling and retry logic.\n<info added on 2025-08-20T17:16:38.315Z>\nApply Template Method pattern following clean code principles:\n\nDefine abstract methods:\n- protected abstract generateCompletionInternal(prompt: string, options?: CompletionOptions): Promise<CompletionResult>\n- protected abstract calculateTokens(text: string): number\n- protected abstract getName(): string\n- protected abstract getDefaultModel(): string\n- protected abstract getMaxRetries(): number\n- protected abstract getRetryDelay(): number\n\nImplement template method generateCompletion():\n- Call validateInput() with early returns for invalid prompt/options\n- Call prepareRequest() to format the request\n- Execute generateCompletionInternal() with retry logic\n- Call handleResponse() to process the result\n- Call handleError() in catch blocks\n\nAdd protected helper methods:\n- validateInput(prompt: string, options?: CompletionOptions): ValidationResult - Check prompt length, validate options, return early on errors\n- prepareRequest(prompt: string, options?: CompletionOptions): PreparedRequest - Format prompt, merge with defaults, add metadata\n- handleResponse(result: CompletionResult): ProcessedResult - Validate response format, extract completion text, add usage metrics\n- handleError(error: unknown, attempt: number): void - Log error details, determine if retryable, throw TaskMasterError\n\nExtract retry logic helpers:\n- shouldRetry(error: unknown, attempt: number): boolean - Check error type and attempt count\n- calculateBackoffDelay(attempt: number): number - Use exponential backoff with jitter\n- 
isRateLimitError(error: unknown): boolean - Detect rate limit responses\n- isTimeoutError(error: unknown): boolean - Detect timeout errors\n\nUse named constants:\n- DEFAULT_MAX_RETRIES = 3\n- BASE_RETRY_DELAY_MS = 1000\n- MAX_RETRY_DELAY_MS = 32000\n- BACKOFF_MULTIPLIER = 2\n- JITTER_FACTOR = 0.1\n\nEnsure each method stays under 30 lines by extracting complex logic into focused helper methods.\n</info added on 2025-08-20T17:16:38.315Z>",
"status": "done",
"testStrategy": "Create MockProvider extending BaseProvider to verify all abstract methods must be implemented and template method properly delegates to internal methods"
},
{
"id": 4,
"title": "Implement error handling and retry logic with exponential backoff",
"description": "Add comprehensive error handling and retry mechanism with exponential backoff for API calls in the template method",
"dependencies": [
"118.3"
],
"details": "In generateCompletion() template method, wrap generateCompletionInternal() in try-catch with retry logic. Implement exponential backoff: delay * Math.pow(2, attempt). Add error types: ProviderError, RateLimitError, AuthenticationError extending Error. Log errors in development mode only. Handle specific error cases like rate limits (429), authentication errors (401), and network timeouts.",
"status": "done",
"testStrategy": "Test retry logic with MockProvider that fails N times then succeeds, verify exponential backoff timing, test different error scenarios and their handling"
},
{
"id": 5,
"title": "Add validation, logging, and completion options handling",
"description": "Implement input validation, debug logging for development, and proper handling of completion options like temperature and max tokens",
"dependencies": [
"118.4"
],
"details": "Add validatePrompt() method to check for empty/invalid prompts. Add validateOptions() to ensure temperature is between 0-2, maxTokens is positive. Implement debug logging using console.log only when NODE_ENV !== 'production'. Create CompletionOptions interface with optional temperature, maxTokens, topP, frequencyPenalty, presencePenalty. Ensure all validation errors throw descriptive ProviderError instances.",
"status": "done",
"testStrategy": "Test validation with invalid inputs (empty prompts, negative maxTokens, temperature > 2), verify logging only occurs in development, test option merging with defaults"
}
]
},
{
"id": 119,
"title": "Implement Provider Factory with Dynamic Imports",
"description": "Create ProviderFactory class using Factory pattern for AI provider instantiation",
"details": "Implement ProviderFactory class in ai/provider-factory.ts with static create method. Use switch statement for provider selection ('anthropic', 'openai', 'google'). Implement dynamic imports for each provider to enable tree-shaking. Return Promise<IAIProvider> from create method. Handle unknown providers with meaningful error messages. Ensure proper typing for configuration object.",
"testStrategy": "Test factory with each provider type, verify dynamic imports work correctly, test error handling for unknown providers, mock dynamic imports for unit testing",
"priority": "medium",
"dependencies": [
118
],
"status": "pending",
"subtasks": [
{
"id": 1,
"title": "Create ProviderFactory class structure and types",
"description": "Set up the ProviderFactory class file with proper TypeScript types and interfaces",
"dependencies": [],
"details": "Create ai/provider-factory.ts file. Define ProviderFactory class with static create method signature. Import IAIProvider interface from base provider. Define ProviderType as union type ('anthropic' | 'openai' | 'google'). Set up proper return type as Promise<IAIProvider> for the create method to support dynamic imports.\n<info added on 2025-08-20T17:16:56.506Z>\nClean code architecture implementation: Move to src/providers/ai/provider-factory.ts. Follow Single Responsibility Principle - factory only creates providers, no other responsibilities. Create validateProviderConfig() method for provider configuration validation. Define PROVIDER_NAMES constant object with provider string values. Implement create() method with early returns pattern for better readability. Apply Dependency Inversion - factory depends on IAIProvider interface abstraction, not concrete implementations. Keep method under 40 lines following clean code practices.\n</info added on 2025-08-20T17:16:56.506Z>",
"status": "pending",
"testStrategy": "Verify file structure and type definitions compile correctly"
},
{
"id": 2,
"title": "Implement provider selection logic with switch statement",
"description": "Add the core switch statement logic to handle different provider types",
"dependencies": [
"119.1"
],
"details": "Inside the static create method, implement switch statement on provider type parameter. Add cases for 'anthropic', 'openai', and 'google'. Add default case that throws a descriptive error for unknown providers (e.g., throw new Error(`Unknown provider: ${providerType}`)). Structure each case to prepare for dynamic imports.",
"status": "pending",
"testStrategy": "Test switch statement with valid and invalid provider types, verify error messages"
},
{
"id": 3,
"title": "Add dynamic imports for each provider",
"description": "Implement dynamic import() statements for lazy loading provider modules",
"dependencies": [
"119.2"
],
"details": "In each switch case, use dynamic import() to load the provider module: for 'anthropic' case use await import('./providers/anthropic-provider'), similar for OpenAI and Google providers. Extract the default export or specific class from each dynamic import. This enables tree-shaking by only loading the selected provider.",
"status": "pending",
"testStrategy": "Mock dynamic imports in tests, verify only requested provider is loaded"
},
{
"id": 4,
"title": "Instantiate providers with configuration",
"description": "Create provider instances with proper configuration passing",
"dependencies": [
"119.3"
],
"details": "After each dynamic import, instantiate the provider class with the configuration object passed to create method. Ensure configuration object is properly typed (use IConfiguration or relevant subset). Return the instantiated provider instance. Handle any instantiation errors and wrap them with context about which provider failed.",
"status": "pending",
"testStrategy": "Test provider instantiation with various configuration objects, verify configuration is passed correctly"
},
{
"id": 5,
"title": "Add error handling and validation",
"description": "Implement comprehensive error handling for all failure scenarios",
"dependencies": [
"119.4"
],
"details": "Wrap dynamic imports in try-catch blocks to handle module loading failures. Add validation for configuration object before passing to providers. Create custom error messages that include the provider type and specific failure reason. Consider adding a ProviderFactoryError custom error class. Ensure all errors bubble up properly while maintaining async/await chain.",
"status": "pending",
"testStrategy": "Test various error scenarios: missing provider modules, invalid configurations, network failures during dynamic import"
}
]
},
{
"id": 120,
"title": "Implement Anthropic Provider",
"description": "Create AnthropicProvider class extending BaseProvider with full Anthropic SDK integration",
"details": "Create AnthropicProvider class in ai/providers/anthropic-provider.ts extending BaseProvider. Import and use @anthropic-ai/sdk. Initialize private client property in constructor. Implement all abstract methods: generateCompletion using Claude API, calculateTokens using appropriate tokenizer, getName returning 'anthropic', getModel returning current model, getDefaultModel returning 'claude-3-sonnet-20240229'. Wrap API errors with context.",
"testStrategy": "Mock Anthropic SDK for unit tests, test API error handling, verify token calculation accuracy, test with different model configurations",
"priority": "high",
"dependencies": [
118
],
"status": "pending",
"subtasks": [
{
"id": 1,
"title": "Set up AnthropicProvider class structure and dependencies",
"description": "Create the AnthropicProvider class file with proper imports and class structure extending BaseProvider",
"dependencies": [],
"details": "Create ai/providers/anthropic-provider.ts file. Import BaseProvider from base-provider.ts and import Anthropic from @anthropic-ai/sdk. Import necessary types including IAIProvider, ChatMessage, and ChatCompletion. Set up the class declaration extending BaseProvider with proper TypeScript typing. Add private client property declaration of type Anthropic.\n<info added on 2025-08-20T17:17:15.019Z>\nFile should be created at src/providers/ai/adapters/anthropic-provider.ts instead of ai/providers/anthropic-provider.ts. Follow clean code principles: keep constructor minimal (under 10 lines) with only client initialization. Extract API call logic into separate small methods (each under 20 lines). Use early returns in generateCompletionInternal() for better readability. Extract error mapping logic to a dedicated mapAnthropicError() method. Avoid magic strings - define constants for model names and API parameters.\n</info added on 2025-08-20T17:17:15.019Z>",
"status": "pending",
"testStrategy": "Verify file structure and imports compile without errors, ensure class properly extends BaseProvider"
},
{
"id": 2,
"title": "Implement constructor and client initialization",
"description": "Create the constructor that accepts configuration and initializes the Anthropic SDK client",
"dependencies": [
"120.1"
],
"details": "Implement constructor accepting IConfiguration parameter. Call super(config) to initialize BaseProvider. Initialize the private client property by creating new Anthropic instance with apiKey from config.apiKeys.anthropic. Add validation to ensure API key exists, throwing meaningful error if missing. Store the model configuration from config.model or use default.",
"status": "pending",
"testStrategy": "Test constructor with valid and invalid configurations, verify client initialization, test API key validation"
},
{
"id": 3,
"title": "Implement generateCompletion method with Claude API",
"description": "Implement the main generateCompletion method that calls Anthropic's Claude API and handles responses",
"dependencies": [
"120.2"
],
"details": "Implement async generateCompletion method accepting ChatMessage array. Map ChatMessage format to Anthropic's expected format (role and content). Use client.messages.create() with appropriate parameters including model, max_tokens, and messages. Transform Anthropic response format to ChatCompletion interface. Handle streaming vs non-streaming responses. Implement proper error handling wrapping API errors with context.",
"status": "pending",
"testStrategy": "Mock Anthropic SDK client.messages.create, test with various message formats, verify response transformation, test error scenarios"
},
{
"id": 4,
"title": "Implement token calculation and utility methods",
"description": "Implement calculateTokens method and other required abstract methods from BaseProvider",
"dependencies": [
"120.3"
],
"details": "Implement calculateTokens method using appropriate tokenizer (tiktoken or claude-tokenizer if available). Implement getName method returning 'anthropic' string constant. Implement getModel method returning current model from configuration. Implement getDefaultModel method returning 'claude-3-sonnet-20240229'. Add any additional helper methods for token counting or message formatting.",
"status": "pending",
"testStrategy": "Test token calculation accuracy with various input strings, verify utility methods return correct values"
},
{
"id": 5,
"title": "Add comprehensive error handling and type exports",
"description": "Implement robust error handling throughout the class and ensure proper TypeScript exports",
"dependencies": [
"120.4"
],
"details": "Wrap all Anthropic API calls in try-catch blocks. Create custom error messages that include context about the operation being performed. Handle rate limiting errors specifically. Ensure all methods have proper TypeScript return types. Export the AnthropicProvider class as default export. Add JSDoc comments for all public methods. Ensure proper error propagation maintaining stack traces.",
"status": "pending",
"testStrategy": "Test various API error scenarios, verify error messages include context, test rate limit handling, ensure TypeScript types are correctly exported"
}
]
},
{
"id": 121,
"title": "Create Prompt Builder and Task Parser",
"description": "Implement PromptBuilder class and TaskParser with Dependency Injection",
"details": "Create PromptBuilder class with buildParsePrompt and buildExpandPrompt methods using template literals. Include specific JSON format instructions. Create TaskParser class accepting IAIProvider and IConfiguration via constructor (Dependency Injection). Implement parsePRD method to read PRD file, use PromptBuilder to create prompt, call AI provider, extract tasks from response, and enrich with metadata. Handle parsing errors gracefully.",
"testStrategy": "Unit test prompt building with various inputs, mock AI provider responses, test JSON extraction logic, verify error handling for malformed responses, integration test with real PRD files",
"priority": "high",
"dependencies": [
119,
120
],
"status": "pending",
"subtasks": [
{
"id": 1,
"title": "Create PromptBuilder Class Structure",
"description": "Implement the PromptBuilder class with template methods for generating AI prompts",
"dependencies": [],
"details": "Create src/services/prompt-builder.ts. Define PromptBuilder class with two public methods: buildParsePrompt(prdContent: string): string and buildExpandPrompt(task: Task): string. Use template literals to construct prompts with clear JSON format instructions. Include system instructions for AI to follow specific output formats. Add private helper methods for common prompt sections like JSON schema definitions and response format examples.\n<info added on 2025-08-20T17:17:31.467Z>\nRefactor to src/services/prompts/prompt-builder.ts to separate concerns. Implement buildTaskPrompt() method. Define prompt template constants: PARSE_PROMPT_TEMPLATE, EXPAND_PROMPT_TEMPLATE, TASK_PROMPT_TEMPLATE, JSON_FORMAT_INSTRUCTIONS. Move JSON schema definitions and format instructions to constants. Ensure each template uses template literals with ${} placeholders. Keep all methods under 40 lines by extracting logic into focused helper methods. Use descriptive constant names for all repeated strings or instruction blocks.\n</info added on 2025-08-20T17:17:31.467Z>",
"status": "pending",
"testStrategy": "Unit test both prompt methods with sample inputs. Verify prompt contains required JSON structure instructions. Test edge cases like empty PRD content or minimal task objects."
},
{
"id": 2,
"title": "Implement TaskParser Class with DI",
"description": "Create TaskParser class accepting IAIProvider and IConfiguration through constructor injection",
"dependencies": [
"121.1"
],
"details": "Create src/services/task-parser.ts. Define TaskParser class with constructor(private aiProvider: IAIProvider, private config: IConfiguration). Add private promptBuilder property initialized in constructor. Implement basic class structure with placeholder methods. Ensure proper TypeScript typing for all parameters and properties. Follow dependency injection pattern for testability.\n<info added on 2025-08-20T17:17:49.624Z>\nUpdate file location to src/services/tasks/task-parser.ts instead of src/services/task-parser.ts. Refactor parsePRD() method to stay under 40 lines by extracting logic into helper methods: readPRD(), validatePRD(), extractTasksFromResponse(), and enrichTasksWithMetadata(). Each helper method should be under 20 lines. Implement early returns in validation methods for cleaner code flow. Remove any file I/O operations from the parser class - delegate all storage operations to injected dependencies. Ensure clean separation of concerns with parser focused only on task parsing logic.\n</info added on 2025-08-20T17:17:49.624Z>",
"status": "pending",
"testStrategy": "Test constructor properly stores injected dependencies. Verify class instantiation with mock providers. Test TypeScript compilation with proper interface implementations."
},
{
"id": 3,
"title": "Implement parsePRD Method Core Logic",
"description": "Create the main parsePRD method that orchestrates the PRD parsing workflow",
"dependencies": [
"121.2"
],
"details": "Implement parsePRD(filePath: string): Promise<ParsedTask[]> method in TaskParser. Read PRD file using fs.promises.readFile. Use promptBuilder.buildParsePrompt() to create AI prompt. Call aiProvider.generateResponse() with constructed prompt. Extract JSON array from AI response using regex or JSON.parse. Handle potential parsing errors with try-catch blocks. Return empty array on errors after logging.",
"status": "pending",
"testStrategy": "Test with mock AI provider returning valid JSON. Test file reading with various file paths. Mock file system for controlled testing. Verify proper error logging without throwing."
},
{
"id": 4,
"title": "Add Task Enrichment and Metadata",
"description": "Enhance parsed tasks with additional metadata and validation",
"dependencies": [
"121.3"
],
"details": "After extracting tasks from AI response, enrich each task with metadata: add createdAt timestamp, set initial status to 'pending', validate required fields (id, title, description). Add priority field with default 'medium' if not provided. Ensure all tasks have valid structure before returning. Create private enrichTask(task: any): ParsedTask method for this logic. Handle missing or malformed task data gracefully.",
"status": "pending",
"testStrategy": "Test enrichment adds all required metadata. Verify validation catches malformed tasks. Test default values are applied correctly. Ensure timestamps are properly formatted."
},
{
"id": 5,
"title": "Implement Comprehensive Error Handling",
"description": "Add robust error handling throughout the TaskParser implementation",
"dependencies": [
"121.4"
],
"details": "Wrap file reading in try-catch to handle FILE_NOT_FOUND errors. Catch AI provider errors and wrap in appropriate TaskMasterError. Handle JSON parsing errors when extracting from AI response. Add specific error handling for network timeouts, rate limits, and malformed responses. Log errors with context in development mode only. Return meaningful error messages without exposing internals. Ensure all errors are properly typed as TaskMasterError instances.",
"status": "pending",
"testStrategy": "Test each error scenario separately: missing files, AI provider failures, malformed JSON, network errors. Verify proper error codes are used. Test that errors don't expose sensitive information."
}
]
},
{
"id": 122,
"title": "Implement Configuration Management",
"description": "Create ConfigManager class with Zod validation for configuration",
"details": "Implement ConfigManager in config/config-manager.ts accepting Partial<IConfiguration> in constructor. Use Zod to create validation schema matching IConfiguration interface. Implement get method with TypeScript generics for type-safe access, getAll returning full config, validate method for validation. Set defaults: projectPath = process.cwd(), aiProvider = 'anthropic', enableTags = true. Handle validation errors with clear messages.",
"testStrategy": "Test with various configuration combinations, verify Zod validation catches invalid configs, test default values, ensure type safety of get method",
"priority": "medium",
"dependencies": [
116
],
"status": "in-progress",
"subtasks": [
{
"id": 1,
"title": "Create Zod validation schema for IConfiguration",
"description": "Define a Zod schema that matches the IConfiguration interface structure with proper validation rules",
"dependencies": [],
"details": "Create configSchema in config/config-manager.ts using z.object() to define validation for all IConfiguration properties. Include string validations for projectPath, enum validation for aiProvider ('anthropic', 'openai', etc.), boolean for enableTags, and any other configuration fields. Use z.string().min(1) for required strings, z.enum() for provider types, and appropriate validators for other fields.\n<info added on 2025-08-06T13:14:58.822Z>\nCompleted Zod validation schema implementation in packages/tm-core/src/config/validation.ts\n\nIMPLEMENTATION DETAILS:\n- Created comprehensive Zod schemas matching IConfiguration interface structure exactly\n- All required schemas exported as expected by config-schema.ts:\n * configurationSchema - Main configuration validation with custom refinements\n * partialConfigurationSchema - For partial updates (using base schema without refinements)\n * modelConfigSchema - Model configuration validation\n * providerConfigSchema - AI provider configuration validation \n * taskSettingsSchema - Task management settings validation\n * loggingSettingsSchema/loggingConfigSchema - Logging configuration (with legacy alias)\n * tagSettingsSchema - Tag management settings validation\n * storageSettingsSchema - Storage configuration validation\n * retrySettingsSchema - Retry/resilience settings validation\n * securitySettingsSchema - Security settings validation\n * cacheConfigSchema - Cache configuration stub (for consistency)\n\nKEY FEATURES:\n- Proper Zod validation rules applied (string lengths, number ranges, enums)\n- Custom refinements for business logic (maxRetryDelay >= retryDelay)\n- Comprehensive enum schemas for all union types\n- Legacy alias support for backwards compatibility\n- All 13 nested interface schemas implemented with appropriate constraints\n- Type exports for runtime validation\n\nVALIDATION INCLUDES:\n- String validations with min lengths for required fields\n- Enum validation for providers, priorities, 
complexities, log levels, etc.\n- Number range validations (min/max constraints)\n- URL validation for baseUrl fields\n- Array validations with proper item types\n- Record validations for dynamic key-value pairs\n- Optional field handling with appropriate defaults\n\nTESTED AND VERIFIED:\n- All schemas compile correctly with TypeScript\n- Import/export chain works properly through config-schema.ts\n- Basic validation tests pass for key schemas\n- No conflicts with existing IConfiguration interface structure\n</info added on 2025-08-06T13:14:58.822Z>\n<info added on 2025-08-20T17:18:12.343Z>\nCreated ConfigManager class at src/config/config-manager.ts with the following implementation:\n\nSTRUCTURE:\n- DEFAULT_CONFIG constant defined with complete default values for all configuration properties\n- Constructor validates config using validate() method (follows Fail-Fast principle)\n- Constructor kept under 15 lines as required\n- Type-safe get<K>() method using TypeScript generics for accessing specific config properties\n- getAll() method returns complete validated configuration\n- validate() method extracted for configuration validation using Zod schema\n- mergeWithDefaults() helper extracted for merging partial config with defaults\n\nKEY IMPLEMENTATION DETAILS:\n- Imports configurationSchema from src/config/schemas/config.schema.ts\n- Uses z.infer<typeof configurationSchema> for type safety\n- Validates on construction with clear error messages\n- No nested ternaries used\n- Proper error handling with ConfigValidationError\n- Type-safe property access with keyof IConfiguration constraint\n\nMETHODS:\n- constructor(config?: Partial<IConfiguration>) - Validates and stores config\n- get<K extends keyof IConfiguration>(key: K): IConfiguration[K] - Type-safe getter\n- getAll(): IConfiguration - Returns full config\n- private validate(config: unknown): IConfiguration - Validates using Zod\n- private mergeWithDefaults(config: Partial<IConfiguration>): IConfiguration - 
Merges with defaults\n\nAlso created src/config/schemas/config.schema.ts importing the configurationSchema from validation.ts for cleaner organization.\n</info added on 2025-08-20T17:18:12.343Z>",
"status": "review",
"testStrategy": "Test schema validation with valid and invalid configurations, ensure all IConfiguration fields are covered"
},
{
"id": 2,
"title": "Implement ConfigManager class constructor and storage",
"description": "Create ConfigManager class with constructor that accepts Partial<IConfiguration> and initializes configuration with defaults",
"dependencies": [
"122.1"
],
"details": "Define ConfigManager class with private config property. In constructor, merge provided partial config with defaults (projectPath = process.cwd(), aiProvider = 'anthropic', enableTags = true). Store the merged configuration internally. Ensure the class is properly typed with IConfiguration interface.",
"status": "pending",
"testStrategy": "Test constructor with various partial configs, verify defaults are applied correctly, test with empty config"
},
{
"id": 3,
"title": "Implement validate method with error handling",
"description": "Create validate method that uses Zod schema to validate configuration and provides clear error messages",
"dependencies": [
"122.1",
"122.2"
],
"details": "Implement validate(): void method that runs configSchema.parse(this.config) within try-catch block. On ZodError, transform the error into user-friendly messages that clearly indicate which fields are invalid and why. Consider creating a custom error class for configuration validation errors. The method should throw if validation fails.",
"status": "pending",
"testStrategy": "Test with invalid configs to ensure proper error messages, verify all validation rules work correctly"
},
{
"id": 4,
"title": "Implement type-safe get method with generics",
"description": "Create generic get method for retrieving individual configuration values with TypeScript type inference",
"dependencies": [
"122.2"
],
"details": "Implement get<K extends keyof IConfiguration>(key: K): IConfiguration[K] method that returns the value for a specific configuration key. Use TypeScript generics and keyof operator to ensure type safety. The method should provide proper type inference so consumers get the correct type based on the key they request.",
"status": "pending",
"testStrategy": "Test type inference with different keys, verify TypeScript catches invalid keys at compile time"
},
{
"id": 5,
"title": "Implement getAll method and finalize class",
"description": "Create getAll method to return full configuration and ensure proper exports",
"dependencies": [
"122.2",
"122.3",
"122.4"
],
"details": "Implement getAll(): IConfiguration method that returns a deep copy of the entire configuration object to prevent external mutations. Add JSDoc comments to all public methods. Export the ConfigManager class and ensure it's properly integrated with the module structure. Consider adding a static factory method if needed.",
"status": "pending",
"testStrategy": "Test getAll returns complete config, verify returned object is immutable, test integration with other modules"
}
]
},
{
"id": 123,
"title": "Create Utility Functions and Error Handling",
"description": "Implement ID generation utilities and custom error classes",
"details": "Create id-generator.ts with generateTaskId and generateSubtaskId functions using specified formats with timestamp and random components. Create TaskMasterError class extending Error in errors/task-master-error.ts with error codes (FILE_NOT_FOUND, PARSE_ERROR, etc.). Ensure errors don't expose internal details. Add development-only logging.",
"testStrategy": "Test ID generation for uniqueness and format compliance, verify error classes properly extend Error, test error message formatting, ensure no sensitive data in errors",
"priority": "low",
"dependencies": [
116
],
"status": "in-progress",
"subtasks": [
{
"id": 1,
"title": "Create ID generation utilities module",
"description": "Implement the id-generator.ts module with functions for generating unique task and subtask IDs",
"dependencies": [],
"details": "Create src/utils/id-generator.ts file. Implement generateTaskId() function that returns format 'TASK-{timestamp}-{random}' where timestamp is Date.now() and random is 4-character alphanumeric string. Implement generateSubtaskId(parentId) function that returns format '{parentId}.{sequential}' where sequential increments based on existing subtasks. Use crypto.randomBytes or Math.random for randomness. Export both functions as named exports.\n<info added on 2025-08-06T12:42:22.203Z>\nThe ID generator module has been successfully implemented with the following completed features:\n- generateTaskId() function that creates unique IDs in 'TASK-{timestamp}-{random}' format\n- generateSubtaskId() function that generates sequential subtask IDs in '{parentId}.{sequential}' format\n- Input validation functions to ensure ID integrity\n- Proper TypeScript type definitions and interfaces\n- Comprehensive JSDoc documentation with usage examples\n- All functions exported as named exports from src/utils/id-generator.ts\n</info added on 2025-08-06T12:42:22.203Z>",
"status": "done",
"testStrategy": "Test uniqueness by generating 1000 IDs and checking for duplicates, verify format compliance with regex, test subtask ID sequential numbering"
},
{
"id": 2,
"title": "Create base error class structure",
"description": "Implement the TaskMasterError base class that extends Error with proper error handling capabilities",
"dependencies": [],
"details": "Create src/errors/task-master-error.ts file. Define TaskMasterError class extending Error. Add constructor accepting (message: string, code: string, details?: any). Set this.name = 'TaskMasterError'. Create error code constants: FILE_NOT_FOUND = 'FILE_NOT_FOUND', PARSE_ERROR = 'PARSE_ERROR', VALIDATION_ERROR = 'VALIDATION_ERROR', API_ERROR = 'API_ERROR'. Override toString() to format errors appropriately. Ensure stack trace is preserved.\n<info added on 2025-08-06T13:13:11.635Z>\nCompleted TaskMasterError base class implementation:\n\nIMPLEMENTATION DETAILS:\n- TaskMasterError class fully implemented extending Error\n- Added proper prototype chain fix with Object.setPrototypeOf(this, TaskMasterError.prototype)\n- Includes all required properties: code (from ERROR_CODES), timestamp, context, cause\n- toJSON method implemented for full serialization support\n- Error sanitization implemented via getSanitizedDetails() and containsSensitiveInfo() methods\n- Error chaining with cause property fully supported\n- Additional utility methods: getUserMessage(), toString(), is(), hasCode(), withContext(), wrap()\n\nSUCCESS CRITERIA VERIFIED:\n✅ TaskMasterError class fully implemented\n✅ Extends Error with proper prototype chain fix (Object.setPrototypeOf)\n✅ Includes all required properties and methods\n✅ toJSON method for serialization\n✅ Error sanitization logic for production (containsSensitiveInfo method)\n✅ Comprehensive error context and metadata support\n\nFILE MODIFIED: packages/tm-core/src/errors/task-master-error.ts\n</info added on 2025-08-06T13:13:11.635Z>\n<info added on 2025-08-20T17:18:38.499Z>\nRefactored to follow clean code principles:\n\nCLEAN CODE IMPROVEMENTS:\n- Moved TaskMasterError class to be under 40 lines by extracting methods\n- Created separate error-codes.ts file with ERROR_CODES constant object\n- Extracted sanitizeMessage() method to handle message sanitization\n- Extracted addContext() method for adding error context\n- Extracted 
toJSON() method for serialization\n- Added static factory methods: fromError(), notFound(), parseError(), validationError(), apiError()\n- Improved error chaining with proper 'cause' property handling\n- Ensured user-friendly messages that hide implementation details\n- Maintained all existing functionality while improving code organization\n\nFILES CREATED/MODIFIED:\n- packages/tm-core/src/errors/error-codes.ts (new file with ERROR_CODES)\n- packages/tm-core/src/errors/task-master-error.ts (refactored to under 40 lines)\n</info added on 2025-08-20T17:18:38.499Z>",
"status": "review",
"testStrategy": "Test that error extends Error properly, verify error.name is set correctly, test toString() output format, ensure stack trace exists"
},
{
"id": 3,
"title": "Implement error sanitization and security features",
"description": "Add security features to prevent exposure of sensitive internal details in error messages",
"dependencies": [
"123.2"
],
"details": "In TaskMasterError class, add private sanitizeDetails() method that removes sensitive data like API keys, file paths beyond project root, and internal state. Implement toJSON() method that returns sanitized error object for external consumption. Add static isSafeForProduction() method to validate error messages don't contain patterns like absolute paths, environment variables, or API credentials. Store original details in private property for debugging.",
"status": "pending",
"testStrategy": "Test sanitization removes absolute paths, API keys, and sensitive patterns, verify toJSON returns safe object, test original details are preserved internally"
},
{
"id": 4,
"title": "Add development-only logging functionality",
"description": "Implement conditional logging that only operates in development environment",
"dependencies": [
"123.2",
"123.3"
],
"details": "In task-master-error.ts, add static enableDevLogging property defaulting to process.env.NODE_ENV !== 'production'. Add logError() method that console.error's full error details only when enableDevLogging is true. Include timestamp, error code, sanitized message, and full stack trace in dev logs. In production, log only error code and safe message. Create static setDevLogging(enabled: boolean) to control logging.",
"status": "pending",
"testStrategy": "Test logging output in dev vs production modes, verify sensitive data isn't logged in production, test log format includes all required fields"
},
{
"id": 5,
"title": "Create specialized error subclasses",
"description": "Implement specific error classes for different error scenarios inheriting from TaskMasterError",
"dependencies": [
"123.2",
"123.3",
"123.4"
],
"details": "Create FileNotFoundError extending TaskMasterError with code FILE_NOT_FOUND, accepting filePath parameter. Create ParseError with code PARSE_ERROR for parsing failures, accepting source and line number. Create ValidationError with code VALIDATION_ERROR for data validation, accepting field and value. Create APIError with code API_ERROR for external API failures, accepting statusCode and provider. Each should format appropriate user-friendly messages while storing technical details internally.",
"status": "pending",
"testStrategy": "Test each error class constructor and message formatting, verify inheritance chain, test that each error type has correct code, ensure specialized errors work with logging system"
}
]
},
{
"id": 124,
"title": "Implement TaskMasterCore Facade",
"description": "Create main TaskMasterCore class as Facade pattern entry point",
"details": "Create TaskMasterCore class in index.ts with private properties for config, storage, aiProvider, and parser. Implement initialize method for lazy loading of AI provider. Implement parsePRD method that coordinates parser, storage, and configuration. Implement getTasks for retrieving stored tasks. Apply Facade pattern to hide complexity. Export createTaskMaster factory function, all types and interfaces. Use proper import paths with .js extensions for ESM.",
"testStrategy": "Integration test full parse flow, test lazy initialization, verify facade properly delegates to subsystems, test with different configurations, ensure exports are correct",
"priority": "high",
"dependencies": [
117,
121,
122
],
"status": "pending",
"subtasks": [
{
"id": 1,
"title": "Create TaskMasterCore class structure with type definitions",
"description": "Set up the main TaskMasterCore class in src/index.ts with all necessary imports, type definitions, and class structure following the Facade pattern",
"dependencies": [],
"details": "Create src/index.ts file. Import IConfiguration, ITaskStorage, IAIProvider, and IPRDParser interfaces. Define TaskMasterCore class with private properties: _config (ConfigManager), _storage (ITaskStorage), _aiProvider (IAIProvider | null), _parser (IPRDParser | null). Add constructor accepting options parameter of type Partial<IConfiguration>. Initialize _config with ConfigManager, set other properties to null for lazy loading. Import all necessary types from their respective modules using .js extensions for ESM compatibility.\n<info added on 2025-08-20T17:18:56.625Z>\nApply Facade pattern principles: simple public interface, hide subsystem complexity. Keep all methods under 30 lines by extracting logic. Implement lazy initialization pattern in initialize() method - only create dependencies when first needed. Extract createDependencies() private helper method to handle creation of storage, AI provider, and parser instances. Add createTaskMaster() factory function for convenient instance creation. Use barrel exports pattern - export all public types and interfaces that clients need (IConfiguration, ITaskStorage, IAIProvider, IPRDParser, TaskMasterCore). Follow Interface Segregation Principle - only expose methods and types that clients actually need, hide internal implementation details.\n</info added on 2025-08-20T17:18:56.625Z>",
"status": "pending",
"testStrategy": "Test class instantiation with various configuration options, verify private properties are correctly initialized, ensure TypeScript types are properly enforced"
},
{
"id": 2,
"title": "Implement initialize method for lazy loading",
"description": "Create the initialize method that handles lazy loading of AI provider and parser instances based on configuration",
"dependencies": [
"124.1"
],
"details": "Implement async initialize() method in TaskMasterCore. Check if _aiProvider is null, if so create appropriate provider based on config.aiProvider value using a factory pattern or switch statement. Similarly initialize _parser if null. Store instances in private properties for reuse. Handle provider initialization errors gracefully. Ensure method is idempotent - calling multiple times should not recreate instances. Use dynamic imports if needed for code splitting.",
"status": "pending",
"testStrategy": "Test lazy initialization occurs only once, verify correct provider is instantiated based on config, test error handling for invalid providers, ensure idempotency"
},
{
"id": 3,
"title": "Implement parsePRD method with coordination logic",
"description": "Create parsePRD method that coordinates the parser, AI provider, and storage to parse PRD content and store results",
"dependencies": [
"124.1",
"124.2"
],
"details": "Implement async parsePRD(content: string) method. First call initialize() to ensure components are loaded. Use _parser.parse() to parse the PRD content, passing the AI provider for task generation. Take the parsed tasks and use _storage.saveTasks() to persist them. Handle errors from parser or storage operations. Return the parsed tasks array. Implement proper error context and logging for debugging.",
"status": "pending",
"testStrategy": "Integration test with mock parser and storage, verify coordination between components, test error propagation from subsystems, ensure tasks are properly stored"
},
{
"id": 4,
"title": "Implement getTasks method and other facade methods",
"description": "Create getTasks method and any other necessary facade methods to retrieve and manage tasks",
"dependencies": [
"124.1"
],
"details": "Implement async getTasks() method that calls _storage.loadTasks() and returns the tasks array. Add getTask(id: string) for retrieving single task. Consider adding updateTask, deleteTask methods if needed. All methods should follow facade pattern - simple interface hiding complex operations. Add proper TypeScript return types for all methods. Handle storage not initialized scenarios.",
"status": "pending",
"testStrategy": "Test task retrieval with various scenarios, verify proper delegation to storage, test edge cases like empty task lists or invalid IDs"
},
{
"id": 5,
"title": "Create factory function and module exports",
"description": "Implement createTaskMaster factory function and set up all module exports including types and interfaces",
"dependencies": [
"124.1",
"124.2",
"124.3",
"124.4"
],
"details": "Create createTaskMaster(options?: Partial<IConfiguration>) factory function that returns a new TaskMasterCore instance. Export this as the primary entry point. Re-export all types and interfaces from submodules: ITask, IConfiguration, IAIProvider, ITaskStorage, IPRDParser, etc. Use 'export type' for type-only exports. Ensure all imports use .js extensions for ESM. Create index.d.ts if needed for better TypeScript support. Add JSDoc comments for public API.",
"status": "pending",
"testStrategy": "Test factory function creates proper instances, verify all exports are accessible, test TypeScript type inference works correctly, ensure ESM imports resolve properly"
}
]
},
{
"id": 125,
"title": "Create Placeholder Providers and Complete Testing",
"description": "Implement placeholder providers for OpenAI and Google, create comprehensive test suite",
"details": "Create OpenAIProvider and GoogleProvider classes extending BaseProvider, throwing 'not yet implemented' errors. Create MockProvider in tests/mocks for testing without API calls. Write unit tests for TaskParser, integration tests for parse-prd flow, ensure 80% code coverage. Follow kebab-case naming for test files. Test error scenarios comprehensively.",
"testStrategy": "Run full test suite with coverage report, verify all edge cases are tested, ensure mock provider behaves like real providers, test both success and failure paths",
"priority": "medium",
"dependencies": [
120,
124
],
"status": "pending",
"subtasks": [
{
"id": 1,
"title": "Create OpenAIProvider placeholder class",
"description": "Implement OpenAIProvider class that extends BaseProvider with all required methods throwing 'not yet implemented' errors",
"dependencies": [],
"details": "Create src/providers/openai-provider.ts file. Import BaseProvider from base-provider.ts. Implement class OpenAIProvider extends BaseProvider. Override parseText() method to throw new Error('OpenAI provider not yet implemented'). Add proper TypeScript types and JSDoc comments. Export the class as default.",
"status": "pending",
"testStrategy": "Write unit test to verify OpenAIProvider extends BaseProvider correctly and throws expected error when parseText is called"
},
{
"id": 2,
"title": "Create GoogleProvider placeholder class",
"description": "Implement GoogleProvider class that extends BaseProvider with all required methods throwing 'not yet implemented' errors",
"dependencies": [],
"details": "Create src/providers/google-provider.ts file. Import BaseProvider from base-provider.ts. Implement class GoogleProvider extends BaseProvider. Override parseText() method to throw new Error('Google provider not yet implemented'). Add proper TypeScript types and JSDoc comments. Export the class as default.",
"status": "pending",
"testStrategy": "Write unit test to verify GoogleProvider extends BaseProvider correctly and throws expected error when parseText is called"
},
{
"id": 3,
"title": "Create MockProvider for testing",
"description": "Implement MockProvider class in tests/mocks directory that simulates provider behavior without making actual API calls",
"dependencies": [],
"details": "Create tests/mocks/mock-provider.ts file. Extend BaseProvider class. Implement parseText() to return predefined mock task data based on input. Add methods to configure mock responses, simulate errors, and track method calls. Include delay simulation for realistic testing. Export class and helper functions for test setup.",
"status": "pending",
"testStrategy": "Test MockProvider returns consistent mock data, can simulate different scenarios (success/failure), and properly tracks method invocations"
},
{
"id": 4,
"title": "Write unit tests for TaskParser",
"description": "Create comprehensive unit tests for TaskParser class covering all methods and edge cases",
"dependencies": [
"125.3"
],
"details": "Create tests/unit/task-parser.test.ts file. Test TaskParser constructor with different providers. Test parseFromText method with valid/invalid inputs. Test error handling for malformed responses. Use MockProvider to simulate API responses. Test task ID generation and structure validation. Ensure all public methods are covered.",
"status": "pending",
"testStrategy": "Achieve 100% code coverage for TaskParser class, test both success and failure paths, verify error messages are appropriate"
},
{
"id": 5,
"title": "Write integration tests for parse-prd flow",
"description": "Create end-to-end integration tests for the complete PRD parsing workflow",
"dependencies": [
"125.3",
"125.4"
],
"details": "Create tests/integration/parse-prd-flow.test.ts file. Test full flow from PRD input to task output. Test with MockProvider simulating successful parsing. Test error scenarios (file not found, parse errors, network failures). Test task dependency resolution. Verify output format matches expected structure. Test with different PRD formats and sizes.",
"status": "pending",
"testStrategy": "Run coverage report to ensure 80% overall coverage, verify all critical paths are tested, ensure tests are deterministic and don't depend on external services"
}
]
}
],
"metadata": {
"created": "2025-08-06T08:51:19.649Z",
"updated": "2025-08-20T21:32:21.837Z",
"description": "Tasks for tm-core-phase-1 context"
}
🧹 Nitpick (assertive)

⚠️ Potential issue

Do not manually edit Taskmaster-generated files — revert this hunk and regenerate via Taskmaster commands.

Per repository guidelines, .taskmaster/tasks/tasks.json must be modified only through Taskmaster CLI/automation. Hand-editing risks schema drift and tooling breakage. Please revert these changes and reapply them using Taskmaster commands (e.g., add-tag/add-task/update-subtask/import flows).

I can provide an exact sequence of Taskmaster CLI commands to recreate “tm-core-phase-1” with its tasks/subtasks if you confirm the desired fields.

Comment on lines +7304 to +7305
"id": 115,
"title": "Initialize tm-core Package Structure",

💡 Verification agent

🧩 Analysis chain

Potential ID type inconsistency with core types (Task.id likely string).

Core type definitions indicate Task.id should be a string (e.g., "TASK-..."), while these entries use numeric IDs. If the codebase has migrated to string IDs (see id-generator utilities), these numeric IDs will be inconsistent.

Run to assess current repo usage and decide whether to migrate these to string IDs:

If the project standard is string IDs, regenerate this tag’s tasks via Taskmaster so IDs follow the expected format (don’t hand-convert).

Also applies to: 7373-7375, 7441-7444, 7508-7511, 7575-7577, 7642-7645, 7709-7712, 7777-7780, 7847-7850, 7915-7918, 7988-7991


🏁 Script executed:

#!/bin/bash
# Inspect current IDs in tasks.json across tags
jq -r '
  to_entries[] | select(.key!="metadata")
  | .key as $tag
  | .value.tasks[] | [$tag, ( .id|type ), .id] | @tsv
' .taskmaster/tasks/tasks.json | column -t

Length of output: 264


🏁 Script executed:

#!/bin/bash
set -euo pipefail

FILE=".taskmaster/tasks/tasks.json"
if [[ ! -f "$FILE" ]]; then
  echo "ERROR: $FILE not found"
  exit 0
fi

python3 - <<'PY'
import json,os,re,sys

file = ".taskmaster/tasks/tasks.json"
with open(file, "r", encoding="utf-8") as f:
    data = json.load(f)

total_tasks = 0
total_counts = {"number":0,"string":0,"other":0}
per_tag = {}
numeric_samples = []

for tag, content in data.items():
    if tag == "metadata":
        continue
    tasks = content.get("tasks", []) or []
    counts = {"number":0,"string":0,"other":0}
    for t in tasks:
        total_tasks += 1
        tid = t.get("id", None)
        if isinstance(tid, int):
            typ = "number"
        elif isinstance(tid, str):
            typ = "string"
        else:
            typ = "other"
        counts[typ] += 1
        total_counts[typ] += 1
        if typ == "number" and len(numeric_samples) < 50:
            numeric_samples.append((tag, tid, t.get("title","")[:80]))
    per_tag[tag] = counts

print("PER-TAG ID COUNTS (tag, string, number, other):")
for tag, c in per_tag.items():
    print(f"{tag:36} {c['string']:6} {c['number']:6} {c['other']:6}")

print("\nTOTAL TASKS:", total_tasks)
print("TOTAL by id type: string={string} number={number} other={other}".format(**total_counts))

if numeric_samples:
    print("\nEXAMPLES of numeric IDs (up to 50):")
    for tag, tid, title in numeric_samples:
        print(f"{tag:36} {tid:<12} {title}")

# Search repository for Task type defs and id utilities
patterns = [
    ("interface Task", re.compile(r'\binterface\s+Task\b')),
    ("type Task", re.compile(r'\btype\s+Task\b')),
    ("class Task", re.compile(r'\bclass\s+Task\b')),
    ("id-generator/idGenerator/generateId", re.compile(r'id-generator|idGenerator|generateId')),
    ("TASK-", re.compile(r'TASK-')),
]

print("\nSEARCHING repo for Task type defs and id utilities (file:line:preview) -- limited to 200 hits")
hits=0
max_hits=200
exclude_dirs = {".git","node_modules","dist","build","venv",".venv","__pycache__"}
for root, dirs, files in os.walk("."):
    dirs[:] = [d for d in dirs if d not in exclude_dirs]
    for fname in files:
        if hits>=max_hits:
            break
        path = os.path.join(root, fname)
        # skip binary-like by extension
        if any(path.startswith(p) for p in ["./.git", "./node_modules", "./dist", "./build"]):
            continue
        try:
            with open(path, "r", encoding="utf-8", errors="ignore") as fh:
                for i,line in enumerate(fh,1):
                    for label,pat in patterns:
                        if pat.search(line):
                            preview = line.strip()
                            print(f"{path}:{i}:{label}: {preview[:200]}")
                            hits += 1
                            break
                    if hits>=max_hits:
                        break
        except Exception:
            continue
    if hits>=max_hits:
        break

if hits==0:
    print("No matches found for Task type defs or id utilities.")
PY

Length of output: 59


🏁 Script executed:

#!/bin/sh
# Safer, portable verification script for Task.id types and Task type/id utilities.
FILE=".taskmaster/tasks/tasks.json"
if [ ! -f "$FILE" ]; then
  echo "ERROR: $FILE not found"
  exit 0
fi

python3 - <<'PY'
import json,os,re,sys

file = ".taskmaster/tasks/tasks.json"
try:
    with open(file, "r", encoding="utf-8") as f:
        data = json.load(f)
except Exception as e:
    print("ERROR reading JSON:", e)
    sys.exit(0)

total_tasks = 0
total_counts = {"number":0,"string":0,"other":0}
per_tag = {}
numeric_samples = []

for tag, content in data.items():
    if tag == "metadata":
        continue
    tasks = content.get("tasks") or []
    counts = {"number":0,"string":0,"other":0}
    for t in tasks:
        total_tasks += 1
        tid = t.get("id", None)
        if isinstance(tid, bool) or tid is None:
            typ = "other"
        elif isinstance(tid, int):
            typ = "number"
        elif isinstance(tid, str):
            typ = "string"
        else:
            typ = "other"
        counts[typ] += 1
        total_counts[typ] += 1
        if typ == "number" and len(numeric_samples) < 50:
            numeric_samples.append((tag, tid, t.get("title","")[:120]))
    per_tag[tag] = counts

print("PER-TAG ID COUNTS (tag, string, number, other):")
for tag, c in sorted(per_tag.items()):
    print(f"{tag:60} {c['string']:6} {c['number']:6} {c['other']:6}")

print("\nTOTAL TASKS:", total_tasks)
print("TOTAL by id type: string={string} number={number} other={other}".format(**total_counts))

if numeric_samples:
    print("\nEXAMPLES of numeric IDs (up to 50):")
    for tag, tid, title in numeric_samples:
        print(f"{tag:60} {str(tid):<12} {title}")

# Search repo for Task type definitions and id utilities
patterns = [
    ("interface Task", re.compile(r'\binterface\s+Task\b')),
    ("export interface Task", re.compile(r'\bexport\s+interface\s+Task\b')),
    ("type Task", re.compile(r'\btype\s+Task\b')),
    ("class Task", re.compile(r'\bclass\s+Task\b')),
    ("Task.id", re.compile(r'\bTask\b.*\b(id)\b')),
    ("id: string (TS)", re.compile(r'\b(id)\s*:\s*string\b')),
    ("id-generator", re.compile(r'id[-_]?generator|generateId|generate-id|idGenerator')),
    ("TASK-", re.compile(r'TASK-')),
]

print("\nSEARCH HITS (file:line:label:preview) -- limited to 200:")
hits = 0
max_hits = 200
exclude_dirs = {".git","node_modules","dist","build","venv",".venv","__pycache__"}
for root, dirs, files in os.walk("."):
    dirs[:] = [d for d in dirs if d not in exclude_dirs]
    for fname in files:
        if hits >= max_hits:
            break
        path = os.path.join(root, fname)
        # skip some large/binary-looking files by extension
        if any(path.lower().endswith(ext) for ext in ('.png','.jpg','.jpeg','.gz','.zip','.exe','.dll','.so','.pdf')):
            continue
        try:
            with open(path, "r", encoding="utf-8", errors="ignore") as fh:
                for i,line in enumerate(fh,1):
                    for label,pat in patterns:
                        if pat.search(line):
                            preview = line.strip()
                            print(f"{path}:{i}:{label}: {preview[:300]}")
                            hits += 1
                            break
                    if hits >= max_hits:
                        break
        except Exception:
            continue
    if hits >= max_hits:
        break

if hits == 0:
    print("No matches found for Task type defs or id utilities.")
PY

Length of output: 29895


Replace numeric IDs in .taskmaster/tasks/tasks.json with canonical string IDs from the id-generator.

115 tasks currently use numeric IDs (master: 93, cc-kiro-hooks: 10, tm-core-phase-1: 11, test-tag: 1) while core types and utilities expect string IDs (see packages/tm-core/src/types/index.ts and packages/tm-core/src/utils/id-generator.ts). Regenerate the affected tag(s) via Taskmaster so IDs are produced by the id-generator (do not hand-convert). Inspect: .taskmaster/tasks/tasks.json; packages/tm-core/src/utils/id-generator.ts; packages/tm-core/src/types/index.ts; packages/tm-core/src/entities/task.entity.ts.

🤖 Prompt for AI Agents
In .taskmaster/tasks/tasks.json around lines 7304-7305 there are numeric task
IDs that must be replaced with canonical string IDs produced by the project's
id-generator; regenerate the affected task groups/tags via the Taskmaster
tooling (do not hand-convert numeric IDs), ensuring the new IDs match the format
in packages/tm-core/src/utils/id-generator.ts and conform to the types in
packages/tm-core/src/types/index.ts and
packages/tm-core/src/entities/task.entity.ts; run Taskmaster to produce and
commit the updated tasks.json so all previously numeric IDs (master,
cc-kiro-hooks, tm-core-phase-1, test-tag) are replaced with generated string
IDs.
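
For context, the string-ID format this comment refers to is the one described in subtask 123.1: 'TASK-{timestamp}-{random}' with a 4-character alphanumeric suffix, plus '{parentId}.{sequential}' for subtasks. A hedged sketch of that format is below; the canonical implementation lives in packages/tm-core/src/utils/id-generator.ts and may differ in details:

```typescript
import { randomBytes } from 'node:crypto';

// Sketch only: mirrors the format described in subtask 123.1, not the
// actual packages/tm-core/src/utils/id-generator.ts implementation.
function generateTaskId(): string {
	const random = randomBytes(2).toString('hex'); // 4 alphanumeric (hex) chars
	return `TASK-${Date.now()}-${random}`;
}

// '{parentId}.{sequential}': the caller supplies how many subtasks already exist.
function generateSubtaskId(parentId: string, existingCount: number): string {
	return `${parentId}.${existingCount + 1}`;
}
```

Numeric IDs such as `115` cannot satisfy a `Task.id: string` field in this format, which is why the review asks for regeneration via Taskmaster rather than hand-conversion.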

"json.format.enable": true,
"json.validate.enable": true
"json.validate.enable": true,
"typescript.tsdk": "node_modules/typescript/lib",

🧹 Nitpick (assertive)

Harden TS SDK path for multi-root/monorepo.

Use workspace variable to avoid breakage when opening a subfolder or using alternate root.

Apply this diff:

-  "typescript.tsdk": "node_modules/typescript/lib",
+  "typescript.tsdk": "${workspaceFolder}/node_modules/typescript/lib",
🤖 Prompt for AI Agents
In .vscode/settings.json around line 14, the TypeScript SDK path is hardcoded to
"node_modules/typescript/lib"; change it to use the workspace variable so it
works in multi-root/monorepo setups by replacing the value with
"${workspaceFolder}/node_modules/typescript/lib" (keep JSON quoting intact) so
the editor resolves the TS SDK relative to the opened workspace folder.

Comment on lines +15 to +26
"[typescript]": {
"editor.defaultFormatter": "biomejs.biome"
},
"[typescriptreact]": {
"editor.defaultFormatter": "biomejs.biome"
},
"[javascript]": {
"editor.defaultFormatter": "biomejs.biome"
},
"[json]": {
"editor.defaultFormatter": "biomejs.biome"
}

🧹 Nitpick (assertive)

Biome as default formatter: good; add missing React/JSONC language scopes.

Add JavaScript React and JSONC so tsx/jsx and tsconfig-like files are consistently formatted by Biome.

Apply this diff:

   "[typescript]": {
     "editor.defaultFormatter": "biomejs.biome"
   },
   "[typescriptreact]": {
     "editor.defaultFormatter": "biomejs.biome"
   },
   "[javascript]": {
     "editor.defaultFormatter": "biomejs.biome"
   },
+  "[javascriptreact]": {
+    "editor.defaultFormatter": "biomejs.biome"
+  },
   "[json]": {
     "editor.defaultFormatter": "biomejs.biome"
-  }
+  },
+  "[jsonc]": {
+    "editor.defaultFormatter": "biomejs.biome"
+  }
🤖 Prompt for AI Agents
.vscode/settings.json around lines 15 to 26: the workspace sets Biome as the
default formatter for typescript, typescriptreact, javascript and json but is
missing the JavaScript React and JSONC scopes; add entries for
"[javascriptreact]" and "[jsonc]" with "editor.defaultFormatter":
"biomejs.biome" so .jsx/.tsx and tsconfig-like JSONC files are formatted
consistently by Biome.

Comment on lines +64 to +67
export function getAssetPath(relativePath) {
const assetsDir = getAssetsDir();
return path.join(assetsDir, relativePath);
}

🛠️ Refactor suggestion

Prevent path traversal outside assets.

Use path.resolve and enforce that resolved paths stay within the assets dir.

-export function getAssetPath(relativePath) {
-  const assetsDir = getAssetsDir();
-  return path.join(assetsDir, relativePath);
-}
+export function getAssetPath(relativePath) {
+  const assetsDir = getAssetsDir();
+  const full = path.resolve(assetsDir, relativePath);
+  if (!full.startsWith(assetsDir + path.sep)) {
+    throw new Error(`Refusing to access outside assets: ${relativePath}`);
+  }
+  return full;
+}
🤖 Prompt for AI Agents
In src/utils/asset-resolver.js around lines 64 to 67, the function returns a
joined path allowing path traversal outside the assets directory; resolve the
assetsDir and the candidate with path.resolve, then verify the resolved
candidate is within the resolved assetsDir (e.g. compute
path.relative(resolvedAssetsDir, resolvedCandidate) and throw an error if it
starts with '..' or is absolute outside), and only return the resolvedCandidate
when the check passes to prevent escaping the assets directory.

Comment on lines +89 to +92
export function readAsset(relativePath, encoding = 'utf8') {
const assetPath = getAssetPath(relativePath);
return fs.readFileSync(assetPath, encoding);
}

🧹 Nitpick (assertive)

Minor: support Buffer reads cleanly and surface clearer errors.

Return Buffer when encoding is falsy; wrap read errors for context.

-export function readAsset(relativePath, encoding = 'utf8') {
-  const assetPath = getAssetPath(relativePath);
-  return fs.readFileSync(assetPath, encoding);
-}
+export function readAsset(relativePath, encoding = 'utf8') {
+  const assetPath = getAssetPath(relativePath);
+  try {
+    return encoding ? fs.readFileSync(assetPath, encoding) : fs.readFileSync(assetPath);
+  } catch (e) {
+    throw new Error(`Failed to read asset "${relativePath}" at ${assetPath}: ${e.message}`);
+  }
+}
🤖 Prompt for AI Agents
In src/utils/asset-resolver.js around lines 89 to 92, the readAsset function
always reads as UTF-8 and doesn't handle read errors with context; change it to
return a Buffer when encoding is falsy (pass null to fs.readFileSync) and wrap
the fs.readFileSync call in a try/catch that throws a new error including the
assetPath and original error (use the Error cause if available) so callers get
clear context on failures.

Comment on lines +212 to 214
const assetsDir = getAssetsDir();
profile.onAddRulesProfile(projectRoot, assetsDir);
log(

🧹 Nitpick (assertive)

Avoid repeated FS scans for assetsDir (cache once).

getAssetsDir() likely hits the filesystem; cache it per invocation and pass to hooks to reduce overhead.

-  if (typeof profile.onAddRulesProfile === 'function') {
+  const assetsDir = (() => {
+    try { return getAssetsDir(); } catch { return null; }
+  })();
+  if (typeof profile.onAddRulesProfile === 'function' && assetsDir) {
-    const assetsDir = getAssetsDir();
     profile.onAddRulesProfile(projectRoot, assetsDir);
@@
-  if (typeof profile.onPostConvertRulesProfile === 'function') {
-    try {
-      const assetsDir = getAssetsDir();
+  if (typeof profile.onPostConvertRulesProfile === 'function' && assetsDir) {
+    try {
       profile.onPostConvertRulesProfile(projectRoot, assetsDir);

Also applies to: 307-309

🤖 Prompt for AI Agents
In src/utils/rule-transformer.js around lines 212-214 and 307-309,
getAssetsDir() is being called multiple times causing repeated filesystem scans;
cache the result once in a local variable (e.g., const assetsDir =
getAssetsDir()) in the surrounding function scope and replace the duplicate
getAssetsDir() calls with that variable when calling profile.onAddRulesProfile
and any other hook invocations to reuse the same path and avoid repeated FS
hits.

Comment on lines +10 to 17
test('package.json includes dist/** in the "files" array for bundled files', () => {
// Read the package.json file
const packageJsonPath = path.join(process.cwd(), 'package.json');
const packageJson = JSON.parse(fs.readFileSync(packageJsonPath, 'utf8'));

// Check if assets/** is included in the files array (which contains Roo files)
expect(packageJson.files).toContain('assets/**');
// Check if dist/** is included in the files array (which contains bundled output including Roo files)
expect(packageJson.files).toContain('dist/**');
});

🧹 Nitpick (assertive)

Guard against missing "files" in package.json to avoid TypeError.

Add an existence check before toContain for clearer failures.

-    // Check if dist/** is included in the files array (which contains bundled output including Roo files)
-    expect(packageJson.files).toContain('dist/**');
+    // Check if dist/** is included in the files array (which contains bundled output including Roo files)
+    expect(Array.isArray(packageJson.files)).toBe(true);
+    expect(packageJson.files).toContain('dist/**');
🤖 Prompt for AI Agents
In tests/integration/profiles/roo-files-inclusion.test.js around lines 10 to 17,
the test assumes packageJson.files exists and will throw a TypeError if it's
missing; update the test to first assert that packageJson.files is defined and
an array (e.g., using expect(Array.isArray(packageJson.files)).toBe(true) or
expect(packageJson).toHaveProperty('files')), then proceed to check that it
contains 'dist/**' so failures are clear and do not raise an exception.

Comment on lines +10 to 17
test('package.json includes dist/** in the "files" array for bundled files', () => {
// Read the package.json file
const packageJsonPath = path.join(process.cwd(), 'package.json');
const packageJson = JSON.parse(fs.readFileSync(packageJsonPath, 'utf8'));

// Check if assets/** is included in the files array (which contains rules files)
expect(packageJson.files).toContain('assets/**');
// Check if dist/** is included in the files array (which contains bundled output including assets)
expect(packageJson.files).toContain('dist/**');
});

🧹 Nitpick (assertive)

Guard against missing "files" in package.json to avoid TypeError.

Mirror the defensive check here as well.

-    // Check if dist/** is included in the files array (which contains bundled output including assets)
-    expect(packageJson.files).toContain('dist/**');
+    // Check if dist/** is included in the files array (which contains bundled output including assets)
+    expect(Array.isArray(packageJson.files)).toBe(true);
+    expect(packageJson.files).toContain('dist/**');
🤖 Prompt for AI Agents
In tests/integration/profiles/rules-files-inclusion.test.js around lines 10 to
17, the test assumes packageJson.files exists and will throw a TypeError when
it's missing; update the test to defensively check that packageJson.files is
present and is an array (e.g., assert Array.isArray(packageJson.files) with a
clear failure message) before asserting it contains 'dist/**', or fall back to
treating missing files as an empty array and assert accordingly so the test
fails cleanly instead of throwing.


@coderabbitai coderabbitai bot left a comment


Review continued from previous batch...

{
  "currentTag": "master",
-  "lastSwitched": "2025-08-01T14:09:25.838Z",
+  "lastSwitched": "2025-08-27T21:03:20.550Z",

🧹 Nitpick (assertive)

⚠️ Potential issue

Do not commit .taskmaster/state.json; revert this change and exclude from PR.

Per project guidelines, this file is system-managed. Please drop it from the PR and ignore future changes (e.g., add to .gitignore or mark assume-unchanged).

Want a quick git snippet to remove it from the PR and prevent future diffs?

🤖 Prompt for AI Agents
.taskmaster/state.json around line 3: this system-managed file was accidentally
committed; revert the change and remove it from the PR, then prevent future
diffs by adding .taskmaster/state.json to the repository .gitignore (or marking
the file assume-unchanged locally) and amend the commit/branch so the file is no
longer part of the changeset before merging.

Comment on lines 13 to 37
"scripts": {
"build": "npm run build:packages && tsup",
"dev": "npm run build:packages && npm link && (npm run dev:packages & tsup --watch --onSuccess 'echo Build complete && npm link')",
"dev:packages": "(cd packages/tm-core && npm run dev) & (cd packages/workflow-engine && npm run dev) & (cd apps/cli && npm run dev) & wait",
"dev:core": "cd packages/tm-core && npm run dev",
"dev:workflow": "cd packages/workflow-engine && npm run dev",
"dev:cli": "cd apps/cli && npm run dev",
"build:packages": "npm run build:core && npm run build:workflow && npm run build:cli",
"build:core": "cd packages/tm-core && npm run build",
"build:workflow": "cd packages/workflow-engine && npm run build",
"build:cli": "cd apps/cli && npm run build",
"test": "node --experimental-vm-modules node_modules/.bin/jest",
"test:fails": "node --experimental-vm-modules node_modules/.bin/jest --onlyFailures",
"test:watch": "node --experimental-vm-modules node_modules/.bin/jest --watch",
"test:coverage": "node --experimental-vm-modules node_modules/.bin/jest --coverage",
"test:e2e": "./tests/e2e/run_e2e.sh",
"test:e2e-report": "./tests/e2e/run_e2e.sh --analyze-log",
"prepare": "chmod +x bin/task-master.js mcp-server/server.js",
"postpack": "chmod +x dist/task-master.js dist/mcp-server.js",
"changeset": "changeset",
"release": "changeset publish",
-"inspector": "npx @modelcontextprotocol/inspector node mcp-server/server.js",
-"mcp-server": "node mcp-server/server.js",
+"inspector": "npx @modelcontextprotocol/inspector node dist/mcp-server.js",
+"mcp-server": "node dist/mcp-server.js",
"format-check": "biome format .",
"format": "biome format . --write"
},

🛠️ Refactor suggestion

Add required test scripts per repo guidelines

Root package.json should expose test:unit, test:integration, and test:ci in addition to existing scripts.

   "scripts": {
@@
-    "test:coverage": "node --experimental-vm-modules node_modules/.bin/jest --coverage",
+    "test:coverage": "node --experimental-vm-modules node_modules/.bin/jest --coverage",
+    "test:unit": "node --experimental-vm-modules node_modules/.bin/jest --coverage --selectProjects unit || node --experimental-vm-modules node_modules/.bin/jest --coverage --testPathPattern=tests/unit|src/.+\\.test\\.ts$",
+    "test:integration": "node --experimental-vm-modules node_modules/.bin/jest --coverage --selectProjects integration || node --experimental-vm-modules node_modules/.bin/jest --coverage --testPathPattern=tests/integration",
     "test:e2e": "./tests/e2e/run_e2e.sh",
@@
-    "format": "biome format . --write"
+    "format": "biome format . --write",
+    "test:ci": "npm run build:packages && npm run test:coverage"
   },

If you have Jest projects configured, the selectProjects branches will be used; otherwise the fallback patterns apply.

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
package.json lines 13-37: add three new npm scripts — test:unit,
test:integration, and test:ci — alongside the existing test script; implement
each to invoke Jest the same way the current "test" script does (node
--experimental-vm-modules node_modules/.bin/jest) but scoped: test:unit should
run unit projects using --selectProjects <unit-project-names> or a fallback
pattern like "--testPathPattern=tests/unit|__tests__/unit", test:integration
should run integration projects using --selectProjects
<integration-project-names> or fallback pattern like
"--testPathPattern=tests/integration|__tests__/integration", and test:ci should
run the CI suite (combine unit+integration or use --selectProjects for CI
project names) and include any CI-specific flags (e.g., --runInBand or --ci);
add these entries to the "scripts" object so callers can run npm run test:unit,
npm run test:integration, and npm run test:ci.

Comment on lines +30 to 35
"postpack": "chmod +x dist/task-master.js dist/mcp-server.js",
"changeset": "changeset",
"release": "changeset publish",
-"inspector": "npx @modelcontextprotocol/inspector node mcp-server/server.js",
-"mcp-server": "node mcp-server/server.js",
+"inspector": "npx @modelcontextprotocol/inspector node dist/mcp-server.js",
+"mcp-server": "node dist/mcp-server.js",
"format-check": "biome format .",

🧹 Nitpick (assertive)

postpack chmod is fragile on Windows

chmod will fail on Windows shells. Consider doing this at build time (tsup banner/shebang) or guarding by OS.

-    "postpack": "chmod +x dist/task-master.js dist/mcp-server.js",
+    "postpack": "node -e \"const os=require('os');if(os.platform()!=='win32'){require('child_process').execSync('chmod +x dist/task-master.js dist/mcp-server.js')}\"",

Or add shebang via tsup config and skip chmod entirely.

🤖 Prompt for AI Agents
In package.json around lines 30-35 the "postpack" script uses chmod which fails
on Windows; remove or replace this fragile postpack step and instead add a
shebang via the tsup build (set banner/shebang so dist/*.js are executable) or
conditionally perform chmod only on POSIX platforms (e.g., move the chmod into a
small Node script invoked from postpack that checks process.platform and runs
chmod only on non-windows), then update package.json to remove the unconditional
chmod call.

Comment on lines +1 to +7
{
"name": "@tm/core",
"version": "1.0.0",
"description": "Core library for Task Master - TypeScript task management system",
"type": "module",
"types": "./dist/index.d.ts",
"main": "./dist/index.js",

🛠️ Refactor suggestion

Mark internal package as private to prevent accidental publish.

Given this is an internal monorepo package (not an SDK), safeguard against accidental npm publish.

 {
   "name": "@tm/core",
   "version": "1.0.0",
+  "private": true,
   "description": "Core library for Task Master - TypeScript task management system",
   "type": "module",
-  "types": "./dist/index.d.ts",
+  "types": "./src/index.ts",
🤖 Prompt for AI Agents
In packages/tm-core/package.json around lines 1 to 7 the package is not marked
private; add a top-level "private": true property (boolean true) to the
package.json to prevent accidental npm publish, keep it at the root of the JSON
object alongside name/version/type, and update any CI/lint expectations if they
validate package privacy.

"version": "1.0.0",
"description": "Core library for Task Master - TypeScript task management system",
"type": "module",
"types": "./dist/index.d.ts",

🛠️ Refactor suggestion

Align root "types" with src to match team DX preference.

Subpath exports already point "types" to ./src/*. For consistency and DX (per team preference), point root "types" to ./src/index.ts.

-"types": "./dist/index.d.ts",
+"types": "./src/index.ts",
🤖 Prompt for AI Agents
In packages/tm-core/package.json around line 6, the root "types" field points to
"./dist/index.d.ts" but the project's subpath exports already reference
"./src/*"; update the root "types" to point to "./src/index.ts" to align with
team DX. Edit the package.json to replace the current types value with
"./src/index.ts", save, and run a quick build/typecheck locally to ensure no
path mismatches.

* Public exports for task execution operations
*/

export * from './task-execution-manager.js';

🧹 Nitpick (assertive)

Barrel export ok; verify './task-execution' is exported from package.json with types.

Optional: switch to explicit named exports to keep API tight.

🤖 Prompt for AI Agents
In packages/workflow-engine/src/task-execution/index.ts around line 6, the
barrel export re-exports everything from './task-execution-manager.js' but the
package's public API may not expose './task-execution' and its types in
package.json; update package.json to include an export entry for
"./task-execution" (pointing to the built JS and the .d.ts types) or add it
under the "types" field so consumers get correct typings, and optionally replace
the star export with explicit named exports to tighten the public API (export
only the symbols you intend to expose).
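For illustration, such a subpath entry might look like the following (file paths here are assumptions based on a typical dist layout, not the package's actual build output):

```json
{
	"exports": {
		"./task-execution": {
			"types": "./dist/task-execution/index.d.ts",
			"import": "./dist/task-execution/index.js"
		}
	}
}
```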

Comment on lines +37 to +44
startedAt: Date;
/** Workflow status */
status: WorkflowStatus;
/** Last activity timestamp */
lastActivity: Date;
/** Execution metadata */
metadata?: Record<string, any>;
}

🛠️ Refactor suggestion

Make time fields and metadata safe for JSON boundaries.

These types likely cross IPC/JSON. Use Date | string for timestamps and avoid any.

-  startedAt: Date;
+  startedAt: Date | string;
   /** Workflow status */
   status: WorkflowStatus;
   /** Last activity timestamp */
-  lastActivity: Date;
+  lastActivity: Date | string;
   /** Execution metadata */
-  metadata?: Record<string, any>;
+  metadata?: Record<string, unknown>;
🤖 Prompt for AI Agents
In packages/workflow-engine/src/types/workflow.types.ts around lines 37 to 44,
the time fields and metadata currently use Date and Record<string, any> which
can break when crossing JSON/IPC boundaries; change startedAt and lastActivity
to use Date | string (so callers can pass ISO strings or Date objects) and
tighten metadata to metadata?: Record<string, unknown> (avoid any) to make the
shape JSON-safe and explicit.

Comment on lines +46 to +54
export type WorkflowStatus =
| 'pending' // Created but not started
| 'initializing' // Setting up worktree/process
| 'running' // Active execution
| 'paused' // Temporarily stopped
| 'completed' // Successfully finished
| 'failed' // Error occurred
| 'cancelled' // User cancelled
| 'timeout'; // Exceeded time limit

🧹 Nitpick (assertive)

Optional: strengthen event typing with a discriminated union.

Typing WorkflowEvent.data per WorkflowEventType will catch many bugs at compile time (e.g., process.output has { line: string }).

I can draft a discriminated union map (EventMap) and derive WorkflowEvent generically if useful.

🤖 Prompt for AI Agents
In packages/workflow-engine/src/types/workflow.types.ts around lines 46 to 54,
the WorkflowEvent currently uses an untyped data field which allows mismatched
payloads (e.g., process.output should be { line: string }); define an EventMap
discriminated union mapping each WorkflowEventType string to its specific
payload shape, then change WorkflowEvent to be generic/derived from EventMap so
the data field is strongly typed per event (e.g., type EventMap = {
"process.output": { line: string }, ... }; type WorkflowEvent<T extends keyof
EventMap = keyof EventMap> = { type: T; data: EventMap[T]; ... }). Update usages
to leverage the new generic so TypeScript will enforce correct payloads for each
event type.
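As a standalone sketch of that suggestion (the event names and payload shapes below are illustrative, not the package's actual API):

```typescript
// Illustrative event map: each event type is keyed to its payload shape.
type EventMap = {
	'process.output': { line: string };
	'workflow.failed': { message: string; code?: string };
	'workflow.completed': { durationMs: number };
};

type WorkflowEventType = keyof EventMap;

// Deriving WorkflowEvent generically ties `data` to `type` at compile time,
// so a 'process.output' event cannot carry a 'workflow.completed' payload.
interface WorkflowEvent<T extends WorkflowEventType = WorkflowEventType> {
	type: T;
	workflowId: string;
	taskId: string;
	timestamp: Date;
	data: EventMap[T];
}

const ev: WorkflowEvent<'process.output'> = {
	type: 'process.output',
	workflowId: 'wf-1',
	taskId: 'task-1',
	timestamp: new Date(),
	data: { line: 'build complete' } // must match EventMap['process.output']
};
```

With this shape, a mismatched payload (e.g. `data: { durationMs: 5 }` on a 'process.output' event) is a compile-time error rather than a runtime surprise.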

Comment on lines +56 to +63
export interface WorkflowEvent {
type: WorkflowEventType;
workflowId: string;
taskId: string;
timestamp: Date;
data?: any;
error?: Error;
}

🛠️ Refactor suggestion

Use a serializable error shape in events instead of Error.

Error objects don’t serialize well across process boundaries. Define and use a minimal serializable shape to avoid runtime surprises.

+// Minimal serializable error representation for events
+export interface SerializedError {
+  name: string;
+  message: string;
+  stack?: string;
+  code?: string;
+}
+
 export interface WorkflowEvent {
   type: WorkflowEventType;
   workflowId: string;
   taskId: string;
   timestamp: Date;
   data?: any;
-  error?: Error;
+  error?: SerializedError;
 }
🤖 Prompt for AI Agents
In packages/workflow-engine/src/types/workflow.types.ts around lines 56 to 63,
the WorkflowEvent currently uses Error for the error field which isn’t reliably
serializable; replace it with a minimal SerializableError interface (e.g. {
message: string; name?: string; stack?: string; code?: string|number; [key:
string]: any }) and change error?: Error to error?: SerializableError, then
update any imports/usages where WorkflowEvent is constructed or consumed to map
Error objects to this plain shape before sending across process boundaries
(populate message/name/stack/code as available).
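A minimal mapping helper along those lines might look like this (the SerializedError shape follows the suggestion above; reading `code` off the error is an assumption, since Node exposes it as a non-standard property):

```typescript
interface SerializedError {
	name: string;
	message: string;
	stack?: string;
	code?: string;
}

// Convert an Error (or any thrown value) into a plain, JSON-safe shape
// before attaching it to an event that crosses a process boundary.
function serializeError(err: unknown): SerializedError {
	if (err instanceof Error) {
		// Node errors often carry a non-standard `code` (e.g. 'ENOENT').
		const code = (err as Error & { code?: string | number }).code;
		return {
			name: err.name,
			message: err.message,
			stack: err.stack,
			...(code !== undefined ? { code: String(code) } : {})
		};
	}
	// Non-Error throws (strings, objects) get a best-effort fallback.
	return { name: 'Error', message: String(err) };
}
```

Callers would invoke serializeError at the point an event is emitted, so the receiving process never needs to revive an Error instance.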

* Public exports for worktree operations
*/

export * from './worktree-manager.js';

🧹 Nitpick (assertive)

Barrel export is fine; ensure the subpath export exists and types are published.

Check package exports for './worktree' and include a 'types' mapping; consider explicit named exports once the API stabilizes.

🤖 Prompt for AI Agents
In packages/workflow-engine/src/worktree/index.ts at line 6, the barrel export
exists but the package's public exports likely don't expose the './worktree'
subpath or its types; update package.json "exports" to include a './worktree'
entry that points to the built JS file and a "types" entry that points to the
corresponding .d.ts, ensure the build outputs the .d.ts into the package (or add
an index.d.ts alongside the built JS), and confirm tsconfig and the publish
artifacts include those type files; optionally, when the API stabilizes replace
the star export with explicit named exports to make future type publishing and
export surface clearer.

@coderabbitai coderabbitai bot left a comment

Review continued from previous batch...

Comment on lines +8 to +15
"exports": {
  ".": {
    "types": "./src/index.ts",
    "import": "./dist/index.js",
    "require": "./dist/index.js"
  }
},
"files": ["dist", "README.md"],
⚠️ Potential issue

Fix exports: types path mismatch and invalid "require" target

"exports['.'].types" points to src while "files" excludes src; "require" points to an ESM file. Align to dist types and drop "require" unless you ship CJS.

   "exports": {
     ".": {
-      "types": "./src/index.ts",
-      "import": "./dist/index.js",
-      "require": "./dist/index.js"
+      "types": "./dist/index.d.ts",
+      "import": "./dist/index.js"
     }
   },
🤖 Prompt for AI Agents
In apps/cli/package.json around lines 8 to 15, the exports block points "types"
at ./src/index.ts (which is not shipped) and sets "require" to an ESM-built
file; change "exports['.'].types" to point to the built declaration file
(./dist/index.d.ts) and remove the "require" entry unless you are shipping a CJS
build (if you do ship CJS, point "require" to the CJS bundle in dist instead);
ensure the "files" array contains dist so the .d.ts will be published.

Comment on lines +14 to +15
},
"files": ["dist", "README.md"],
💡 Verification agent

🧩 Analysis chain

Add CLI bin entries so the tool is runnable (tm and task-master aliases)

Without "bin", the package won’t install an executable.

   },
+  "bin": {
+    "tm": "./dist/index.js",
+    "task-master": "./dist/index.js"
+  },

Follow-up: ensure the built entry has a shebang and tsup preserves it (e.g., shebang in src/index.ts and tsup preserveShebang).


🏁 Script executed:

#!/bin/bash
set -euo pipefail
echo "=== apps/cli/package.json (first 400 lines) ==="
if [ -f apps/cli/package.json ]; then
  sed -n '1,400p' apps/cli/package.json || true
else
  echo "MISSING: apps/cli/package.json"
fi

echo
echo "=== Search for existing 'bin' in repo/package.json files ==="
rg -n --hidden --no-messages '"bin"\s*:' || true

echo
echo "=== Check for built entry apps/cli/dist/index.js ==="
if [ -f apps/cli/dist/index.js ]; then
  echo "FOUND: apps/cli/dist/index.js (first 5 lines):"
  sed -n '1,5p' apps/cli/dist/index.js || true
else
  echo "NOT FOUND: apps/cli/dist/index.js"
  if [ -d apps/cli/dist ]; then
    echo "apps/cli/dist directory listing:"
    ls -la apps/cli/dist || true
  fi
fi

echo
echo "=== Find possible source entry files under apps/cli ==="
rg -n --hidden --no-messages '(?:^|/)(index|cli)\.(ts|js|mjs|cjs)$' apps/cli || true

echo
echo "=== Print first 8 lines of candidate source files ==="
for f in $(rg --hidden --no-messages --files apps/cli | rg '(^|/)(index|cli)\.(ts|js|mjs|cjs)$'); do
  echo "---- $f ----"
  sed -n '1,8p' "$f" || true
done

echo
echo "=== Search for shebangs in apps/cli source files ==="
rg -n --hidden '^\s*#!' apps/cli || true

echo
echo "=== Search for tsup usage/config in repo ==="
rg -n --hidden --no-messages 'tsup' || true
rg -n --hidden --no-messages 'preserveShebang' || true
rg -n --hidden --no-messages 'tsup.config' || true

echo
echo "=== Search for build scripts mentioning tsup in package.json files ==="
rg -n --hidden --no-messages '"build"\s*:\s*".*tsup' || true

Length of output: 5677


Add CLI bin entries so the tool is runnable (tm and task-master aliases)

File: apps/cli/package.json — package currently lacks "bin" (npm won't install executables). Add:

   },
+  "bin": {
+    "tm": "./dist/index.js",
+    "task-master": "./dist/index.js"
+  },

Ensure the CLI entry source has a shebang and tsup is configured to preserve it so ./dist/index.js is executable.

🤖 Prompt for AI Agents
In apps/cli/package.json around lines 14-15, add a "bin" section that maps the
executable names to the built entry (e.g., "task-master" and "tm" ->
"./dist/index.js") so npm will install CLI executables; ensure the CLI source
entry file starts with a node shebang (#!/usr/bin/env node) and update
tsup/build config to preserve that shebang in the output (use tsup's
shebang/banner option or enable shebang preservation) so ./dist/index.js is
executable after build.

Comment on lines +8 to +9
import WorkflowCommand from './workflow.command.js';


⚠️ Potential issue

Fix import: WorkflowCommand is a named export, not default.

Importing it as default will be undefined at runtime.

-import WorkflowCommand from './workflow.command.js';
+import { WorkflowCommand } from './workflow.command.js';
🤖 Prompt for AI Agents
In apps/cli/src/commands/index.ts around lines 8-9, the file imports
WorkflowCommand as a default but the module exports it as a named export;
replace the default import with a named import (e.g., import { WorkflowCommand }
from './workflow.command.js';), verify the exported identifier in
workflow.command.js matches exactly, and update any references if the import
name differs so the runtime no longer receives undefined.
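The failure mode is easy to see if you treat the module namespace as a plain record (a simulation of the binding lookup, not the actual module loader):

```typescript
// Simulated namespace object for './workflow.command.js', which only has
// a named export — there is no `default` property to bind to.
class WorkflowCommand {}
const moduleNamespace: Record<string, unknown> = { WorkflowCommand };

// `import WorkflowCommand from '...'` binds the `default` slot,
// which is undefined here because the module never exported one.
const defaultBinding = moduleNamespace['default'];

// `import { WorkflowCommand } from '...'` binds the named slot.
const namedBinding = moduleNamespace['WorkflowCommand'];
```

Any later `new WorkflowCommand()` on the default binding would throw "WorkflowCommand is not a constructor" at runtime, which is why the named import matters.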

Comment on lines +21 to +38
export function registerAllCommands(program: Command): void {
	// Get all exports from this module
	const commands = [
		ListTasksCommand,
		AuthCommand,
		WorkflowCommand
		// Add new commands here as they're imported above
	];

	commands.forEach((CommandClass) => {
		if (
			'register' in CommandClass &&
			typeof CommandClass.register === 'function'
		) {
			CommandClass.register(program);
		}
	});
}

🧹 Nitpick (assertive)

Optional: return created command instances for testing/inspection.

Returning instances helps e2e tests assert wiring without relying on Commander’s internal state.

-export function registerAllCommands(program: Command): void {
+export function registerAllCommands(program: Command) {
+	const registered: any[] = [];
 	// Get all exports from this module
 	const commands = [
 		ListTasksCommand,
 		AuthCommand,
 		WorkflowCommand
 		// Add new commands here as they're imported above
 	];
 
 	commands.forEach((CommandClass) => {
 		if (
 			'register' in CommandClass &&
 			typeof (CommandClass as any).register === 'function'
 		) {
-			CommandClass.register(program);
+			const instance = (CommandClass as any).register(program);
+			if (instance) registered.push(instance);
 		}
 	});
+	return registered;
 }
🤖 Prompt for AI Agents
In apps/cli/src/commands/index.ts around lines 21 to 38, the registerAllCommands
function currently registers command classes but does not return the
instantiated command objects; change it to instantiate each command class (e.g.
new CommandClass()), call its register(program) method, collect those instances
into an array, and return that array so tests can inspect wiring; update the
function signature to return the array type and ensure any callers are updated
to accept/ignore the returned instances.
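The suggested shape can be exercised without Commander by stubbing the program object — a sketch under the assumption that each command class exposes a static register() returning its instance:

```typescript
// Minimal stand-in for Commander's Command so wiring is testable in isolation
interface ProgramLike {
	addCommand(cmd: unknown): void;
}

class StubProgram implements ProgramLike {
	commands: unknown[] = [];
	addCommand(cmd: unknown): void {
		this.commands.push(cmd);
	}
}

// Hypothetical command following the static-register pattern
class ListTasksCommand {
	static register(program: ProgramLike): ListTasksCommand {
		const instance = new ListTasksCommand();
		program.addCommand(instance);
		return instance;
	}
}

// Registers every command and returns the created instances for inspection
function registerAllCommands(program: ProgramLike): unknown[] {
	const registered: unknown[] = [];
	for (const CommandClass of [ListTasksCommand]) {
		const instance = CommandClass.register(program);
		if (instance) registered.push(instance);
	}
	return registered;
}
```

An e2e test can then assert on the returned array instead of poking Commander's internal command list.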

Comment on lines +42 to +45
    this.addCommand(new WorkflowStartCommand('run')); // tm workflow run <task-id>
    this.addCommand(new WorkflowStopCommand('kill')); // tm workflow kill <workflow-id>
    this.addCommand(new WorkflowStatusCommand('info')); // tm workflow info <workflow-id>
  }

🧹 Nitpick (assertive)

Use static register for aliases to keep wiring consistent

Prefer the same registration path for alias commands to ensure shared options/hooks remain identical.

-    this.addCommand(new WorkflowStartCommand('run'));  // tm workflow run <task-id>
-    this.addCommand(new WorkflowStopCommand('kill'));   // tm workflow kill <workflow-id>
-    this.addCommand(new WorkflowStatusCommand('info')); // tm workflow info <workflow-id>
+    WorkflowStartCommand.register(this, 'run');   // tm workflow run <task-id>
+    WorkflowStopCommand.register(this, 'kill');   // tm workflow kill <workflow-id>
+    WorkflowStatusCommand.register(this, 'info'); // tm workflow info <workflow-id>
🤖 Prompt for AI Agents
In apps/cli/src/commands/workflow.command.ts around lines 42 to 45, the alias
commands are being added via this.addCommand(new WorkflowStartCommand('run')) /
new WorkflowStopCommand('kill') / new WorkflowStatusCommand('info') which
bypasses the class-level wiring; replace these calls to use each command's
static register method (e.g. WorkflowStartCommand.register(this, 'run'),
WorkflowStopCommand.register(this, 'kill'), WorkflowStatusCommand.register(this,
'info')) so aliases are registered through the same path and inherit shared
options/hooks.

Comment on lines +17 to +22
```
TaskExecutionManager
├── WorktreeManager # Git worktree lifecycle
├── ProcessSandbox # Claude Code process management
└── WorkflowStateManager # Persistent state tracking
```

🧹 Nitpick (assertive)

Add language to ASCII diagram fence (MD040)

Specify text to satisfy markdownlint.

-```
+```text
 TaskExecutionManager
 ...

<details>
<summary>🧰 Tools</summary>

<details>
<summary>🪛 markdownlint-cli2 (0.17.2)</summary>

17-17: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

</details>

</details>

<details>
<summary>🤖 Prompt for AI Agents</summary>

In packages/workflow-engine/README.md around lines 17 to 22, the Markdown code
fence for the ASCII diagram lacks a language specifier, triggering MD040; update
the opening triple-backtick to include a language token such as `text`, leaving
the diagram content unchanged and keeping the closing triple-backticks as-is.

</details>

Comment on lines +40 to +55
```js
// Start task execution
const workflowId = await manager.startTaskExecution({
  id: '1.2',
  title: 'Implement authentication',
  description: 'Add JWT-based auth system',
  status: 'pending',
  priority: 'high'
});

// Monitor workflow
const workflow = manager.getWorkflowStatus(workflowId);
console.log(`Status: ${workflow.status}`);

// Stop when complete
await manager.stopTaskExecution(workflowId);
```

💡 Verification agent

🧩 Analysis chain

Double-check API names and sync/async semantics in Quick Start

Ensure getWorkflowStatus signature and return type match implementation (sync vs async) and that startTaskExecution returns the documented ID.


🏁 Script executed:

#!/bin/bash
set -euo pipefail
rg -nP --glob 'packages/workflow-engine/src/**' '\bclass\s+TaskExecutionManager\b.*|getWorkflowStatus\s*\(|startTaskExecution\s*\(' -C2

Length of output: 1586


🏁 Script executed:

#!/bin/bash
set -euo pipefail
echo "=== startTaskExecution (lines 90-200) ==="
sed -n '90,200p' packages/workflow-engine/src/task-execution/task-execution-manager.ts || true
echo
echo "=== getWorkflowStatus (lines 286-305) ==="
sed -n '286,305p' packages/workflow-engine/src/task-execution/task-execution-manager.ts || true
echo
echo "=== 'return' occurrences in the file (context) ==="
rg -n --glob 'packages/workflow-engine/src/**' '\breturn\b' -n packages/workflow-engine/src/task-execution/task-execution-manager.ts || true
echo
echo "=== occurrences of 'workflowId' in the file ==="
rg -n --glob 'packages/workflow-engine/src/**' '\bworkflowId\b' packages/workflow-engine/src || true

Length of output: 18148


Double-check API names and sync/async semantics in Quick Start

Verified: startTaskExecution is async and returns a Promise that resolves to the workflowId. getWorkflowStatus is synchronous and returns WorkflowExecutionContext | undefined — guard against undefined before accessing workflow.status.

Location: packages/workflow-engine/README.md (lines 40–55)

🤖 Prompt for AI Agents
In packages/workflow-engine/README.md around lines 40 to 55, the quick-start
uses startTaskExecution correctly as async but reads getWorkflowStatus without
guarding for its possibly undefined return; update the README example to call
manager.getWorkflowStatus(workflowId) then check for undefined (e.g., if
(!workflow) { handle/log and return/continue } else access workflow.status) so
you never dereference undefined; keep startTaskExecution and stopTaskExecution
as awaited async calls.
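The guarded README example could look like this; the manager below is a stub mirroring the verified signatures (async start returning an id, sync status lookup that may miss), not the real TaskExecutionManager:

```typescript
interface WorkflowExecutionContext {
	status: string;
}

// Stub with the verified semantics of TaskExecutionManager
class StubTaskExecutionManager {
	private workflows = new Map<string, WorkflowExecutionContext>();

	async startTaskExecution(task: { id: string }): Promise<string> {
		const workflowId = `wf-${task.id}`;
		this.workflows.set(workflowId, { status: 'running' });
		return workflowId;
	}

	getWorkflowStatus(id: string): WorkflowExecutionContext | undefined {
		return this.workflows.get(id);
	}
}

async function monitor(): Promise<string> {
	const manager = new StubTaskExecutionManager();
	const workflowId = await manager.startTaskExecution({ id: '1.2' });

	const workflow = manager.getWorkflowStatus(workflowId);
	if (!workflow) {
		// Guard before dereferencing — the lookup can return undefined
		return 'not found';
	}
	return workflow.status;
}
```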

Comment on lines +319 to +341
```bash
# Check git version (requires 2.5+)
git --version

# Verify project is git repository
git status
```

2. **Claude Code Not Found**
```bash
# Check Claude installation
which claude

# Set custom path
export TASKMASTER_CLAUDE_PATH=/path/to/claude
```

3. **Permission Errors**
```bash
# Check worktree directory permissions
chmod -R 755 ./worktrees
```


🧹 Nitpick (assertive)

Surround shell code fences with blank lines (MD031)

Add blank lines before and after the fenced blocks in Troubleshooting section.

-   ```bash
+   
+   ```bash
    # Check git version (requires 2.5+)
    git --version
    ...
-   ```
+   ```
+
-   ```bash
+   
+   ```bash
    # Check Claude installation
    which claude
    ...
-   ```
+   ```
+
-   ```bash
+   
+   ```bash
    # Check worktree directory permissions
    chmod -R 755 ./worktrees
-   ```
+   ```
🧰 Tools
🪛 markdownlint-cli2 (0.17.2)

319-319: Fenced code blocks should be surrounded by blank lines

(MD031, blanks-around-fences)


328-328: Fenced code blocks should be surrounded by blank lines

(MD031, blanks-around-fences)


337-337: Fenced code blocks should be surrounded by blank lines

(MD031, blanks-around-fences)

🤖 Prompt for AI Agents
packages/workflow-engine/README.md around lines 319 to 341: the fenced bash code
blocks in the Troubleshooting section are not surrounded by blank lines (MD031);
add a blank line before each opening ```bash fence and a blank line after each
closing ``` fence so every fenced block is separated from surrounding text,
ensuring consistent Markdown formatting.


## License

MIT WITH Commons-Clause No newline at end of file

🧹 Nitpick (assertive)

Terminate file with a single trailing newline (MD047)

-MIT WITH Commons-Clause
+MIT WITH Commons-Clause
+
🧰 Tools
🪛 markdownlint-cli2 (0.17.2)

371-371: Files should end with a single newline character

(MD047, single-trailing-newline)

🤖 Prompt for AI Agents
packages/workflow-engine/README.md around line 371: the file currently ends
without a trailing newline which violates MD047; add a single newline character
at the end of the file so the final line ("MIT WITH Commons-Clause") is followed
by one trailing newline and save the file.

Comment on lines +17 to +20
import { log, readJSON } from './utils.js';
// Import command registry from @tm/cli
import { registerAllCommands } from '@tm/cli';


🧹 Nitpick (assertive)

Registry import: ensure package resolution and align tag-resolution guideline.

- Verify @tm/cli is resolvable at runtime from this script (Node path/workspace alias). If this runs outside the TS build, ensure it points to the built JS.
- Guidelines state command files must import and use getCurrentTag. You're using taskMaster.getCurrentTag(). Either (a) import getCurrentTag and apply the documented pattern options.tag || getCurrentTag(projectRoot) || 'master', or (b) update the guideline to accept the TaskMaster facade and ensure it applies the same precedence internally.
🤖 Prompt for AI Agents
In scripts/modules/commands.js around lines 17 to 20, the import from '@tm/cli'
may not resolve at runtime and the file violates the tag-resolution guideline by
calling taskMaster.getCurrentTag(); ensure the module path points to the built
JS entry (or adjust Node resolution/aliases) so @tm/cli is resolvable when
running outside the TS build, and change the tag handling to follow the
documented pattern: import getCurrentTag from @tm/cli and use options.tag ||
getCurrentTag(projectRoot) || 'master' (or, if you prefer to keep the TaskMaster
facade, update the facade to implement that same precedence internally and
document it).
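The documented precedence is a one-liner; a sketch with a hypothetical getCurrentTag stand-in (the real one would read the project's tag state):

```typescript
// Hypothetical stand-in: the real getCurrentTag resolves the active tag
// from .taskmaster state under projectRoot.
function getCurrentTag(projectRoot: string): string | undefined {
	void projectRoot; // unused in this stub
	return undefined;
}

// Documented precedence: explicit option > current project tag > 'master'
function resolveTag(options: { tag?: string }, projectRoot: string): string {
	return options.tag || getCurrentTag(projectRoot) || 'master';
}
```

Whichever path is chosen — direct import or the TaskMaster facade — the observable result should match this precedence.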

@coderabbitai coderabbitai bot left a comment

Review continued from previous batch...

3. **Document Changes**: Update this roadmap as you go
4. **Communicate**: Discuss in PRs if architecture needs adjustment

This is a living document - update as the migration progresses! No newline at end of file

🧹 Nitpick (assertive)

Add “Breaking Changes & Deprecations” and “Rollback” sections for clarity.

Documenting (even “None yet”) aligns with our guidelines and eases adoption.

Apply this diff to append sections:

+ 
+ ## Breaking Changes & Deprecations
+ 
+ As of 2025-09-11: None. This section will enumerate any CLI flags, config keys, or APIs removed/changed, with migration steps.
+ 
+ ## Rollback & Recovery
+ 
+ - Adapters/bridges allow immediate fallback to legacy implementations.
+ - If a migrated command regresses, disable the new path and track via this roadmap.
+ - Always keep a restorable backup of user config/state during migrations.
🤖 Prompt for AI Agents
.taskmaster/docs/MIGRATION-ROADMAP.md around line 188: the reviewer asks to
append two new headings "Breaking Changes & Deprecations" and "Rollback" (even
if empty) to improve clarity and align with documentation guidelines; update the
document by adding both section headers after the existing line with brief
placeholder content such as "None yet" or a short guidance sentence for each, so
future migration notes can be populated and readers know where to find
deprecation details and rollback procedures.

Comment on lines +35 to +41
export interface ListTasksResult {
	tasks: Task[];
	total: number;
	filtered: number;
	tag?: string;
	storageType: 'file' | 'api';
}

💡 Verification agent

🧩 Analysis chain

Reuse core type to avoid drift

If @tm/core already exports ListTasksResult, import it instead of redefining here.

Run to confirm export exists:


🏁 Script executed:

#!/bin/bash
rg -nP 'export\s+(type|interface)\s+ListTasksResult\b' packages/tm-core -C2

Length of output: 0


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "PWD: $(pwd)"
echo
echo "---- Searching for symbol 'ListTasksResult' across repo ----"
rg -n --hidden --no-ignore-vcs -S '\bListTasksResult\b' -C2 || true

echo
echo "---- Searching for explicit export declarations of ListTasksResult (type|interface) ----"
rg -nP --hidden --no-ignore-vcs 'export\s+(type|interface)\s+ListTasksResult\b' -C2 || true

echo
echo "---- Searching for re-exports / grouped exports mentioning ListTasksResult ----"
rg -nP --hidden --no-ignore-vcs 'export\s+\{[^}]*ListTasksResult[^}]*\}' -C2 || true
rg -nP --hidden --no-ignore-vcs 'export\s+type\s+\{[^}]*ListTasksResult[^}]*\}' -C2 || true

echo
echo "---- Locating any 'tm-core' package directories under packages/ ----"
find packages -maxdepth 3 -type d -name 'tm-core' -print -exec bash -c 'echo "Contents of {}:"; ls -la "{}" | sed -n "1,200p"' \; || true

echo
echo "---- If packages/tm-core exists, searching inside it explicitly ----"
if [ -d packages/tm-core ]; then
  rg -n --hidden --no-ignore-vcs -S '\bListTasksResult\b' packages/tm-core -C2 || true
  rg -nP --hidden --no-ignore-vcs 'export\s+(type|interface)\s+ListTasksResult\b' packages/tm-core -C2 || true
  rg -nP --hidden --no-ignore-vcs 'export\s+\{[^}]*ListTasksResult[^}]*\}' packages/tm-core -C2 || true
fi

echo
echo "---- Done ----"

Length of output: 9452


Import core ListTasksResult instead of redefining

tm-core already exports ListTasksResult — replace the local interface in apps/cli/src/commands/list.command.ts (lines ~35–41) with an import from the core package (e.g. import type { ListTasksResult } from '@tm/core';). Evidence: packages/tm-core/src/task-master-core.ts:28 (re-exports TaskListResult as ListTasksResult) and packages/tm-core/src/index.ts:11 (exports ListTasksResult). Remove the local interface to avoid type drift.

🤖 Prompt for AI Agents
In apps/cli/src/commands/list.command.ts around lines 35 to 41, the local
ListTasksResult interface duplicates the type exported by the tm-core package;
remove this local interface and add an import type { ListTasksResult } from
'@tm/core' at the top of the file, update any references to use the imported
type, and ensure the file's imports are sorted/organized accordingly to avoid
type drift.

Comment on lines +78 to +81
      if (!this.validateOptions(options)) {
        process.exit(1);
      }


🛠️ Refactor suggestion

Avoid direct process.exit; use Commander error handling for testability

Replace hard exits with Command#error so CLI can opt into exitOverride in tests.

-      if (!this.validateOptions(options)) {
-        process.exit(1);
-      }
+      if (!this.validateOptions(options)) {
+        this.error('Invalid options', { exitCode: 1 });
+        return;
+      }
🤖 Prompt for AI Agents
In apps/cli/src/commands/list.command.ts around lines 78 to 81, replace the
direct process.exit(1) call used when validateOptions fails with Commander-style
error handling so tests can opt into exitOverride; call the command's error
method (e.g., this.error('Invalid options', { exitCode: 1 }) or throw a
CommanderError) instead of process.exit so the CLI surface reports the failure
through Commander and remains testable.
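The testability gain comes from raising a typed error carrying the exit code rather than killing the process — a sketch where CliError is a hypothetical stand-in for Commander's CommanderError:

```typescript
// Hypothetical typed error carrying the intended exit code
class CliError extends Error {
	constructor(
		message: string,
		public readonly exitCode: number
	) {
		super(message);
		this.name = 'CliError';
	}
}

// Validation surfaces failures as throws; only the top-level CLI entry
// point translates them into a process exit code.
function runList(options: { format?: string }): string {
	const formats = ['text', 'json', 'compact'];
	if (options.format && !formats.includes(options.format)) {
		throw new CliError(`Invalid format: ${options.format}`, 1);
	}
	return `listing as ${options.format ?? 'text'}`;
}
```

A test can now catch the error and assert on `exitCode` without the process terminating, which is exactly what Commander's exitOverride enables for real commands.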

Comment on lines +91 to +95
      // Display results
      if (!options.silent) {
        this.displayResults(result, options);
      }
    } catch (error: any) {

🛠️ Refactor suggestion

Always release resources (close tm-core) on success and error

Call cleanup on both paths to prevent handle leaks in tests/REPL usage.

       if (!options.silent) {
         this.displayResults(result, options);
       }
+      // Always cleanup on success
+      await this.cleanup();
     } catch (error: any) {
-      const msg = error?.getSanitizedDetails?.() ?? {
-        message: error?.message ?? String(error)
-      };
-      console.error(chalk.red(`Error: ${msg.message || 'Unexpected error'}`));
-      if (error.stack && process.env.DEBUG) {
-        console.error(chalk.gray(error.stack));
-      }
-      process.exit(1);
+      const msg = error?.getSanitizedDetails?.() ?? { message: error?.message ?? String(error) };
+      if (error.stack && process.env.DEBUG) console.error(chalk.gray(error.stack));
+      await this.cleanup();
+      this.error(`Error: ${msg.message || 'Unexpected error'}`, { exitCode: 1 });
     }

Also applies to: 100-104

🤖 Prompt for AI Agents
In apps/cli/src/commands/list.command.ts around lines 91 to 95 (and apply same
to 100-104), the code currently only calls displayResults on success and does
not release tm-core when returning or throwing; modify the function to always
release/cleanup tm-core by moving the cleanup call into a finally block (or
equivalent) that runs after success or error, await the close/cleanup method and
guard for tm-core being non-null to avoid runtime errors, and rethrow or
propagate the original error after cleanup so behavior is unchanged.


import { Command } from 'commander';
import chalk from 'chalk';
import path from 'node:path';

🧹 Nitpick (assertive)

Align with team preference: avoid node: scheme for core modules

Per retrieved learnings, prefer bare imports for Node core modules.

-import path from 'node:path';
+import path from 'path';
🤖 Prompt for AI Agents
In apps/cli/src/commands/workflow/workflow-start.command.ts around line 8, the
import uses the node: scheme ("import path from 'node:path';") which the team
prefers to avoid; change it to a bare core module import by replacing it with
"import path from 'path';" (or the equivalent require style if codebase uses
commonjs) and ensure any linter/tsconfig settings still accept bare imports.

Comment on lines +17 to +23
// Placeholder exports - these will be implemented in later tasks
export interface StorageAdapter {
	read(path: string): Promise<string | null>;
	write(path: string, data: string): Promise<void>;
	exists(path: string): Promise<boolean>;
	delete(path: string): Promise<void>;
}

🧹 Nitpick (assertive)

Consider scoping the placeholder adapter to internal modules

Exporting both IStorage and a separate StorageAdapter can confuse consumers; keep the placeholder internal (or mark it @internal) until real adapters land.

Also applies to: 28-46

🤖 Prompt for AI Agents
In packages/tm-core/src/storage/index.ts around lines 17-23 (and similarly
28-46), the placeholder StorageAdapter interface is exported which can confuse
consumers; make the placeholder internal by either removing the export (make it
file-private/unexported) or annotating it with a JSDoc @internal tag and update
any references accordingly so only the intended public type (e.g., IStorage) is
exported; ensure public API exports remain unchanged and add a short comment
indicating the adapter is a temporary/internal placeholder.
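One way to keep the placeholder out of the published surface is a TSDoc `@internal` tag, which tooling such as API Extractor or `stripInternal` in tsconfig can honor. A minimal sketch, with a hypothetical in-memory adapter only to exercise the shape (the real interface lives in packages/tm-core/src/storage/index.ts):

```typescript
/**
 * Temporary in-memory placeholder until real adapters land.
 * Tagged `@internal` so declaration tooling can drop it from the
 * public type surface while IStorage remains the intended export.
 * @internal
 */
interface StorageAdapter {
  read(path: string): Promise<string | null>;
  write(path: string, data: string): Promise<void>;
}

/** @internal Hypothetical adapter used only to exercise the shape. */
class MemoryAdapter implements StorageAdapter {
  private data = new Map<string, string>();
  async read(path: string): Promise<string | null> {
    // ?? keeps valid falsy values such as '' from becoming null.
    return this.data.get(path) ?? null;
  }
  async write(path: string, data: string): Promise<void> {
    this.data.set(path, data);
  }
}
```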

Comment on lines +31 to +33
async read(path: string): Promise<string | null> {
return this.data.get(path) || null;
}

🛠️ Refactor suggestion

Fix falsy read values being coerced to null

|| null will treat empty string as null. Use nullish coalescing.

-  async read(path: string): Promise<string | null> {
-    return this.data.get(path) || null;
-  }
+  async read(path: string): Promise<string | null> {
+    return this.data.get(path) ?? null;
+  }
🤖 Prompt for AI Agents
In packages/tm-core/src/storage/index.ts around lines 31 to 33, the read()
method uses "|| null" which coerces falsy but valid values (like empty string)
to null; change the fallback to the nullish coalescing operator so that only
undefined becomes null (replace "return this.data.get(path) || null;" with a
nullish coalescing fallback).

Comment on lines +309 to +317
const emptyDir = await fs.mkdtemp(path.join(os.tmpdir(), 'tm-empty-'));
const emptyCore = createTaskMasterCore(emptyDir);

try {
const result = await emptyCore.listTasks();

expect(result.tasks).toHaveLength(0);
expect(result.total).toBe(0);
expect(result.filtered).toBe(0);

⚠️ Potential issue

Initialize core before use in this test

emptyCore is used without initialize(); this can cause nondeterminism depending on implementation.

       const emptyCore = createTaskMasterCore(emptyDir);
+      await emptyCore.initialize();
 
       try {
         const result = await emptyCore.listTasks();
🤖 Prompt for AI Agents
In packages/tm-core/tests/integration/list-tasks.test.ts around lines 309 to
317, the test creates emptyCore but calls listTasks() before initializing it,
which can cause nondeterministic behavior; fix by awaiting
emptyCore.initialize() (or the appropriate init method) immediately after
createTaskMasterCore(emptyDir) and before calling emptyCore.listTasks(),
ensuring the core is fully initialized before use.

Comment on lines +371 to +405
beforeEach(async () => {
// Create tasks for a different tag
const taggedTasks = [
{
id: 'tag-1',
title: 'Tagged task',
description: 'Task with tag',
status: 'pending' as TaskStatus,
priority: 'medium' as const,
dependencies: [],
details: 'Tagged task details',
testStrategy: 'Test',
subtasks: []
}
];

const tagFile = path.join(
tmpDir,
'.taskmaster',
'tasks',
'feature-branch.json'
);
await fs.writeFile(
tagFile,
JSON.stringify({
tasks: taggedTasks,
metadata: {
version: '1.0.0',
lastModified: new Date().toISOString(),
taskCount: 1,
completedCount: 0
}
})
);
});

🧹 Nitpick (assertive)

Tag-file behavior depends on storage strategy—confirm contract

This suite creates feature-branch.json directly. Ensure the storage layer officially supports per-tag files (vs single tasks.json) and document precedence/merge semantics.

🤖 Prompt for AI Agents
packages/tm-core/tests/integration/list-tasks.test.ts lines 371-405: the test
writes a per-tag file (feature-branch.json) directly but the storage contract
may only support a single tasks.json or have specific precedence/merge
semantics; verify the storage strategy and change the test to use the official
storage API (or a helper) to create tag-scoped tasks if the implementation
supports per-tag files, otherwise revert to creating tasks via the canonical
single-file format; update or add a short comment in the test describing the
expected precedence/merge behavior (which file wins) and, if needed, add an
assertion that validates the storage layer’s actual precedence/merge behavior so
the test aligns with the documented contract.

Comment on lines +415 to +420
it('should list default tasks when no tag specified', async () => {
const result = await tmCore.listTasks();

expect(result.tasks).toHaveLength(5);
expect(result.tasks[0].id).toBe('1');
});

🛠️ Refactor suggestion

Avoid order-dependent assertion

Don’t assume first item ordering; assert membership instead.

       it('should list default tasks when no tag specified', async () => {
         const result = await tmCore.listTasks();
 
-        expect(result.tasks).toHaveLength(5);
-        expect(result.tasks[0].id).toBe('1');
+        expect(result.tasks).toHaveLength(5);
+        expect(result.tasks.some((t) => t.id === '1')).toBe(true);
       });
🤖 Prompt for AI Agents
In packages/tm-core/tests/integration/list-tasks.test.ts around lines 415 to
420, the test currently assumes a specific ordering (checking result.tasks[0].id
=== '1'); change it to assert membership instead: keep the length assertion,
then derive the array of task ids (e.g., map result.tasks to ids) and assert
that the ids include '1' (use toContain or expect.arrayContaining) so the test
no longer depends on item order.

@coderabbitai coderabbitai bot left a comment

Review continued from previous batch...


import { Command } from 'commander';
import chalk from 'chalk';
import path from 'node:path';

🧹 Nitpick (assertive)

Match import style preference (avoid node: scheme).

-import path from 'node:path';
+import path from 'path';
🤖 Prompt for AI Agents
In apps/cli/src/commands/workflow/workflow-stop.command.ts around line 8, the
import uses the node: scheme ("node:path"); change it to the plain Node core
import ("path") to match project import style—replace import path from
'node:path' with import path from 'path' and run lint/typecheck to ensure no
other occurrences need updating.

Comment on lines +135 to +145
// Confirm if not forced
if (!options.force && ['running', 'paused'].includes(workflow.status)) {
const shouldProceed = await ui.confirm(
`Are you sure you want to stop this ${workflow.status} workflow?`
);

if (!shouldProceed) {
console.log(chalk.gray('Operation cancelled'));
return;
}
}

🧹 Nitpick (assertive)

Consider prompting for other active states too.

You confirm only for running/paused. Decide if pending/initializing should also prompt.

🤖 Prompt for AI Agents
In apps/cli/src/commands/workflow/workflow-stop.command.ts around lines 135 to
145, the confirmation prompt only checks for 'running' and 'paused' states but
the reviewer suggests also prompting for other active states like 'pending' and
'initializing'; update the conditional to include those states (e.g.,
['running','paused','pending','initializing']) so the UI.confirm is triggered
for any active workflow state you consider sensitive, and keep the existing
force bypass and cancellation handling unchanged.
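The broadened check can be isolated in a small predicate. The status union and `ACTIVE_STATES` list below are assumptions about the CLI's types, not the actual definitions:

```typescript
// Hypothetical status union; the real workflow types may differ.
type WorkflowStatus =
  | 'pending'
  | 'initializing'
  | 'running'
  | 'paused'
  | 'completed'
  | 'stopped';

// States considered sensitive enough to warrant a prompt.
const ACTIVE_STATES: ReadonlyArray<WorkflowStatus> = [
  'running',
  'paused',
  'pending',
  'initializing'
];

// True when a confirmation prompt should be shown before stopping;
// --force bypasses it, matching the existing behavior.
function needsConfirmation(status: WorkflowStatus, force: boolean): boolean {
  return !force && ACTIVE_STATES.includes(status);
}
```

Keeping the state list in one constant makes the "which states prompt" decision a one-line change.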

Comment on lines +147 to +153
// Stop the workflow
ui.displaySpinner('Stopping workflow and cleaning up resources...');

await this.workflowManager!.stopTaskExecution(
actualWorkflowId,
options.force
);

🧹 Nitpick (assertive)

Ensure spinner lifecycle ends on both success and error paths.

Confirm ui.displaySpinner stops/clears; otherwise wrap with a try/finally and explicit .success()/.fail().

I can patch this once I see the spinner API; point me to apps/cli/src/utils/ui.ts.

🤖 Prompt for AI Agents
In apps/cli/src/commands/workflow/workflow-stop.command.ts around lines 147 to
153, the spinner started with ui.displaySpinner('Stopping workflow and cleaning
up resources...') may not be stopped on error; wrap the await
this.workflowManager!.stopTaskExecution(...) in a try/finally so the spinner is
always cleared, calling spinner.success() in the try when the stop completes and
spinner.fail() in the catch (or call fail() in finally if an exception
occurred), or at minimum call spinner.clear()/stop() in finally; reference the
spinner methods from apps/cli/src/utils/ui.ts to use the correct API.
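A try/catch wrapper along these lines guarantees the spinner resolves on both paths. The `Spinner` interface is a guess at the shape in apps/cli/src/utils/ui.ts, not the actual API:

```typescript
// Hypothetical spinner shape; adapt method names to the real ui.ts API.
interface Spinner {
  succeed(message: string): void;
  fail(message: string): void;
}

async function stopWithSpinner(
  spinner: Spinner,
  stop: () => Promise<void>
): Promise<void> {
  try {
    await stop();
    spinner.succeed('Workflow stopped');
  } catch (error) {
    // The spinner is resolved before the error continues upward,
    // so no dangling spinner is left on the error path.
    spinner.fail('Failed to stop workflow');
    throw error;
  }
}
```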

Comment on lines +199 to +218
ui.displaySpinner('Stopping all workflows...');

let stopped = 0;
let failed = 0;

for (const workflow of activeWorkflows) {
try {
const workflowId = `workflow-${workflow.taskId}`;
await this.workflowManager!.stopTaskExecution(
workflowId,
options.force
);
stopped++;
} catch (error) {
console.error(
`${chalk.red('✗')} Failed to stop workflow ${workflow.taskId}: ${error}`
);
failed++;
}
}

🧹 Nitpick (assertive)

Optional: stop workflows concurrently to reduce wait time.

Safe since each stop is isolated; errors are aggregated.

-    let stopped = 0;
-    let failed = 0;
-
-    for (const workflow of activeWorkflows) {
-      try {
-        const workflowId = `workflow-${workflow.taskId}`;
-        await this.workflowManager!.stopTaskExecution(
-          workflowId,
-          options.force
-        );
-        stopped++;
-      } catch (error) {
-        console.error(
-          `${chalk.red('✗')} Failed to stop workflow ${workflow.taskId}: ${error}`
-        );
-        failed++;
-      }
-    }
+    const results = await Promise.allSettled(
+      activeWorkflows.map((w) =>
+        this.workflowManager!.stopTaskExecution(`workflow-${w.taskId}`, options.force)
+      )
+    );
+    const stopped = results.filter((r) => r.status === 'fulfilled').length;
+    const failed = results.length - stopped;
+    results.forEach((r, i) => {
+      if (r.status === 'rejected') {
+        const msg = (r.reason as any)?.message ?? String(r.reason);
+        console.error(`${chalk.red('✗')} Failed to stop workflow ${activeWorkflows[i].taskId}: ${msg}`);
+      }
+    });
🤖 Prompt for AI Agents
In apps/cli/src/commands/workflow/workflow-stop.command.ts around lines 199 to
218, the loop stops workflows sequentially which increases wait time; convert
this to concurrent stops using Promise.allSettled: build an array of stop
promises (calling this.workflowManager!.stopTaskExecution for each workflow with
the computed workflowId and options.force), await Promise.allSettled on that
array, then iterate results to count stopped vs failed and log errors for
rejected results (including workflow.taskId context); ensure any UI spinner or
final summary logic runs after all promises settle.

Comment on lines +213 to +216
console.error(
`${chalk.red('✗')} Failed to stop workflow ${workflow.taskId}: ${error}`
);
failed++;

🧹 Nitpick (assertive)

Log error messages, not raw objects.

-        console.error(
-          `${chalk.red('✗')} Failed to stop workflow ${workflow.taskId}: ${error}`
-        );
+        const msg = (error as any)?.message ?? String(error);
+        console.error(`${chalk.red('✗')} Failed to stop workflow ${workflow.taskId}: ${msg}`);
🤖 Prompt for AI Agents
In apps/cli/src/commands/workflow/workflow-stop.command.ts around lines 213 to
216, the code currently interpolates a raw error object into the log which can
produce "[object Object]" or include non-informative output; change the logging
to output a string representation of the error (for example use error instanceof
Error ? error.message : String(error)) or JSON.stringify/inspect the error to
include message and stack, and pass that string into console.error (preserve the
existing chalk formatting and increment of failed).

Comment on lines +70 to +79
function generateRandomString(length: number): string {
const chars = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789';
const bytes = randomBytes(length);
let result = '';

for (let i = 0; i < length; i++) {
result += chars[bytes[i] % chars.length];
}

return result;

🛠️ Refactor suggestion

Remove modulo bias in generateRandomString().

Modulo 36 over 256 introduces slight bias. Use rejection sampling.

- function generateRandomString(length: number): string {
-  const chars = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789';
-  const bytes = randomBytes(length);
-  let result = '';
-
-  for (let i = 0; i < length; i++) {
-    result += chars[bytes[i] % chars.length];
-  }
-
-  return result;
- }
+ function generateRandomString(length: number): string {
+  const chars = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789';
+  const base = chars.length; // 36
+  const maxMultiple = Math.floor(256 / base) * base; // 252
+  let result = '';
+  while (result.length < length) {
+    const buf = randomBytes(length); // oversample; small loop, short length
+    for (let i = 0; i < buf.length && result.length < length; i++) {
+      const v = buf[i];
+      if (v < maxMultiple) {
+        result += chars[v % base];
+      }
+    }
+  }
+  return result;
+ }
🤖 Prompt for AI Agents
In packages/tm-core/src/utils/id-generator.ts around lines 70 to 79, the current
generateRandomString() uses bytes[i] % chars.length which causes modulo bias;
replace it with rejection sampling: generate random bytes and for each needed
character, loop until you get a byte < 252 (floor(256 / 36) * 36) then use byte
% 36 as the index into chars; continue until the requested length is reached,
requesting more random bytes as necessary, and preserve the function signature
and charset.

Comment on lines +6 to +8
import { spawn } from 'node:child_process';
import { promises as fs } from 'node:fs';
import path from 'node:path';

🧹 Nitpick (assertive)

Use bare imports instead of node: scheme for consistency.

Based on the retrieved learnings, the user Crunchyman-ralph prefers not to use node: scheme imports.

Apply this diff:

-import { spawn } from 'node:child_process';
-import { promises as fs } from 'node:fs';
-import path from 'node:path';
+import { spawn } from 'child_process';
+import { promises as fs } from 'fs';
+import path from 'path';
🤖 Prompt for AI Agents
In packages/workflow-engine/src/worktree/worktree-manager.ts around lines 6 to
8, the imports use the "node:" scheme (import { spawn } from
'node:child_process'; import { promises as fs } from 'node:fs'; import path from
'node:path';) but the project prefers bare imports; replace them with
bare-module imports: import { spawn } from 'child_process'; import { promises as
fs } from 'fs'; import path from 'path'; and update any other occurrences in the
file to match, then run lint/TS checks to ensure no import-type errors remain.

Comment on lines +253 to +259
if (worktree.taskId) {
try {
await this.removeWorktree(worktree.taskId, force);
} catch (error) {
console.error(`Failed to cleanup worktree for task ${worktree.taskId}:`, error);
}
}

⚠️ Potential issue

Missing taskId assignment causes removeWorktree to fail.

The cleanupAll method calls removeWorktree with worktree.taskId, but the taskId field is not properly populated when listing worktrees. The listWorktrees method filters for task worktrees but doesn't guarantee the taskId field is set correctly in the returned objects.

The issue is that completeWorktreeInfo at line 331 tries to extract taskId from the branch name, but if that fails, it falls back to partial.taskId || 'unknown'. Since partial comes from parsing git output which doesn't include taskId, this will always result in 'unknown' for listed worktrees.

Apply this diff to properly handle the taskId extraction:

 async cleanupAll(force = false): Promise<void> {
     const worktrees = await this.listWorktrees();
     
     for (const worktree of worktrees) {
-        if (worktree.taskId) {
+        // Extract taskId from the path if not available
+        let taskId = worktree.taskId;
+        if (!taskId || taskId === 'unknown') {
+            const match = worktree.path.match(/task-(.+?)$/);
+            taskId = match?.[1];
+        }
+        
+        if (taskId && taskId !== 'unknown') {
             try {
-                await this.removeWorktree(worktree.taskId, force);
+                // Check if this worktree is in our active map
+                if (this.activeWorktrees.has(taskId)) {
+                    await this.removeWorktree(taskId, force);
+                } else {
+                    // Directly remove via git command for orphaned worktrees
+                    const args = ['worktree', 'remove', worktree.path];
+                    if (force) args.push('--force');
+                    await this.executeGitCommand(args, { cwd: this.config.projectRoot });
+                }
             } catch (error) {
-                console.error(`Failed to cleanup worktree for task ${worktree.taskId}:`, error);
+                console.error(`Failed to cleanup worktree for task ${taskId}:`, error);
             }
         }
     }
 }
🤖 Prompt for AI Agents
packages/workflow-engine/src/worktree/worktree-manager.ts around lines 253 to
259: cleanupAll calls removeWorktree with worktree.taskId but listWorktrees
returns objects whose taskId is always 'unknown' because completeWorktreeInfo
falls back to 'unknown' when branch parsing fails; change completeWorktreeInfo
(around line 331) to derive and assign taskId from the branch name
deterministically (parse branch name and set partial.taskId) and stop forcing a
literal 'unknown' fallback — instead return undefined/null when parsing fails so
callers can skip or re-derive the id; update listWorktrees to populate taskId
from the parsed branch (or skip entries without a valid taskId) so
removeWorktree gets a real taskId in cleanupAll.

Comment on lines +323 to +336
private completeWorktreeInfo(partial: Partial<WorktreeInfo>): WorktreeInfo {
const branch = partial.branch || 'unknown';
const taskIdMatch = branch.match(/^task\/(.+?)-/);

return {
path: partial.path || '',
branch,
createdAt: partial.createdAt || new Date(),
taskId: taskIdMatch?.[1] || partial.taskId || 'unknown',
commit: partial.commit,
locked: partial.locked || false,
lockReason: partial.lockReason
};
}

🧹 Nitpick (assertive)

Complex taskId extraction logic could be simplified.

The completeWorktreeInfo method has complex logic for extracting taskId from branch names that could be made more robust.

Consider using a more explicit approach:

 private completeWorktreeInfo(partial: Partial<WorktreeInfo>): WorktreeInfo {
     const branch = partial.branch || 'unknown';
-    const taskIdMatch = branch.match(/^task\/(.+?)-/);
+    
+    // Try multiple patterns to extract taskId
+    let taskId: string | undefined;
+    
+    // First, check if it's already in activeWorktrees
+    for (const [id, info] of this.activeWorktrees) {
+        if (info.path === partial.path) {
+            taskId = id;
+            break;
+        }
+    }
+    
+    // If not found, extract from branch name
+    if (!taskId) {
+        const taskIdMatch = branch.match(/^task\/(.+?)-\d+$/);
+        taskId = taskIdMatch?.[1];
+    }
+    
+    // Finally, try to extract from path
+    if (!taskId && partial.path) {
+        const pathMatch = partial.path.match(/task-(.+?)$/);
+        taskId = pathMatch?.[1];
+    }
     
     return {
         path: partial.path || '',
         branch,
         createdAt: partial.createdAt || new Date(),
-        taskId: taskIdMatch?.[1] || partial.taskId || 'unknown',
+        taskId: taskId || 'unknown',
         commit: partial.commit,
         locked: partial.locked || false,
         lockReason: partial.lockReason
     };
 }
🤖 Prompt for AI Agents
In packages/workflow-engine/src/worktree/worktree-manager.ts around lines 323 to
336, the taskId extraction from the branch string is fragile and overly complex;
update the logic to first prefer partial.taskId when present, otherwise
normalize the branch (default to 'unknown') and use a clearer regex like
/^task\/([^\/-]+)/ to capture the task id (so it works for "task/ID",
"task/ID-..." and avoids empty matches), then fall back to 'unknown'; keep the
rest of the completed WorktreeInfo properties as-is and ensure locked defaults
to false.

Comment on lines +342 to +350
const cleanup = () => {
console.log('Cleaning up worktrees...');
this.cleanupAll(true).catch(console.error);
};

process.on('SIGINT', cleanup);
process.on('SIGTERM', cleanup);
process.on('exit', cleanup);
}

⚠️ Potential issue

Potential race condition in cleanup handlers.

Multiple cleanup handlers are registered for the same events, and the cleanup is called synchronously in the exit handler which could block the process from exiting properly.

The cleanup should be more defensive:

 private setupCleanupHandlers(): void {
+    let cleanupInProgress = false;
     const cleanup = () => {
+        if (cleanupInProgress) return;
+        cleanupInProgress = true;
         console.log('Cleaning up worktrees...');
-        this.cleanupAll(true).catch(console.error);
+        // Don't await in exit handler to avoid blocking
+        this.cleanupAll(true)
+            .catch(console.error)
+            .finally(() => { cleanupInProgress = false; });
     };
 
-    process.on('SIGINT', cleanup);
-    process.on('SIGTERM', cleanup);
-    process.on('exit', cleanup);
+    process.once('SIGINT', cleanup);
+    process.once('SIGTERM', cleanup);
+    process.once('exit', cleanup);
 }
🤖 Prompt for AI Agents
In packages/workflow-engine/src/worktree/worktree-manager.ts around lines 342 to
350, multiple handlers are adding non-idempotent cleanup calls and the exit
handler invokes async cleanup synchronously which can block exit; change
handlers to register once (process.once) not on, add a boolean "cleanupCalled"
guard so cleanup runs only once, make cleanup return a Promise and avoid
awaiting it inside the 'exit' handler (instead trigger it asynchronously via
setImmediate or process.nextTick and catch/log errors), and ensure any
long-running async work is awaited only in SIGINT/SIGTERM handlers while the
'exit' handler performs a minimal synchronous-safe teardown or just kicks off
the async cleanup without blocking process termination.


@coderabbitai coderabbitai bot left a comment


Review continued from previous batch...

Comment on lines +187 to +204
if (credentials.expiresAt) {
const expiresAt = new Date(credentials.expiresAt);
const now = new Date();
const hoursRemaining = Math.floor(
(expiresAt.getTime() - now.getTime()) / (1000 * 60 * 60)
);

if (hoursRemaining > 0) {
console.log(
chalk.gray(
` Expires: ${expiresAt.toLocaleString()} (${hoursRemaining} hours remaining)`
)
);
} else {
console.log(
chalk.yellow(` Token expired at: ${expiresAt.toLocaleString()}`)
);
}

⚠️ Potential issue

Expired/remaining time calculation is wrong near the hour boundary.

If <1h remains, hoursRemaining is 0 and you print “expired”. Compare timestamps instead and show a friendly delta.

-      if (hoursRemaining > 0) {
-        console.log(
-          chalk.gray(
-            `  Expires: ${expiresAt.toLocaleString()} (${hoursRemaining} hours remaining)`
-          )
-        );
-      } else {
-        console.log(
-          chalk.yellow(`  Token expired at: ${expiresAt.toLocaleString()}`)
-        );
-      }
+      const msRemaining = expiresAt.getTime() - now.getTime();
+      if (msRemaining > 0) {
+        const mins = Math.round(msRemaining / 60000);
+        const human = mins >= 60 ? `${Math.floor(mins / 60)}h ${mins % 60}m` : `${mins}m`;
+        console.log(chalk.gray(`  Expires: ${expiresAt.toLocaleString()} (${human} remaining)`));
+      } else {
+        console.log(chalk.yellow(`  Token expired at: ${expiresAt.toLocaleString()}`));
+      }
🤖 Prompt for AI Agents
In apps/cli/src/commands/auth.command.ts around lines 187-204, the code uses
Math.floor on hoursRemaining which yields 0 when under 1 hour and incorrectly
prints "expired"; change the check to compare timestamps (expiresAt.getTime() >
now.getTime()) and compute the millisecond delta, then format a friendly
remaining string (e.g., hours and minutes, or "< 1 hour" / "X minutes") to
display when delta > 0, otherwise print the expired message; ensure you avoid
using floor for the main existence check and derive human-readable remaining
time from the positive delta.


import { Command } from 'commander';
import chalk from 'chalk';
import path from 'node:path';

🧹 Nitpick (assertive)

Optional: align with repo preference to avoid node: scheme for core modules.

Past preference indicates not using node: imports. Switch to bare 'path' if you want consistency.

-import path from 'node:path';
+import path from 'path';
🤖 Prompt for AI Agents
In apps/cli/src/commands/workflow/workflow-list.command.ts around line 8, the
import uses the 'node:' scheme (import path from 'node:path') which the repo
prefers to avoid; change the import to the bare core module form by replacing
'node:path' with 'path' so the file imports path from 'path' to align with
repository style.

Comment on lines +207 to +209
paused: { icon: '⏸️', color: chalk.orange },
completed: { icon: '✅', color: chalk.green },
failed: { icon: '❌', color: chalk.red },

⚠️ Potential issue

Fix invalid Chalk color usage (chalk.orange is undefined).

Chalk has no orange convenience method. Use keyword/hex or a supported color.

-      paused: { icon: '⏸️', color: chalk.orange },
+      paused: { icon: '⏸️', color: chalk.keyword('orange') },
🤖 Prompt for AI Agents
In apps/cli/src/commands/workflow/workflow-list.command.ts around lines 207 to
209, the code uses chalk.orange which is not a valid Chalk method; replace it
with a supported color call such as chalk.keyword('orange') or
chalk.hex('#FFA500') (e.g., paused: { icon: '⏸️', color: chalk.keyword('orange')
}) to fix the invalid color usage.

Comment on lines +228 to +240
private formatDuration(start: Date, end: Date): string {
const diff = end.getTime() - start.getTime();
const minutes = Math.floor(diff / (1000 * 60));
const hours = Math.floor(minutes / 60);

if (hours > 0) {
return `${hours}h ${minutes % 60}m`;
} else if (minutes > 0) {
return `${minutes}m`;
} else {
return '<1m';
}
}

🛠️ Refactor suggestion

Make duration formatting resilient to string dates.

Workflow timestamps may be de/serialized; guard against strings to avoid runtime errors.

-  private formatDuration(start: Date, end: Date): string {
-    const diff = end.getTime() - start.getTime();
+  private formatDuration(start: Date | string, end: Date | string): string {
+    const startMs = new Date(start as any).getTime();
+    const endMs = new Date(end as any).getTime();
+    const diff = endMs - startMs;
🤖 Prompt for AI Agents
In apps/cli/src/commands/workflow/workflow-list.command.ts around lines 228 to
240, the formatDuration function assumes start and end are Date objects and will
throw if passed strings; coerce inputs to Date instances (e.g., if typeof start
=== 'string' or typeof end === 'string', wrap with new Date(...)), validate with
isNaN(date.getTime()) and return a clear fallback like '<invalid date>' if
either is invalid, and ensure you compute the difference only after
coercion/validation so string timestamps won't cause runtime errors.

Comment on lines +18 to +35
const statusConfig = {
done: {
color: chalk.green,
icon: String.fromCharCode(8730),
tableIcon: String.fromCharCode(8730)
}, // √
pending: { color: chalk.yellow, icon: 'o', tableIcon: 'o' },
'in-progress': {
color: chalk.hex('#FFA500'),
icon: String.fromCharCode(9654),
tableIcon: '>'
}, // ▶
deferred: { color: chalk.gray, icon: 'x', tableIcon: 'x' },
blocked: { color: chalk.red, icon: '!', tableIcon: '!' },
review: { color: chalk.magenta, icon: '?', tableIcon: '?' },
cancelled: { color: chalk.gray, icon: 'X', tableIcon: 'X' }
};


🧹 Nitpick (assertive)

Status icons/text diverge from existing scripts and miss completed.

Use the checkmark ✓ (U+2713) instead of √ and include completed mapping for back-compat with scripts/modules/ui.js.

-    done: {
-      color: chalk.green,
-      icon: String.fromCharCode(8730),
-      tableIcon: String.fromCharCode(8730)
-    }, // √
+    done: { color: chalk.green, icon: '✓', tableIcon: '✓' },
+    completed: { color: chalk.green, icon: '✓', tableIcon: '✓' },
@@
-  const simpleIcons = {
-    done: String.fromCharCode(8730), // √
+  const simpleIcons = {
+    done: '✓',
+    completed: '✓',

Also applies to: 42-55

🤖 Prompt for AI Agents
In apps/cli/src/utils/ui.ts around lines 18-35 (and also apply same change to
42-55), the status mapping uses the √ character (U+221A) for "done" and lacks a
"completed" key required for backward compatibility; update the "done" entry to
use the checkmark ✓ (U+2713) for both icon and tableIcon, and add a new
"completed" mapping that mirrors the "done" mapping (same color, icon, and
tableIcon) so existing scripts/modules/ui.js continue to work.

Comment on lines +313 to +318
} catch (error: any) {
if (error.code === 'ENOENT') {
throw new Error(`Tag ${tag} not found - file doesn't exist`);
}
throw error;
}

⚠️ Potential issue

Fix error handling pattern in catch block.

Same error type issue.

Apply this diff:

-		} catch (error: any) {
-			if (error.code === 'ENOENT') {
+		} catch (error) {
+			if ((error as NodeJS.ErrnoException).code === 'ENOENT') {
 				throw new Error(`Tag ${tag} not found - file doesn't exist`);
 			}
 			throw error;
🤖 Prompt for AI Agents
In packages/tm-core/src/storage/file-storage/file-storage.ts around lines 313 to
318, the catch block replaces the original ENOENT error with a plain Error which
loses the original error type and properties; instead preserve the original
error object (and its code) while updating its message and rethrowing it — e.g.,
for ENOENT assign the new descriptive message onto the caught error (or
Object.assign a copy) and then throw the original/augmented error so the
error.type/code remains intact.
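The approach the prompt describes, keeping the original errno-style error while improving its message, can be sketched like this. `rethrowWithContext` and `ErrnoLike` are illustrative names, not code from the PR:

```typescript
// Sketch: preserve the caught error object (including its `code` and stack)
// while replacing its message, instead of throwing a fresh Error that drops
// both. `tag` stands in for the surrounding variable in file-storage.ts.
type ErrnoLike = Error & { code?: string };

function rethrowWithContext(error: unknown, tag: string): never {
	if (error instanceof Error && (error as ErrnoLike).code === 'ENOENT') {
		error.message = `Tag ${tag} not found - file doesn't exist`;
		throw error; // code, stack, and the Error subtype all survive
	}
	throw error; // anything else propagates unchanged
}
```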

Comment on lines +361 to +366
} catch (error: any) {
if (error.code === 'ENOENT') {
throw new Error(`Tag ${oldTag} not found - file doesn't exist`);
}
throw error;
}

⚠️ Potential issue

Fix error handling pattern.

Same error type issue.

Apply this diff:

-		} catch (error: any) {
-			if (error.code === 'ENOENT') {
+		} catch (error) {
+			if ((error as NodeJS.ErrnoException).code === 'ENOENT') {
 				throw new Error(`Tag ${oldTag} not found - file doesn't exist`);
 			}
 			throw error;
🤖 Prompt for AI Agents
In packages/tm-core/src/storage/file-storage/file-storage.ts around lines
361-366, the catch currently throws a new Error for ENOENT which loses the
original error type and code; instead, update the handler to preserve the
original error: if error && (error as any).code === 'ENOENT', set (error as
Error).message = `Tag ${oldTag} not found - file doesn't exist` and rethrow the
original error; if the caught value isn't an Error object, create a new Error,
set its code property to 'ENOENT' and throw that.

Comment on lines +85 to +93
} catch (error: any) {
if (error.code === 'ENOENT') {
// Workflows file doesn't exist, start with empty state
console.debug('No workflows.json found, starting with empty state');
return;
}

console.warn('Failed to load workflow state:', error.message);
}

⚠️ Potential issue

Fix error handling: use proper type for file system errors.

Similar to other files, the error handling pattern uses error.code which doesn't exist on the base Error type.

Apply this diff:

-    } catch (error: any) {
-      if (error.code === 'ENOENT') {
+    } catch (error) {
+      if ((error as NodeJS.ErrnoException).code === 'ENOENT') {
         // Workflows file doesn't exist, start with empty state
         console.debug('No workflows.json found, starting with empty state');
         return;
       }
 
-      console.warn('Failed to load workflow state:', error.message);
+      console.warn('Failed to load workflow state:', (error as Error).message);

Comment on lines +279 to +284
await fs.unlink(this.stateFilePath);
} catch (error: any) {
if (error.code !== 'ENOENT') {
throw error;
}
}

⚠️ Potential issue

Fix error handling pattern in clearState.

Same error type issue.

Apply this diff:

-    } catch (error: any) {
-      if (error.code !== 'ENOENT') {
+    } catch (error) {
+      if ((error as NodeJS.ErrnoException).code !== 'ENOENT') {
         throw error;
       }
🤖 Prompt for AI Agents
In packages/workflow-engine/src/state/workflow-state-manager.ts around lines 279
to 284, the catch block uses a loosely typed error and accesses error.code
directly; change the handling to narrow the error type and safely check the
errno code before swallowing it. Replace the current catch with one that casts
error to NodeJS.ErrnoException (or checks for a code property via (error as
NodeJS.ErrnoException)?.code) and only suppress when code === 'ENOENT';
otherwise rethrow the original error.

}

// Optional: Could extend to maintain event log file
if (process.env.TASKMASTER_DEBUG) {

🧹 Nitpick (assertive)

Use proper environment variable check.

Any non-empty string (including 'false' or '0') is truthy, so checking the variable directly is inconsistent. Consider requiring explicit opt-in values such as 'true' or '1'.

-    if (process.env.TASKMASTER_DEBUG) {
+    if (process.env.TASKMASTER_DEBUG === 'true' || process.env.TASKMASTER_DEBUG === '1') {
🤖 Prompt for AI Agents
In packages/workflow-engine/src/state/workflow-state-manager.ts around line 301,
the code checks process.env.TASKMASTER_DEBUG directly which can be inconsistent;
change the condition to reliably detect truthy values (e.g., normalize and
compare to 'true' or coerce with a boolean) such as using
!!process.env.TASKMASTER_DEBUG or process.env.TASKMASTER_DEBUG?.toLowerCase()
=== 'true' so the debug flag behaves consistently across environments.
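A small helper along those lines (a hedged sketch; `isDebugEnabled` is a hypothetical name, and the flag name comes from the snippet above):

```typescript
// Sketch: treat only explicit opt-in values as "debug on", so
// TASKMASTER_DEBUG="" or TASKMASTER_DEBUG="false" no longer enables
// debug logging. Pass process.env at the call site.
function isDebugEnabled(env: Record<string, string | undefined>): boolean {
	const raw = env.TASKMASTER_DEBUG?.trim().toLowerCase();
	return raw === 'true' || raw === '1';
}
```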


@coderabbitai coderabbitai bot left a comment


Review continued from previous batch...

await this.showWorkflowStatus(workflowId, options);
}

} catch (error: any) {

⚠️ Potential issue

Fix incorrect TypeScript typing.

Using any in the catch block bypasses TypeScript's type checking. Use unknown instead for better type safety.

Apply this diff:

-    } catch (error: any) {
+    } catch (error) {
       ui.displayError(error.message || 'Failed to get workflow status');

Or if you need to access the message property:

-    } catch (error: any) {
+    } catch (error) {
+      const message = error instanceof Error ? error.message : 'Failed to get workflow status';
+      ui.displayError(message);
🤖 Prompt for AI Agents
In apps/cli/src/commands/workflow/workflow-status.command.ts around line 55, the
catch block types the error as `any`; change it to `unknown` to preserve type
safety and avoid bypassing the checker. Update the catch signature to `catch
(error: unknown)` and, if you need to read `error.message`, narrow the type
first (e.g., `if (error instanceof Error) { /* use error.message */ }` or
extract a safe message via `const msg = error && typeof (error as any).message
=== 'string' ? (error as any).message : String(error);`). Ensure subsequent
error handling uses the narrowed type or the safe message variable.

Comment on lines +149 to +154
// Handle Ctrl+C
process.on('SIGINT', () => {
clearInterval(interval);
console.log(chalk.yellow('\n👋 Stopped watching workflow status'));
process.exit(0);
});

⚠️ Potential issue

Process event handler may cause memory leak.

The SIGINT handler is added every time watchWorkflowStatus is called but never removed, which could lead to memory leaks if the method is called multiple times.

Store and clean up the event handler properly:

+  // Store handler reference for cleanup
+  const sigintHandler = () => {
+    clearInterval(interval);
+    console.log(chalk.yellow('\n👋 Stopped watching workflow status'));
+    process.exit(0);
+  };
+
   // Handle Ctrl+C
-  process.on('SIGINT', () => {
-    clearInterval(interval);
-    console.log(chalk.yellow('\n👋 Stopped watching workflow status'));
-    process.exit(0);
-  });
+  process.once('SIGINT', sigintHandler);
+
+  // Clean up handler when watch ends naturally
+  // (add this in the workflow end check logic)
🤖 Prompt for AI Agents
In apps/cli/src/commands/workflow/workflow-status.command.ts around lines 149 to
154, the SIGINT handler is registered on every call which can leak memory; make
the handler a named function, register it once per invocation, and remove it in
the cleanup path: create a const onSigint = () => { clearInterval(interval);
console.log(chalk.yellow('\n👋 Stopped watching workflow status'));
process.exit(0); }, register with process.on('SIGINT', onSigint) and ensure you
call process.off('SIGINT', onSigint) (and clearInterval) when stopping the
watcher or before re-registering to avoid duplicate listeners; alternatively use
process.once('SIGINT', onSigint) if one-time handling is acceptable.

});

// Keep the process alive
await new Promise(() => {});

🛠️ Refactor suggestion

Replace infinite promise with proper event handling.

Using an infinite promise to keep the process alive is an anti-pattern that makes the code harder to test and reason about.

Replace with a proper event-based approach:

-    // Keep the process alive
-    await new Promise(() => {});
+    // Return a promise that resolves when watching ends
+    return new Promise<void>((resolve) => {
+      // Resolve when workflow ends
+      const checkEnd = () => {
+        this.workflowManager!.getWorkflowStatus(workflowId);
+        // ... existing workflow status check logic
+        if (['completed', 'failed', 'cancelled', 'timeout'].includes(workflow.status)) {
+          clearInterval(interval);
+          resolve();
+        }
+      };
+      
+      // Also resolve on SIGINT
+      process.once('SIGINT', () => {
+        clearInterval(interval);
+        console.log(chalk.yellow('\n👋 Stopped watching workflow status'));
+        resolve();
+      });
+    });

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In apps/cli/src/commands/workflow/workflow-status.command.ts around line 157,
remove the infinite await new Promise(() => {}) and replace it with proper
event-driven shutdown handling: create a promise that resolves when the process
should exit (e.g., on 'SIGINT' and 'SIGTERM' signals or an AbortSignal/event
from the workflow service) and await that instead, wiring up listeners that
cleanly close any open resources and remove listeners after resolution; ensure
the promise rejects/handles errors from the workflow client and that you call
process.exit only after cleanup if needed.

pending: { icon: '⏳', color: chalk.yellow },
initializing: { icon: '🔄', color: chalk.blue },
running: { icon: '🚀', color: chalk.green },
paused: { icon: '⏸️', color: chalk.orange },

⚠️ Potential issue

Invalid color name 'orange' - should use 'yellow'.

The chalk library doesn't have an 'orange' color. This will cause a runtime error.

Apply this diff to fix the color name:

-      paused: { icon: '⏸️', color: chalk.orange },
+      paused: { icon: '⏸️', color: chalk.yellow },
🤖 Prompt for AI Agents
In apps/cli/src/commands/workflow/workflow-status.command.ts around line 298 the
code uses chalk.orange which is invalid and causes a runtime error; change the
color reference to chalk.yellow (replace chalk.orange with chalk.yellow) so the
paused state uses a supported chalk color.

Comment on lines +111 to +128
private setNestedProperty(
obj: any,
path: readonly string[],
value: any
): void {
const lastKey = path[path.length - 1];
const keys = path.slice(0, -1);

let current = obj;
for (const key of keys) {
if (!current[key]) {
current[key] = {};
}
current = current[key];
}

current[lastKey] = value;
}

🧹 Nitpick (assertive)

Type safety concern with 'any' type

The setNestedProperty method uses any type which bypasses TypeScript's type checking. This could lead to runtime errors if the wrong types are set.

Consider using a more type-safe approach:

 private setNestedProperty(
-  obj: any,
+  obj: Record<string, any>,
   path: readonly string[],
-  value: any
+  value: unknown
 ): void {
🤖 Prompt for AI Agents
In packages/tm-core/src/config/services/environment-config-provider.service.ts
around lines 111 to 128, the setNestedProperty method currently uses the unsafe
any type for obj and value; change the signature to be generic and type-safe
(e.g. <T, V> setNestedProperty(obj: T, path: readonly string[], value: V):
asserts obj is T & Record<string, any> or return T) and replace uses of any with
Record<string, unknown> or unknown, add narrow type guards when creating nested
objects (cast to Record<string, unknown> only after checking typeof === "object"
&& obj !== null), and ensure the final assignment casts appropriately (or
returns a new typed object) so TypeScript can enforce structure while preserving
runtime behavior.

Comment on lines +287 to +305
childProcess.on('exit', (code, signal) => {
process.status = code === 0 ? 'stopped' : 'crashed';

this.emitEvent('process.stopped', workflowId, taskId, {
pid: process.pid,
exitCode: code,
signal
});

// Cleanup
this.activeProcesses.delete(workflowId);
this.childProcesses.delete(workflowId);

const timeout = this.timeouts.get(workflowId);
if (timeout) {
clearTimeout(timeout);
this.timeouts.delete(workflowId);
}
});

🧹 Nitpick (assertive)

Potential memory leak in exit handler

The exit handler cleans up maps but doesn't remove the event listeners attached to stdout/stderr streams.

Consider removing all listeners when the process exits:

```diff
 // Handle process exit
 childProcess.on('exit', (code, signal) => {
   process.status = code === 0 ? 'stopped' : 'crashed';

   this.emitEvent('process.stopped', workflowId, taskId, {
     pid: process.pid,
     exitCode: code,
     signal
   });

+  // Remove all listeners to prevent memory leaks
+  childProcess.stdout?.removeAllListeners();
+  childProcess.stderr?.removeAllListeners();
+  childProcess.removeAllListeners();
+
   // Cleanup
   this.activeProcesses.delete(workflowId);
   this.childProcesses.delete(workflowId);
```
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
```diff
 childProcess.on('exit', (code, signal) => {
   process.status = code === 0 ? 'stopped' : 'crashed';

   this.emitEvent('process.stopped', workflowId, taskId, {
     pid: process.pid,
     exitCode: code,
     signal
   });

+  // Remove all listeners to prevent memory leaks
+  childProcess.stdout?.removeAllListeners();
+  childProcess.stderr?.removeAllListeners();
+  childProcess.removeAllListeners();
+
   // Cleanup
   this.activeProcesses.delete(workflowId);
   this.childProcesses.delete(workflowId);

   const timeout = this.timeouts.get(workflowId);
   if (timeout) {
     clearTimeout(timeout);
     this.timeouts.delete(workflowId);
   }
 });
```
🤖 Prompt for AI Agents
In packages/workflow-engine/src/process/process-sandbox.ts around lines 287 to
305, the exit handler cleans up maps but leaves stdout/stderr and other event
listeners attached to the child process, risking a memory leak; update the
handler to remove listeners by checking for childProcess.stdout and
childProcess.stderr and calling removeAllListeners() (or off() for specific
handlers) on them, and also call childProcess.removeAllListeners() to clear any
remaining process-level listeners before deleting maps/timeouts so all
references are released.
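The underlying mechanics are plain EventEmitter behavior; a minimal Node sketch of why `removeAllListeners()` is the right cleanup step here:

```typescript
import { EventEmitter } from 'events';

// Listeners stay referenced by the emitter until explicitly removed,
// which is what the suggested cleanup in the exit handler relies on.
const stream = new EventEmitter();
stream.on('data', (chunk) => console.log('first handler', chunk));
stream.on('data', (chunk) => console.log('second handler', chunk));

console.log(stream.listenerCount('data')); // 2 while attached

stream.removeAllListeners('data');
console.log(stream.listenerCount('data')); // 0 after cleanup
```

Child process stdout/stderr streams are EventEmitters too, so the same call drops any closures (and whatever they capture) attached to them.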

Comment on lines +368 to +377
```typescript
private setupCleanupHandlers(): void {
	const cleanup = () => {
		console.log('Cleaning up processes...');
		this.cleanupAll(true).catch(console.error);
	};

	process.on('SIGINT', cleanup);
	process.on('SIGTERM', cleanup);
	process.on('exit', cleanup);
}
```
🛠️ Refactor suggestion

Multiple cleanup handler registrations could cause issues

The cleanup handlers are registered in the constructor, which means if multiple ProcessSandbox instances are created, multiple handlers will be registered on the process object.

Use a singleton pattern or track registration state:

```diff
 export class ProcessSandbox extends EventEmitter {
   private config: ProcessSandboxConfig;
   private activeProcesses = new Map<string, WorkflowProcess>();
   private childProcesses = new Map<string, ChildProcess>();
   private timeouts = new Map<string, NodeJS.Timeout>();
+  private static cleanupHandlersRegistered = false;

   private setupCleanupHandlers(): void {
+    // Only register once globally
+    if (ProcessSandbox.cleanupHandlersRegistered) {
+      return;
+    }
+    ProcessSandbox.cleanupHandlersRegistered = true;
+
     const cleanup = () => {
       console.log('Cleaning up processes...');
       this.cleanupAll(true).catch(console.error);
     };
```
🤖 Prompt for AI Agents
packages/workflow-engine/src/process/process-sandbox.ts around lines 368 to 377:
the current setupCleanupHandlers registers process-level SIGINT/SIGTERM/exit
handlers per instance which leads to duplicate handlers if multiple
ProcessSandbox instances are created; change to register handlers only once by
adding a static/class-level boolean flag (e.g., handlersRegistered) checked
before registering, set it to true after registration, and ensure the cleanup
callback can call instance cleanupAll for all sandboxes (either by delegating to
a static manager that tracks instances or making cleanup call a static
cleanupAll that iterates instances); alternatively, remove listeners when an
instance is disposed by calling process.off for the bound handlers to avoid
accumulation.
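A self-contained sketch of the static-flag approach (the class name `Sandbox` is a stand-in for `ProcessSandbox`, and the cleanup body is stubbed out):

```typescript
// Stand-in for ProcessSandbox demonstrating once-only handler registration.
class Sandbox {
	private static handlersRegistered = false;

	constructor() {
		this.setupCleanupHandlers();
	}

	private setupCleanupHandlers(): void {
		// Only register once per process, no matter how many instances exist.
		if (Sandbox.handlersRegistered) return;
		Sandbox.handlersRegistered = true;

		const cleanup = () => {
			// Real code would clean up all live sandboxes here,
			// e.g. via a static registry of instances.
		};
		process.on('SIGINT', cleanup);
		process.on('SIGTERM', cleanup);
		process.on('exit', cleanup);
	}
}
```

Constructing several instances now adds exactly one set of process-level listeners instead of one per instance; the remaining design question is how the single handler reaches every live sandbox, which is why the prompt suggests a static registry.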

```typescript
 */

import { EventEmitter } from 'node:events';
import path from 'node:path';
```
🧹 Nitpick (assertive)

Remove unused imports; align with repo import style.

`path` and `WorkflowTimeoutError` are unused. Also, the team prefers avoiding `node:` scheme imports.

```diff
-import path from 'node:path';
@@
-  MaxConcurrentWorkflowsError,
-  WorkflowTimeoutError
+  MaxConcurrentWorkflowsError
```

Optionally (style):

```diff
-import { EventEmitter } from 'node:events';
+import { EventEmitter } from 'events';
```

Also applies to: 31-32

🤖 Prompt for AI Agents
In packages/workflow-engine/src/task-execution/task-execution-manager.ts around
lines 7 and also lines 31-32, remove the unused imports: drop the `path` import
and the `WorkflowTimeoutError` import, and change any remaining `node:`-prefixed
imports to the repo preferred non-`node:` style (e.g., use 'path' instead of
'node:path' where a path import is actually needed elsewhere); ensure import
ordering and linting rules remain satisfied after removal.

Comment on lines +44 to +49
```typescript
private config: TaskExecutionManagerConfig;
private worktreeManager: WorktreeManager;
private processSandbox: ProcessSandbox;
private stateManager: WorkflowStateManager;
private initialized = false;
```
🛠️ Refactor suggestion

Guard against concurrent starts for the same task (race condition).

Two overlapping `startTaskExecution` calls for the same task can both pass the `hasActiveWorkflow()` check and proceed, creating duplicate workflows/worktrees. Add a per-task in-flight guard.

```diff
 export class TaskExecutionManager extends EventEmitter {
   private config: TaskExecutionManagerConfig;
   private worktreeManager: WorktreeManager;
   private processSandbox: ProcessSandbox;
   private stateManager: WorkflowStateManager;
   private initialized = false;
+  private startingTasks = new Set<string>();
@@
-    try {
+    if (this.startingTasks.has(task.id)) {
+      throw new WorkflowError(
+        `Task ${task.id} already has an active workflow`,
+        'TASK_ALREADY_EXECUTING',
+        undefined,
+        task.id
+      );
+    }
+    this.startingTasks.add(task.id);
+    try {
       // Create worktree
@@
-    } catch (error) {
+    } catch (error) {
       throw new WorkflowError(
         `Failed to start task execution for ${task.id}`,
         'TASK_EXECUTION_START_ERROR',
         undefined,
         task.id,
         error as Error
       );
-    }
+    } finally {
+      this.startingTasks.delete(task.id);
+    }
```

Also applies to: 117-199

🤖 Prompt for AI Agents
In packages/workflow-engine/src/task-execution/task-execution-manager.ts around
lines 44-49 (and similarly for the logic between lines 117-199), there is a race
where two concurrent startTaskExecution calls can both observe no active
workflow and create duplicate workflows/worktrees; add a per-task in-flight
guard (e.g., a Map<string, Promise|boolean>) keyed by taskId that is checked at
the very start of startTaskExecution and immediately set when beginning entry
into the start path; if an entry exists, short-circuit (throw or return a
controlled error/result); ensure the guard is cleared in all completion and
error paths (finally) so subsequent starts can proceed; apply this same guard
check to the other start-related flow in lines 117-199 to prevent duplicate
concurrent starts for the same task.
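The guard pattern itself can be isolated into a small sketch (class and method names here are hypothetical, not the `TaskExecutionManager` API):

```typescript
// Hypothetical in-flight guard: a second concurrent start for the same
// task id fails fast while the first is still running.
class InFlightGuard {
	private starting = new Set<string>();

	async run<T>(taskId: string, work: () => Promise<T>): Promise<T> {
		if (this.starting.has(taskId)) {
			throw new Error(`Task ${taskId} already has an active workflow`);
		}
		this.starting.add(taskId);
		try {
			return await work();
		} finally {
			// Cleared on success and failure paths alike, so later starts proceed.
			this.starting.delete(taskId);
		}
	}
}
```

The key detail is that the `Set` check and insertion happen synchronously before the first `await`, so two overlapping calls cannot both slip past the check.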

Comment on lines +387 to +411
```typescript
private async cleanupStaleWorkflows(): Promise<void> {
	const workflows = this.stateManager.listWorkflows();

	for (const workflow of workflows) {
		const isStale =
			workflow.status === 'running' &&
			!this.processSandbox.isProcessRunning(`workflow-${workflow.taskId}`);

		if (isStale) {
			console.log(`Cleaning up stale workflow for task ${workflow.taskId}`);

			try {
				await this.stateManager.updateWorkflowStatus(
					`workflow-${workflow.taskId}`,
					'failed'
				);

				// Try to clean up worktree
				await this.worktreeManager.removeWorktree(workflow.taskId, true);
			} catch (error) {
				console.error(`Failed to cleanup stale workflow:`, error);
			}
		}
	}
}
```
⚠️ Potential issue

Stale-workflow cleanup uses the wrong identifier (guaranteed miss).

`cleanupStaleWorkflows()` checks and updates using `workflow-${workflow.taskId}`, but processes and state are keyed by the real workflowId returned at registration (e.g., `workflow-<taskId>-<timestamp>-<rand>`). This will never match, so stale workflows aren't cleaned.

Apply this fix (requires exposing IDs from state):

```diff
-    const workflows = this.stateManager.listWorkflows();
-
-    for (const workflow of workflows) {
-      const isStale =
-        workflow.status === 'running' &&
-        !this.processSandbox.isProcessRunning(`workflow-${workflow.taskId}`);
-
-      if (isStale) {
-        console.log(`Cleaning up stale workflow for task ${workflow.taskId}`);
-
-        try {
-          await this.stateManager.updateWorkflowStatus(
-            `workflow-${workflow.taskId}`,
-            'failed'
-          );
-
-          // Try to clean up worktree
-          await this.worktreeManager.removeWorktree(workflow.taskId, true);
-        } catch (error) {
-          console.error(`Failed to cleanup stale workflow:`, error);
-        }
-      }
-    }
+    const entries = this.stateManager.listWorkflowsWithIds();
+    for (const { id, context } of entries) {
+      const isStale =
+        context.status === 'running' &&
+        !this.processSandbox.isProcessRunning(id);
+      if (isStale) {
+        console.log(`Cleaning up stale workflow for task ${context.taskId}`);
+        try {
+          await this.stateManager.updateWorkflowStatus(id, 'failed');
+          await this.worktreeManager.removeWorktree(context.taskId, true);
+        } catch (error) {
+          console.error(`Failed to cleanup stale workflow:`, error);
+        }
+      }
+    }
```

Add this helper to WorkflowStateManager (outside this file) to expose IDs:

```typescript
// packages/workflow-engine/src/state/workflow-state-manager.ts
listWorkflowsWithIds(): Array<{ id: string; context: WorkflowExecutionContext }> {
	return Array.from(this.activeWorkflows.entries()).map(([id, context]) => ({ id, context }));
}
```
}

@coderabbitai coderabbitai bot left a comment
Review continued from previous batch...

Comment on lines +109 to +129
```typescript
export interface StorageSettings
	extends Omit<RuntimeStorageConfig, 'apiConfigured'> {
	/** Base path for file storage */
	basePath?: string;
	/**
	 * Indicates whether API is configured
	 * @computed Derived automatically from presence of apiEndpoint or apiAccessToken
	 * @internal Should not be set manually in user config - computed by ConfigManager
	 */
	readonly apiConfigured?: boolean;
	/** Enable automatic backups */
	enableBackup: boolean;
	/** Maximum number of backups to retain */
	maxBackups: number;
	/** Enable compression for storage */
	enableCompression: boolean;
	/** File encoding for text files */
	encoding: BufferEncoding;
	/** Enable atomic file operations */
	atomicOperations: boolean;
}
```
💡 Verification agent

🧩 Analysis chain

Unify naming and remove duplicate field.

  • Rename `atomicOperations` → `atomicWrites` to match `StorageConfig`.
  • `basePath` is already included via `extends Omit<RuntimeStorageConfig, 'apiConfigured'>`; redeclaration is redundant.

Apply this diff:

```diff
 export interface StorageSettings
   extends Omit<RuntimeStorageConfig, 'apiConfigured'> {
-  /** Base path for file storage */
-  basePath?: string;
   /**
    * Indicates whether API is configured
    * @computed Derived automatically from presence of apiEndpoint or apiAccessToken
    * @internal Should not be set manually in user config - computed by ConfigManager
    */
   readonly apiConfigured?: boolean;
   /** Enable automatic backups */
   enableBackup: boolean;
   /** Maximum number of backups to retain */
   maxBackups: number;
   /** Enable compression for storage */
   enableCompression: boolean;
   /** File encoding for text files */
   encoding: BufferEncoding;
-  /** Enable atomic file operations */
-  atomicOperations: boolean;
+  /** Enable atomic file writes */
+  atomicWrites: boolean;
 }
```

🏁 Script executed:

```shell
#!/bin/bash
# Find all references to the old name to update
rg -n "atomicOperations" -C2 --type ts
```

Length of output: 1379


🏁 Script executed:

```shell
#!/bin/bash
set -eo pipefail

echo "Searching for RuntimeStorageConfig occurrences..."
rg -n -S --type ts "RuntimeStorageConfig" -g '!**/node_modules/**' -C3 || true

echo
echo "Searching specifically for interface/type declaration of RuntimeStorageConfig..."
rg -n -S --type ts "interface RuntimeStorageConfig\b|type RuntimeStorageConfig\b" -g '!**/node_modules/**' -C3 || true

FILES=$(rg -n --type ts "interface RuntimeStorageConfig\b|type RuntimeStorageConfig\b" -g '!**/node_modules/**' -l || true)
if [ -n "$FILES" ]; then
  for f in $FILES; do
    echo
    echo "----- Showing $f (first 300 lines) -----"
    sed -n '1,300p' "$f" || true
  done
fi

echo
echo "Searching for basePath occurrences..."
rg -n -S --type ts "\bbasePath\b" -g '!**/node_modules/**' -C3 || true

echo
echo "Searching for atomicWrites occurrences..."
rg -n -S --type ts "\batomicWrites\b" -g '!**/node_modules/**' -C3 || true
```

Length of output: 958


🏁 Script executed:

```shell
#!/bin/bash
set -eo pipefail

echo "pwd: $(pwd)"
echo "Top-level repo entries:"
ls -la | sed -n '1,200p'

echo
echo "Search for any declaration or usage of 'RuntimeStorageConfig' (no type filters):"
rg -n "RuntimeStorageConfig" -S -C3 || true

echo
echo "Search for exact interface/type declaration of RuntimeStorageConfig:"
rg -n --multiline "interface\s+RuntimeStorageConfig\b|type\s+RuntimeStorageConfig\b" -S -C3 || true

echo
echo "Print packages/tm-core/src/interfaces/configuration.interface.ts to confirm StorageSettings content:"
sed -n '1,260p' packages/tm-core/src/interfaces/configuration.interface.ts || true

echo
echo "Print config loader files that referenced atomicOperations earlier:"
sed -n '1,260p' packages/tm-core/src/config/services/config-loader.service.ts || true
echo "---- spec ----"
sed -n '1,260p' packages/tm-core/src/config/services/config-loader.service.spec.ts || true

echo
echo "Search for 'basePath' occurrences repository-wide:"
rg -n "\bbasePath\b" -S -C3 || true

echo
echo "Search for 'atomicWrites' occurrences repository-wide:"
rg -n "\batomicWrites\b" -S -C3 || true
```

Length of output: 45441


Rename `atomicOperations` → `atomicWrites` and remove duplicate `basePath`

`RuntimeStorageConfig` already defines `basePath`, and the storage interface uses `atomicWrites`; update `StorageSettings` and all usages to match.

  • Update packages/tm-core/src/interfaces/configuration.interface.ts: remove the duplicated `basePath` declaration from `StorageSettings` and rename `atomicOperations` → `atomicWrites` (align optionality with storage.interface.ts).
  • Update packages/tm-core/src/config/services/config-loader.service.ts: replace `atomicOperations` → `atomicWrites` in `getDefaultConfig`.
  • Update packages/tm-core/src/config/services/config-loader.service.spec.ts: update test expectations to `atomicWrites`.

Apply this patch to configuration.interface.ts:

```diff
 export interface StorageSettings
   extends Omit<RuntimeStorageConfig, 'apiConfigured'> {
-	/** Base path for file storage */
-	basePath?: string;
 	/**
 	 * Indicates whether API is configured
 	 * @computed Derived automatically from presence of apiEndpoint or apiAccessToken
 	 * @internal Should not be set manually in user config - computed by ConfigManager
 	 */
 	readonly apiConfigured?: boolean;
 	/** Enable automatic backups */
 	enableBackup: boolean;
 	/** Maximum number of backups to retain */
 	maxBackups: number;
 	/** Enable compression for storage */
 	enableCompression: boolean;
 	/** File encoding for text files */
 	encoding: BufferEncoding;
-	/** Enable atomic file operations */
-	atomicOperations: boolean;
+	/** Enable atomic file writes */
+	atomicWrites?: boolean;
 }
```
📝 Committable suggestion


Suggested change
```diff
 export interface StorageSettings
 	extends Omit<RuntimeStorageConfig, 'apiConfigured'> {
-	/** Base path for file storage */
-	basePath?: string;
 	/**
 	 * Indicates whether API is configured
 	 * @computed Derived automatically from presence of apiEndpoint or apiAccessToken
 	 * @internal Should not be set manually in user config - computed by ConfigManager
 	 */
 	readonly apiConfigured?: boolean;
 	/** Enable automatic backups */
 	enableBackup: boolean;
 	/** Maximum number of backups to retain */
 	maxBackups: number;
 	/** Enable compression for storage */
 	enableCompression: boolean;
 	/** File encoding for text files */
 	encoding: BufferEncoding;
-	/** Enable atomic file operations */
-	atomicOperations: boolean;
+	/** Enable atomic file writes */
+	atomicWrites?: boolean;
 }
```
🤖 Prompt for AI Agents
In packages/tm-core/src/interfaces/configuration.interface.ts around lines 109
to 129, remove the duplicate basePath property (since RuntimeStorageConfig
already defines it) and rename atomicOperations to atomicWrites, matching the
name and optionality/type used in storage.interface.ts; then update all usages
accordingly — specifically, in
packages/tm-core/src/config/services/config-loader.service.ts replace
atomicOperations → atomicWrites in getDefaultConfig, and update
packages/tm-core/src/config/services/config-loader.service.spec.ts to expect
atomicWrites instead of atomicOperations.
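The redundancy the review points at is a general property of `Omit`-based extension; a minimal sketch with stand-in types (`RuntimeStorageConfig` here is simplified, not the real tm-core interface):

```typescript
// Simplified stand-ins for the tm-core interfaces discussed above.
interface RuntimeStorageConfig {
	type: 'file' | 'api';
	basePath?: string;
	apiEndpoint?: string;
	apiConfigured?: boolean;
}

// `basePath` flows through Omit automatically, so redeclaring it in the
// extending interface adds nothing; only new or overridden members belong here.
interface StorageSettings extends Omit<RuntimeStorageConfig, 'apiConfigured'> {
	readonly apiConfigured?: boolean;
	atomicWrites?: boolean;
}

const settings: StorageSettings = {
	type: 'file',
	basePath: '/tmp/store', // accepted via the inherited declaration
	atomicWrites: true
};
```

Omitting only `apiConfigured` lets the extending interface re-introduce it with different modifiers (here `readonly`) while every other member, `basePath` included, is inherited unchanged.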
