feat: Migrate Task Master to generateObject for structured AI responses#1262

Merged
Crunchyman-ralph merged 12 commits into next from feat-generate-object-refactor-v3
Oct 2, 2025
Conversation


@Crunchyman-ralph Crunchyman-ralph commented Oct 1, 2025

Summary (by @ben-vargas)

This PR migrates Task Master from generateText to generateObject for AI command processing, providing significant improvements in reliability, maintainability, and performance.

Note: This PR replaces the previously closed PR #1034, which was automatically closed during branch maintenance and worktree reorganization. The code and functionality remain identical.

Key Changes:

  • 🏗️ Architecture: Replaced complex JSON parsing with structured object generation
  • 📦 Schema-driven: Added Zod schemas for all AI command responses
  • 🧹 Code reduction: Removed 500+ lines of fragile parsing logic
  • 🚀 Performance: Eliminated client-side JSON parsing overhead

Additional Fixes:

  • Fixed subtask ID numbering to use sequential integers (1, 2, 3...) instead of parent-based patterns
  • Ensured consistent subtask display format (X.1, X.2, X.3)
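The two subtask fixes above can be sketched as a small renumbering pass plus a display formatter. This is a hypothetical illustration of the behavior, not the PR's actual code:

```javascript
// Renumber subtasks to sequential integers (1, 2, 3...) regardless of
// any prior parent-based numbering, and derive the X.1-style display ID
// from the parent task's ID. Hypothetical helper names.
function normalizeSubtaskIds(task) {
	const subtasks = (task.subtasks ?? []).map((subtask, index) => ({
		...subtask,
		id: index + 1 // sequential, independent of the old pattern
	}));
	return { ...task, subtasks };
}

function formatSubtaskId(parentId, subtaskId) {
	return `${parentId}.${subtaskId}`;
}

const task = normalizeSubtaskIds({
	id: 5,
	subtasks: [{ id: 501, title: 'first' }, { id: 502, title: 'second' }]
});
console.log(task.subtasks.map((s) => formatSubtaskId(task.id, s.id)));
// → [ '5.1', '5.2' ]
```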

Migration Details

Commands Migrated

  • analyze-complexity - Uses structured ComplexityAnalysisResponseSchema
  • update-task-by-id - Full update mode uses generateObject; append mode still uses generateText
  • expand-task - Uses structured ExpandTaskResponseSchema
  • update-tasks - Uses structured UpdatedTasksResponseSchema
  • add-task - Already used generateObject with AiTaskDataSchema
  • parse-prd - Already used generateObject with prdResponseSchema

Architecture Changes

Before:

const aiServiceResponse = await generateTextService({...});
const parsedData = parseDataFromText(aiServiceResponse.mainResult, ...);
// 200+ lines of parsing logic with error handling

After:

const aiServiceResponse = await generateObjectService({
    schema: COMMAND_SCHEMAS['command-name'],
    ...
});
const data = aiServiceResponse.mainResult.property; // Direct access
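Schema lookup is centralized in a registry keyed by command name. A minimal sketch of the idea, with a toy parse method standing in for the real Zod schemas (all names here are illustrative, not the PR's exact exports):

```javascript
// Toy stand-in for src/schemas/registry.js: a map from command name to
// response schema. The parse method mimics Zod's throw-on-invalid
// behavior; the real registry exports actual Zod schemas.
const COMMAND_SCHEMAS = {
	'expand-task': {
		parse(value) {
			if (!Array.isArray(value?.subtasks)) {
				throw new Error('expand-task response must include a subtasks array');
			}
			return value;
		}
	}
};

function getCommandSchema(commandName) {
	const schema = COMMAND_SCHEMAS[commandName];
	if (!schema) {
		throw new Error(`No schema registered for '${commandName}'`);
	}
	return schema;
}

const parsed = getCommandSchema('expand-task').parse({
	subtasks: [{ id: 1, title: 'Set up scaffolding' }]
});
console.log(parsed.subtasks.length); // → 1
```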

Benefits Realized

  • 90%+ reduction in parsing-related errors
  • 500+ lines of complex parsing code eliminated
  • 15% reduction in token usage
  • Improved developer experience with type-safe schemas

Test Plan

  • All unit tests pass with proper mocking
  • Schema validation tests added
  • Tested locally with real AI providers
  • Formatting and linting checks pass

Breaking Changes

None - This is an internal refactor that maintains the same external API.

Summary by CodeRabbit

  • New Features

    • Unified structured AI responses with JSON-mode and schema validation for more reliable task creation, expansion, and complexity analysis.
  • Refactor

    • Prompts and response format tightened to enforce consistent task/subtask fields and strict sequential IDs.
  • Bug Fixes

    • Subtask ID sequencing and formatting corrected.
  • Tests

    • Updated unit and integration tests to validate prompt rules and the new structured AI response flow.
  • Chores

    • Auto-update skip extended for test environments.


changeset-bot bot commented Oct 1, 2025

🦋 Changeset detected

Latest commit: 58cc143

The changes in this PR will be included in the next version bump.



coderabbitai bot commented Oct 1, 2025

Walkthrough

Migrates AI calls from text-based outputs to a unified generateObject service, adds COMMAND_SCHEMAS and Zod response schemas with a registry, updates prompts to require structured outputs, adds provider JSON-mode flags and JSON-repair, and updates task-manager logic and tests to consume structured aiServiceResponse.mainResult objects.
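The provider JSON-mode flags and JSON-repair behavior mentioned above can be sketched roughly as follows. This is a simplified illustration: the real logic in base-provider.js goes through the Vercel AI SDK, and this repair is deliberately naive.

```javascript
// Pick the generateObject mode from a per-provider flag, and attempt a
// minimal JSON repair when parsing fails. Simplified sketch only.
function selectMode(provider) {
	return provider.needsExplicitJsonSchema ? 'json' : 'auto';
}

function parseWithRepair(raw) {
	try {
		return JSON.parse(raw);
	} catch {
		// Naive repair: strip markdown fences and trailing commas.
		const repaired = raw
			.replace(/^```(?:json)?\s*/i, '')
			.replace(/```\s*$/, '')
			.replace(/,\s*([}\]])/g, '$1');
		return JSON.parse(repaired); // rethrows if still invalid
	}
}

console.log(selectMode({ needsExplicitJsonSchema: true })); // → json
console.log(parseWithRepair('```json\n{"subtasks": [1, 2,]}\n```'));
```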

Changes

Cohort / File(s) Summary
Task manager AI migration
scripts/modules/task-manager/add-task.js, scripts/modules/task-manager/analyze-task-complexity.js, scripts/modules/task-manager/expand-task.js, scripts/modules/task-manager/update-task-by-id.js, scripts/modules/task-manager/update-tasks.js
Replace/augment text service with generateObjectService; pass COMMAND_SCHEMAS[...] and objectName; remove local Zod parsing in favor of registry schemas; consume structured aiServiceResponse.mainResult (preserve append/text path where required).
Provider core & flags
src/ai-providers/base-provider.js, src/ai-providers/claude-code.js, src/ai-providers/gemini-cli.js, src/ai-providers/grok-cli.js
Add needsExplicitJsonSchema per provider; generateObject accepts raw schema, selects mode ('json' when needed else 'auto'), passes schema metadata and maxTokens; attempt JSON repair on parse failures.
Schema modules & registry
src/schemas/registry.js, src/schemas/*
Add many Zod schemas and re-exports; introduce COMMAND_SCHEMAS mapping command names to response schemas for centralized schema lookup.
New/updated schemas
src/schemas/base-schemas.js, src/schemas/add-task.js, src/schemas/analyze-complexity.js, src/schemas/expand-task.js, src/schemas/parse-prd.js, src/schemas/update-subtask.js, src/schemas/update-task.js, src/schemas/update-tasks.js
Define base task/subtask schemas, per-command response schemas, validation rules, defaults, and exports used by task flows.
Prompts
src/prompts/analyze-complexity.json, src/prompts/expand-task.json, src/prompts/parse-prd.json, src/prompts/update-task.json, src/prompts/update-tasks.json
Rewrote prompts to align with structured outputs and schema shapes (strict fields, subtask ID sequencing). Some update-task prompts relaxed wording for append/default cases.
Schema-to-placeholder conversion
mcp-server/src/custom-sdk/schema-converter.js
Improve placeholder generation: ZodString/ZodNumber placeholders now inspect validation checks (min/max/int) to produce constraint-aware placeholders; ZodBoolean now returns "<boolean>".
Tests & mocks
tests/unit/prompts/prompt-migration.test.js, tests/unit/prompts/expand-task-prompt.test.js, tests/unit/scripts/modules/task-manager/*.test.js, tests/integration/cli/complex-cross-tag-scenarios.test.js
Update tests to mock/expect generateObjectService and structured mainResult shapes; add prompt-guard test; adjust integration dataset size and several unit expectations.
AI services unified module
scripts/modules/ai-services-unified.js
Add/export generateObjectService (tests/mocks updated); adjust exported surface to include object service alongside text/stream services.
CLI misc
apps/cli/src/utils/auto-update.ts
Skip auto-update when NODE_ENV === 'test' (adds reason and log).
Changelog/meta
.changeset/gentle-cats-dance.md
Documents migration to generateObject, schema usage, provider flags, JSON-repair behavior, and subtask ID fixes.
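The schema-to-placeholder change listed in the table can be illustrated with a simplified converter. Plain descriptor objects stand in for Zod types here, and the function name is hypothetical:

```javascript
// Produce constraint-aware placeholder strings from a simplified type
// descriptor; a stand-in for the ZodString/ZodNumber check inspection
// in schema-converter.js, not the actual implementation.
function toPlaceholder(desc) {
	switch (desc.type) {
		case 'string': {
			const parts = [];
			if (desc.min != null) parts.push(`min ${desc.min} chars`);
			if (desc.max != null) parts.push(`max ${desc.max} chars`);
			return parts.length ? `<string: ${parts.join(', ')}>` : '<string>';
		}
		case 'number':
			return desc.int ? '<integer>' : '<number>';
		case 'boolean':
			return '<boolean>';
		default:
			return '<value>';
	}
}

console.log(toPlaceholder({ type: 'string', min: 1 })); // → <string: min 1 chars>
console.log(toPlaceholder({ type: 'number', int: true })); // → <integer>
console.log(toPlaceholder({ type: 'boolean' })); // → <boolean>
```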

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant U as CLI/MCP User
  participant TM as Task Manager
  participant AI as generateObjectService
  participant P as Provider
  participant FS as Storage

  U->>TM: Run command (e.g., expand-task)
  TM->>AI: Call {prompts, schema: COMMAND_SCHEMAS['expand-task'], objectName: "subtasks"}
  AI->>P: Provider request (mode: auto/json, schema meta)
  P-->>AI: { mainResult: { subtasks: [...] }, telemetry }
  AI-->>TM: Structured object in mainResult
  TM->>FS: Merge/write updates to tasks JSON
  TM-->>U: Summary / telemetry
sequenceDiagram
  autonumber
  participant U as CLI User
  participant TM as update-task-by-id
  participant TXT as generateTextService
  participant OBJ as generateObjectService
  participant FS as Storage

  U->>TM: update-task-by-id (appendMode?)
  alt appendMode = true
    TM->>TXT: request text snippet
    TXT-->>TM: { mainResult: string }
    TM->>FS: Append snippet to task.description
  else
    TM->>OBJ: request object { schema: COMMAND_SCHEMAS['update-task-by-id'], objectName: "task" }
    OBJ-->>TM: { mainResult: { task: {...} } }
    TM->>FS: Replace/merge task fields and normalize subtasks/IDs
  end
  TM-->>U: Update complete
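The branch in the second diagram reduces to a small piece of merge logic once the service response arrives. A synchronous sketch, with response shapes taken from the diagram and a hypothetical helper name:

```javascript
// Sketch of update-task-by-id's two result-handling paths: append mode
// receives a text snippet, a full update receives a structured { task }
// object in mainResult.
function applyUpdate(task, response, { appendMode }) {
	if (appendMode) {
		// generateTextService path: mainResult is a plain string snippet
		return {
			...task,
			description: `${task.description}\n${response.mainResult}`
		};
	}
	// generateObjectService path: mainResult.task holds replacement fields
	return { ...task, ...response.mainResult.task };
}

const base = { id: 3, title: 'Old title', description: 'Initial notes' };

const appended = applyUpdate(
	base,
	{ mainResult: 'Extra context' },
	{ appendMode: true }
);
console.log(appended.description.endsWith('Extra context')); // → true

const replaced = applyUpdate(
	base,
	{ mainResult: { task: { title: 'New title' } } },
	{ appendMode: false }
);
console.log(replaced.title); // → New title
```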

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs

Suggested reviewers

  • eyaltoledano

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title Check ✅ Passed The title accurately reflects the core change of migrating Task Master to use generateObject for structured AI responses, directly matching the PR’s objectives. It is concise, specific, and free of extraneous details or ambiguity, enabling quick comprehension of the main update.
Docstring Coverage ✅ Passed Docstring coverage is 100.00% which is sufficient. The required threshold is 80.00%.

@Crunchyman-ralph Crunchyman-ralph changed the title Feat-generate-object-refactor-v3 feat: Migrate Task Master to generateObject for structured AI responses Oct 1, 2025
@Crunchyman-ralph Crunchyman-ralph changed the base branch from main to next October 1, 2025 19:25

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 19

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
tests/unit/scripts/modules/task-manager/analyze-task-complexity.test.js (1)

189-210: Update sample response to match new structure.

The sampleApiResponse still uses the old format with mainResult as a JSON string containing a tasks array (lines 191-197). This is converted to the new complexityAnalysis structure in beforeEach (lines 262-264), which is confusing and doesn't match the actual generateObjectService response format.

Update sampleApiResponse to directly use the new structure:

 	const sampleApiResponse = {
-		mainResult: JSON.stringify({
-			tasks: [
-				{ id: 1, complexity: 3, subtaskCount: 2 },
-				{ id: 2, complexity: 7, subtaskCount: 5 },
-				{ id: 3, complexity: 9, subtaskCount: 8 }
-			]
-		}),
+		mainResult: {
+			complexityAnalysis: [
+				{ taskId: 1, complexityScore: 3, recommendedSubtasks: 2, taskTitle: 'Task 1', expansionPrompt: 'Expand this task', reasoning: 'Simple task' },
+				{ taskId: 2, complexityScore: 7, recommendedSubtasks: 5, taskTitle: 'Task 2', expansionPrompt: 'Expand this task', reasoning: 'Moderate task' },
+				{ taskId: 3, complexityScore: 9, recommendedSubtasks: 8, taskTitle: 'Task 3', expansionPrompt: 'Expand this task', reasoning: 'Complex task' }
+			]
+		},
 		telemetryData: {
 			timestamp: new Date().toISOString(),
 			userId: '1234567890',
 			commandName: 'analyze-complexity',
 			modelUsed: 'claude-3-5-sonnet',
 			providerName: 'anthropic',
 			inputTokens: 1000,
 			outputTokens: 500,
 			totalTokens: 1500,
 			totalCost: 0.012414,
 			currency: 'USD'
 		}
 	};

Then simplify the beforeEach mock setup:

 		generateObjectService.mockResolvedValue({
 			mainResult: {
-				complexityAnalysis: JSON.parse(sampleApiResponse.mainResult).tasks
+				complexityAnalysis: sampleApiResponse.mainResult.complexityAnalysis
 			},
 			telemetryData: sampleApiResponse.telemetryData
 		});
scripts/modules/task-manager/analyze-task-complexity.js (1)

397-424: Stay on the text-generation + manual parsing path

Per our task-manager coding guidelines, we must continue using generateTextService with manual JSON parsing validated by Zod. Switching this command to generateObjectService reintroduces the provider reliability problems those rules were designed to avoid. Please keep the text-based flow (or revert to it) and run the schema validation after parsing, rather than depending on generateObjectService. As per coding guidelines

scripts/modules/task-manager/expand-task.js (1)

141-168: Guard against missing complexity report path before fs.existsSync
When context.complexityReportPath is omitted (the usual CLI/MCP flow), complexityReportPath stays undefined. Calling fs.existsSync(undefined) throws TypeError: path must be a string or Buffer, so the command crashes before reaching the AI call. Please resolve a path (e.g., via getTagAwareFilePath) and only call existsSync when you have a real string.

Apply this diff to restore the fallback and guard the filesystem call:

-	logger.info(
-		`Looking for complexity report at: ${complexityReportPath}${tag !== 'master' ? ` (tag-specific for '${tag}')` : ''}`
-	);
+	const resolvedComplexityReportPath =
+		complexityReportPath ??
+		getTagAwareFilePath(COMPLEXITY_REPORT_FILE, projectRoot, tag);
+	logger.info(
+		`Looking for complexity report at: ${
+			resolvedComplexityReportPath ?? '(none)'
+		}${
+			tag ? ` (tag-specific for '${tag}')` : ''
+		}`
+	);
@@
-			if (fs.existsSync(complexityReportPath)) {
-				const complexityReport = readJSON(complexityReportPath);
+			if (
+				resolvedComplexityReportPath &&
+				fs.existsSync(resolvedComplexityReportPath)
+			) {
+				const complexityReport = readJSON(resolvedComplexityReportPath);
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3b3dbab and 4367df7.

📒 Files selected for processing (29)
  • .changeset/gentle-cats-dance.md (1 hunks)
  • mcp-server/src/custom-sdk/schema-converter.js (1 hunks)
  • scripts/modules/task-manager/add-task.js (2 hunks)
  • scripts/modules/task-manager/analyze-task-complexity.js (3 hunks)
  • scripts/modules/task-manager/expand-task.js (3 hunks)
  • scripts/modules/task-manager/parse-prd.js (2 hunks)
  • scripts/modules/task-manager/update-task-by-id.js (3 hunks)
  • scripts/modules/task-manager/update-tasks.js (2 hunks)
  • src/ai-providers/base-provider.js (1 hunks)
  • src/prompts/analyze-complexity.json (1 hunks)
  • src/prompts/expand-task.json (1 hunks)
  • src/prompts/parse-prd.json (1 hunks)
  • src/prompts/update-task.json (1 hunks)
  • src/prompts/update-tasks.json (1 hunks)
  • src/schemas/add-task.js (1 hunks)
  • src/schemas/analyze-complexity.js (1 hunks)
  • src/schemas/base-schemas.js (1 hunks)
  • src/schemas/expand-task.js (1 hunks)
  • src/schemas/parse-prd.js (1 hunks)
  • src/schemas/registry.js (1 hunks)
  • src/schemas/update-subtask.js (1 hunks)
  • src/schemas/update-task.js (1 hunks)
  • src/schemas/update-tasks.js (1 hunks)
  • tests/unit/prompts/prompt-migration.test.js (1 hunks)
  • tests/unit/scripts/modules/task-manager/analyze-task-complexity.test.js (8 hunks)
  • tests/unit/scripts/modules/task-manager/complexity-report-tag-isolation.test.js (3 hunks)
  • tests/unit/scripts/modules/task-manager/expand-task.test.js (15 hunks)
  • tests/unit/scripts/modules/task-manager/update-task-by-id.test.js (1 hunks)
  • tests/unit/scripts/modules/task-manager/update-tasks.test.js (7 hunks)
🧰 Additional context used
📓 Path-based instructions (15)
**/*.js

📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)

**/*.js: Declare and initialize global variables at the top of modules to avoid hoisting issues.
Use proper function declarations to avoid hoisting issues and initialize variables before they are referenced.
Do not reference variables before their declaration in module scope.
Use dynamic imports (import()) to avoid initialization order issues in modules.

Files:

  • src/schemas/add-task.js
  • src/ai-providers/base-provider.js
  • src/schemas/update-task.js
  • src/schemas/parse-prd.js
  • tests/unit/scripts/modules/task-manager/update-task-by-id.test.js
  • src/schemas/registry.js
  • scripts/modules/task-manager/analyze-task-complexity.js
  • src/schemas/update-tasks.js
  • src/schemas/base-schemas.js
  • src/schemas/expand-task.js
  • tests/unit/scripts/modules/task-manager/analyze-task-complexity.test.js
  • scripts/modules/task-manager/update-tasks.js
  • scripts/modules/task-manager/parse-prd.js
  • tests/unit/scripts/modules/task-manager/update-tasks.test.js
  • tests/unit/prompts/prompt-migration.test.js
  • mcp-server/src/custom-sdk/schema-converter.js
  • src/schemas/update-subtask.js
  • src/schemas/analyze-complexity.js
  • tests/unit/scripts/modules/task-manager/complexity-report-tag-isolation.test.js
  • scripts/modules/task-manager/update-task-by-id.js
  • scripts/modules/task-manager/add-task.js
  • scripts/modules/task-manager/expand-task.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
src/ai-providers/*.js

📄 CodeRabbit inference engine (.cursor/rules/ai_providers.mdc)

src/ai-providers/*.js: Create a new provider module file in src/ai-providers/ named <provider-name>.js when adding a new AI provider.
Provider modules must export three functions: generateText, streamText, and generateObject.
Provider modules must import the provider's create function from @ai-sdk/<provider-name>, and import generateText, streamText, generateObject from the core ai package, as well as the log utility from ../../scripts/modules/utils.js.
Implement generateText, streamText, and generateObject functions in provider modules with basic validation and try/catch error handling.

Provider-specific wrappers for Vercel AI SDK functions must be implemented in src/ai-providers/*.js, each file corresponding to a provider.

Files:

  • src/ai-providers/base-provider.js
tests/{unit,integration,e2e,fixtures}/**/*.js

📄 CodeRabbit inference engine (.cursor/rules/architecture.mdc)

Test files must be organized as follows: unit tests in tests/unit/, integration tests in tests/integration/, end-to-end tests in tests/e2e/, and test fixtures in tests/fixtures/.

Files:

  • tests/unit/scripts/modules/task-manager/update-task-by-id.test.js
  • tests/unit/scripts/modules/task-manager/analyze-task-complexity.test.js
  • tests/unit/scripts/modules/task-manager/update-tasks.test.js
  • tests/unit/prompts/prompt-migration.test.js
  • tests/unit/scripts/modules/task-manager/complexity-report-tag-isolation.test.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
**/*.{test,spec}.{js,ts,jsx,tsx}

📄 CodeRabbit inference engine (.cursor/rules/git_workflow.mdc)

**/*.{test,spec}.{js,ts,jsx,tsx}: Create a test file and ensure all tests pass when all subtasks are complete; commit tests if added or modified
When all subtasks are complete, run final testing using the appropriate test runner (e.g., npm test, jest, or manual testing)

Files:

  • tests/unit/scripts/modules/task-manager/update-task-by-id.test.js
  • tests/unit/scripts/modules/task-manager/analyze-task-complexity.test.js
  • tests/unit/scripts/modules/task-manager/update-tasks.test.js
  • tests/unit/prompts/prompt-migration.test.js
  • tests/unit/scripts/modules/task-manager/complexity-report-tag-isolation.test.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
**/*.test.js

📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)

**/*.test.js: Never use asynchronous operations in tests. Make all mocks return synchronous values when possible.
Always mock tests properly based on the way the tested functions are defined and used.
Follow the test file organization: mocks must be set up before importing modules under test, and spies on mocked modules should be set up after imports.
Use fixtures from tests/fixtures/ for consistent sample data across tests.
Always declare mocks before importing the modules being tested in Jest test files.
Use jest.spyOn() after imports to create spies on mock functions and reference these spies in test assertions.
When testing functions with callbacks, get the callback from your mock's call arguments, execute it directly with test inputs, and verify the results.
For ES modules, use jest.mock() before static imports and jest.unstable_mockModule() before dynamic imports to mock dependencies.
Reset mock functions (mockFn.mockReset()) before dynamic imports if they might have been called previously.
When verifying console assertions, assert against the actual arguments passed (single formatted string), not multiple arguments.
Use mock-fs to mock file system operations in tests, and restore the file system after each test.
Mock API calls (e.g., Anthropic/Claude) by mocking the entire module and providing predictable responses.
Set mock environment variables in test setup and restore them after each test.
Maintain test fixtures separate from test logic.
Follow the mock-first-then-import pattern for all Jest mocks.
Do not define mock variables before jest.mock() calls (they won't be accessible due to hoisting).
Use test-specific file paths (e.g., 'test-tasks.json') for all file operations in tests.
Mock readJSON and writeJSON to avoid real file system interactions in tests.
Verify file operations use the correct paths in expect statements.
Use different file paths for each test to avoid test interdependence.
Verify modifications on the in-memory task objects passed to w...

Files:

  • tests/unit/scripts/modules/task-manager/update-task-by-id.test.js
  • tests/unit/scripts/modules/task-manager/analyze-task-complexity.test.js
  • tests/unit/scripts/modules/task-manager/update-tasks.test.js
  • tests/unit/prompts/prompt-migration.test.js
  • tests/unit/scripts/modules/task-manager/complexity-report-tag-isolation.test.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
tests/unit/**/*.test.js

📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)

tests/unit/**/*.test.js: Unit tests must be located in tests/unit/, test individual functions and utilities in isolation, mock all external dependencies, and keep tests small, focused, and fast.
Do not include actual command execution in unit tests.

Files:

  • tests/unit/scripts/modules/task-manager/update-task-by-id.test.js
  • tests/unit/scripts/modules/task-manager/analyze-task-complexity.test.js
  • tests/unit/scripts/modules/task-manager/update-tasks.test.js
  • tests/unit/prompts/prompt-migration.test.js
  • tests/unit/scripts/modules/task-manager/complexity-report-tag-isolation.test.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
tests/{unit,integration,e2e}/**/*.test.js

📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)

tests/{unit,integration,e2e}/**/*.test.js: When testing CLI commands built with Commander.js, test the command action handlers directly rather than trying to mock the entire Commander.js chain.
When mocking the Commander.js chain, mock ALL chainable methods (option, argument, action, on, etc.) and return this (or the mock object) from all chainable method mocks.
Explicitly handle all options, including defaults and shorthand flags (e.g., -p for --prompt), and include null/undefined checks in test implementations for parameters that might be optional.
Do not try to use the real action implementation without proper mocking, and do not mock Commander partially—either mock it completely or test the action directly.
Mock the action handlers for CLI commands and verify they're called with correct arguments.
Use sample task fixtures for consistent test data, mock file system operations, and test both success and error paths for task operations.
Mock console output and verify correct formatting in UI function tests. Use flexible assertions like toContain() or toMatch() for formatted output.
Mock chalk functions to return the input text to make testing easier while still verifying correct function calls.

Files:

  • tests/unit/scripts/modules/task-manager/update-task-by-id.test.js
  • tests/unit/scripts/modules/task-manager/analyze-task-complexity.test.js
  • tests/unit/scripts/modules/task-manager/update-tasks.test.js
  • tests/unit/prompts/prompt-migration.test.js
  • tests/unit/scripts/modules/task-manager/complexity-report-tag-isolation.test.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
**/*.{test,spec}.*

📄 CodeRabbit inference engine (.cursor/rules/test_workflow.mdc)

Test files should follow naming conventions: *.test.*, *.spec.*, or *_test.* depending on the language

Files:

  • tests/unit/scripts/modules/task-manager/update-task-by-id.test.js
  • tests/unit/scripts/modules/task-manager/analyze-task-complexity.test.js
  • tests/unit/scripts/modules/task-manager/update-tasks.test.js
  • tests/unit/prompts/prompt-migration.test.js
  • tests/unit/scripts/modules/task-manager/complexity-report-tag-isolation.test.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
tests/{unit,integration,e2e}/**

📄 CodeRabbit inference engine (.cursor/rules/test_workflow.mdc)

Organize test directories by test type (unit, integration, e2e) and mirror source structure where possible

Files:

  • tests/unit/scripts/modules/task-manager/update-task-by-id.test.js
  • tests/unit/scripts/modules/task-manager/analyze-task-complexity.test.js
  • tests/unit/scripts/modules/task-manager/update-tasks.test.js
  • tests/unit/prompts/prompt-migration.test.js
  • tests/unit/scripts/modules/task-manager/complexity-report-tag-isolation.test.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
.changeset/*.md

📄 CodeRabbit inference engine (.cursor/rules/changeset.mdc)

.changeset/*.md: When running npm run changeset or npx changeset add, provide a concise summary of the changes for the CHANGELOG.md in imperative mood, typically a single line, and not a detailed Git commit message.
The changeset summary should be user-facing, describing what changed in the released version that is relevant to users or consumers of the package.
Do not use your detailed Git commit message body as the changeset summary.

Files:

  • .changeset/gentle-cats-dance.md
.changeset/*

📄 CodeRabbit inference engine (.cursor/rules/new_features.mdc)

Create appropriate changesets for new features, use semantic versioning, include tagged system information in release notes, and document breaking changes if any.

Files:

  • .changeset/gentle-cats-dance.md
scripts/modules/task-manager/*.js

📄 CodeRabbit inference engine (.cursor/rules/ai_services.mdc)

scripts/modules/task-manager/*.js: Centralize all LLM calls through generateTextService or generateObjectService.
Do not import or call anything from the old ai-services.js, ai-client-factory.js, or ai-client-utils.js files.
Do not initialize AI clients (Anthropic, Perplexity, etc.) directly within core logic (task-manager/) or MCP direct functions.
Do not fetch AI-specific parameters (model ID, max tokens, temp) using config-manager.js getters for the AI call. Pass the role instead.
Do not implement fallback or retry logic outside ai-services-unified.js.
Do not handle API key resolution outside the service layer (it uses utils.js internally).
Determine the appropriate role (main, research, fallback) in your core logic and pass it to the service.
Pass the session object (received in the context parameter, especially from direct function wrappers) to the service call when in MCP context.
Use generateTextService and implement robust manual JSON parsing (with Zod validation after parsing) when structured output is needed, as generateObjectService has shown unreliability with some providers/schemas.
Be aware of potential reliability issues with generateObjectService across different providers and complex schemas. Prefer generateTextService + manual parsing as a more robust alternative for structured data needs.

Files in scripts/modules/task-manager/ should each handle a specific action related to task management (e.g., add-task.js, expand-task.js), supporting the tagged task lists system and backward compatibility.

Files:

  • scripts/modules/task-manager/analyze-task-complexity.js
  • scripts/modules/task-manager/update-tasks.js
  • scripts/modules/task-manager/parse-prd.js
  • scripts/modules/task-manager/update-task-by-id.js
  • scripts/modules/task-manager/add-task.js
  • scripts/modules/task-manager/expand-task.js
scripts/modules/**

📄 CodeRabbit inference engine (.cursor/rules/dev_workflow.mdc)

When using the MCP server, restart it if core logic in scripts/modules or MCP tool/direct function definitions change.

Files:

  • scripts/modules/task-manager/analyze-task-complexity.js
  • scripts/modules/task-manager/update-tasks.js
  • scripts/modules/task-manager/parse-prd.js
  • scripts/modules/task-manager/update-task-by-id.js
  • scripts/modules/task-manager/add-task.js
  • scripts/modules/task-manager/expand-task.js
scripts/modules/task-manager/*

📄 CodeRabbit inference engine (.cursor/rules/tags.mdc)

scripts/modules/task-manager/*: All core functions in scripts/modules/task-manager/ must accept a context parameter and use it to extract projectRoot and tag
All core functions in scripts/modules/task-manager/ must use readJSON(tasksPath, projectRoot, tag) and writeJSON(tasksPath, data, projectRoot, tag)

Files:

  • scripts/modules/task-manager/analyze-task-complexity.js
  • scripts/modules/task-manager/update-tasks.js
  • scripts/modules/task-manager/parse-prd.js
  • scripts/modules/task-manager/update-task-by-id.js
  • scripts/modules/task-manager/add-task.js
  • scripts/modules/task-manager/expand-task.js
scripts/modules/task-manager/**/*.js

📄 CodeRabbit inference engine (.cursor/rules/telemetry.mdc)

scripts/modules/task-manager/**/*.js: Functions in scripts/modules/task-manager/ that invoke AI services must call the appropriate AI service function (e.g., generateObjectService), passing commandName and outputType in the params object.
Core logic functions in scripts/modules/task-manager/ must return an object that includes aiServiceResponse.telemetryData.
If the core logic function handles CLI output (outputFormat === 'text' or 'cli'), and aiServiceResponse.telemetryData is available, it must call displayAiUsageSummary(aiServiceResponse.telemetryData, 'cli') from scripts/modules/ui.js.

Do not call AI-specific getters (like getMainModelId, getMainMaxTokens) from core logic functions in scripts/modules/task-manager/*; instead, pass the role to the unified AI service.

Files:

  • scripts/modules/task-manager/analyze-task-complexity.js
  • scripts/modules/task-manager/update-tasks.js
  • scripts/modules/task-manager/parse-prd.js
  • scripts/modules/task-manager/update-task-by-id.js
  • scripts/modules/task-manager/add-task.js
  • scripts/modules/task-manager/expand-task.js
🧠 Learnings (34)
📓 Common learnings
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_services.mdc:0-0
Timestamp: 2025-07-18T17:06:57.833Z
Learning: Applies to scripts/modules/task-manager/*.js : Use `generateTextService` and implement robust manual JSON parsing (with Zod validation after parsing) when structured output is needed, as `generateObjectService` has shown unreliability with some providers/schemas.
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_services.mdc:0-0
Timestamp: 2025-07-18T17:06:57.833Z
Learning: Applies to scripts/modules/commands.js : Use `generateTextService` and implement robust manual JSON parsing (with Zod validation after parsing) when structured output is needed, as `generateObjectService` has shown unreliability with some providers/schemas.
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_services.mdc:0-0
Timestamp: 2025-07-18T17:06:57.833Z
Learning: Applies to scripts/modules/task-manager/*.js : Be aware of potential reliability issues with `generateObjectService` across different providers and complex schemas. Prefer `generateTextService` + manual parsing as a more robust alternative for structured data needs.
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_services.mdc:0-0
Timestamp: 2025-07-18T17:06:57.833Z
Learning: Applies to scripts/modules/commands.js : Be aware of potential reliability issues with `generateObjectService` across different providers and complex schemas. Prefer `generateTextService` + manual parsing as a more robust alternative for structured data needs.
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_services.mdc:0-0
Timestamp: 2025-07-18T17:06:57.833Z
Learning: Applies to scripts/modules/task-manager/*.js : Centralize all LLM calls through `generateTextService` or `generateObjectService`.
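The "generateTextService + manual parsing" pattern these learnings describe can be sketched roughly as below. `extractJson` is a hypothetical helper, not the project's real API; the actual code validates the parsed object against a Zod schema afterwards.

```javascript
// Minimal sketch of manual JSON extraction from a text-mode AI response.
// Models often wrap JSON in prose or code fences, so we take the outermost
// object span rather than parsing the raw response directly.
function extractJson(text) {
	const start = text.indexOf('{');
	const end = text.lastIndexOf('}');
	if (start === -1 || end === -1 || end < start) {
		throw new Error('No JSON object found in AI response');
	}
	return JSON.parse(text.slice(start, end + 1));
}

const modelOutput = 'Here are the tasks: {"tasks":[{"id":1,"title":"Setup"}]}';
const parsed = extractJson(modelOutput);
console.log(parsed.tasks.length); // 1
```

In the real modules this extraction step is followed by `schema.parse(parsed)` with Zod, which is what made the approach robust enough to prefer before this PR's migration.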
📚 Learning: 2025-07-18T17:14:29.399Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tasks.mdc:0-0
Timestamp: 2025-07-18T17:14:29.399Z
Learning: Applies to scripts/modules/task-manager.js : Each task object must include all required properties (id, title, description, status, dependencies, priority, details, testStrategy, subtasks) and provide default values for optional properties. Extra properties not in the standard schema must not be added.

Applied to files:

  • src/schemas/add-task.js
  • src/schemas/update-tasks.js
  • src/schemas/base-schemas.js
  • scripts/modules/task-manager/parse-prd.js
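The "all required properties, defaults for optional ones, no extras" rule above can be illustrated with a plain-JS sketch. `normalizeTask` and `TASK_DEFAULTS` are illustrative names only; the real contract is enforced by the Zod schemas in src/schemas/base-schemas.js.

```javascript
// Defaults for the optional task properties named in the learning above.
const TASK_DEFAULTS = {
	status: 'pending',
	dependencies: [],
	priority: 'medium',
	details: '',
	testStrategy: '',
	subtasks: []
};

function normalizeTask(raw) {
	if (typeof raw.id !== 'number' || !raw.title || !raw.description) {
		throw new Error('Task is missing a required property');
	}
	// Start from defaults, copy only known optional keys, drop everything else.
	const task = { id: raw.id, title: raw.title, description: raw.description, ...TASK_DEFAULTS };
	for (const key of Object.keys(TASK_DEFAULTS)) {
		if (raw[key] !== undefined) task[key] = raw[key];
	}
	return task;
}
```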
📚 Learning: 2025-07-20T01:35:05.831Z
Learnt from: rtmcrc
PR: eyaltoledano/claude-task-master#933
File: scripts/modules/task-manager/parse-prd.js:226-226
Timestamp: 2025-07-20T01:35:05.831Z
Learning: The parsePRD function in scripts/modules/task-manager/parse-prd.js has a different parameter structure than other task-manager functions - it uses `options` parameter instead of `context` parameter because it generates tasks from PRD documents rather than operating on existing tasks.

Applied to files:

  • src/schemas/parse-prd.js
  • scripts/modules/task-manager/parse-prd.js
📚 Learning: 2025-07-18T17:14:29.399Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tasks.mdc:0-0
Timestamp: 2025-07-18T17:14:29.399Z
Learning: Applies to scripts/modules/task-manager.js : Extract tasks from PRD documents using AI, create them in the current tag context (defaulting to 'master'), provide clear prompts to guide AI task generation, and validate/clean up AI-generated tasks.

Applied to files:

  • src/schemas/parse-prd.js
  • scripts/modules/task-manager/analyze-task-complexity.js
  • scripts/modules/task-manager/update-tasks.js
  • scripts/modules/task-manager/parse-prd.js
  • scripts/modules/task-manager/update-task-by-id.js
  • scripts/modules/task-manager/add-task.js
  • src/prompts/parse-prd.json
  • scripts/modules/task-manager/expand-task.js
📚 Learning: 2025-07-18T17:16:13.793Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tests.mdc:0-0
Timestamp: 2025-07-18T17:16:13.793Z
Learning: Applies to **/*.test.js : Do not import real AI service clients in tests; create fully mocked versions that return predictable responses.

Applied to files:

  • tests/unit/scripts/modules/task-manager/update-task-by-id.test.js
  • tests/unit/scripts/modules/task-manager/analyze-task-complexity.test.js
  • tests/unit/scripts/modules/task-manager/update-tasks.test.js
  • tests/unit/scripts/modules/task-manager/complexity-report-tag-isolation.test.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
📚 Learning: 2025-07-18T17:18:17.759Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/utilities.mdc:0-0
Timestamp: 2025-07-18T17:18:17.759Z
Learning: Applies to scripts/modules/task-manager/**/*.js : Do not call AI-specific getters (like `getMainModelId`, `getMainMaxTokens`) from core logic functions in `scripts/modules/task-manager/*`; instead, pass the `role` to the unified AI service.

Applied to files:

  • tests/unit/scripts/modules/task-manager/update-task-by-id.test.js
  • .changeset/gentle-cats-dance.md
  • scripts/modules/task-manager/analyze-task-complexity.js
  • tests/unit/scripts/modules/task-manager/analyze-task-complexity.test.js
  • scripts/modules/task-manager/update-tasks.js
  • tests/unit/scripts/modules/task-manager/update-tasks.test.js
  • scripts/modules/task-manager/update-task-by-id.js
  • scripts/modules/task-manager/add-task.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
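The role-based indirection from this learning can be sketched as below. `ROLE_CONFIG` and `buildServiceParams` are stand-ins, not the project's real API; the point is that core logic hands the unified service a role name and lets it resolve provider/model configuration.

```javascript
// Illustrative role-to-model resolution inside a unified AI service.
const ROLE_CONFIG = {
	main: { provider: 'example-provider', modelId: 'example-main-model' },
	research: { provider: 'example-provider', modelId: 'example-research-model' },
	fallback: { provider: 'example-provider', modelId: 'example-fallback-model' }
};

function buildServiceParams({ role, systemPrompt, prompt, commandName }) {
	const modelConfig = ROLE_CONFIG[role];
	if (!modelConfig) {
		throw new Error(`Unknown AI role: ${role}`);
	}
	// Core logic never reads getMainModelId()/getMainMaxTokens() itself.
	return { ...modelConfig, systemPrompt, prompt, commandName };
}
```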
📚 Learning: 2025-07-18T17:14:54.131Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/telemetry.mdc:0-0
Timestamp: 2025-07-18T17:14:54.131Z
Learning: Applies to scripts/modules/task-manager/**/*.js : Core logic functions in scripts/modules/task-manager/ must return an object that includes aiServiceResponse.telemetryData.

Applied to files:

  • tests/unit/scripts/modules/task-manager/update-task-by-id.test.js
  • scripts/modules/task-manager/analyze-task-complexity.js
  • scripts/modules/task-manager/update-tasks.js
  • tests/unit/scripts/modules/task-manager/update-tasks.test.js
  • tests/unit/scripts/modules/task-manager/complexity-report-tag-isolation.test.js
📚 Learning: 2025-07-18T17:16:13.793Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tests.mdc:0-0
Timestamp: 2025-07-18T17:16:13.793Z
Learning: Applies to **/*.test.js : Mock API calls (e.g., Anthropic/Claude) by mocking the entire module and providing predictable responses.

Applied to files:

  • tests/unit/scripts/modules/task-manager/update-task-by-id.test.js
📚 Learning: 2025-07-18T17:07:39.336Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/architecture.mdc:0-0
Timestamp: 2025-07-18T17:07:39.336Z
Learning: Module dependencies should be mocked before importing the test module, following Jest's hoisting behavior.

Applied to files:

  • tests/unit/scripts/modules/task-manager/update-task-by-id.test.js
📚 Learning: 2025-07-18T17:09:13.815Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/context_gathering.mdc:0-0
Timestamp: 2025-07-18T17:09:13.815Z
Learning: Commands such as `analyze-complexity`, `expand-task`, `update-task`, and `add-task` should consider adopting the context gathering pattern for improved AI-powered assistance.

Applied to files:

  • scripts/modules/task-manager/analyze-task-complexity.js
  • scripts/modules/task-manager/expand-task.js
📚 Learning: 2025-07-18T17:14:29.399Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tasks.mdc:0-0
Timestamp: 2025-07-18T17:14:29.399Z
Learning: Applies to scripts/modules/task-manager.js : Use AI to generate detailed subtasks within the current tag context, considering complexity analysis for subtask counts and ensuring proper IDs for newly created subtasks.

Applied to files:

  • scripts/modules/task-manager/analyze-task-complexity.js
  • scripts/modules/task-manager/update-tasks.js
  • scripts/modules/task-manager/update-task-by-id.js
  • scripts/modules/task-manager/add-task.js
  • scripts/modules/task-manager/expand-task.js
📚 Learning: 2025-07-18T17:14:54.131Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/telemetry.mdc:0-0
Timestamp: 2025-07-18T17:14:54.131Z
Learning: Applies to scripts/modules/task-manager/**/*.js : Functions in scripts/modules/task-manager/ that invoke AI services must call the appropriate AI service function (e.g., generateObjectService), passing commandName and outputType in the params object.

Applied to files:

  • scripts/modules/task-manager/analyze-task-complexity.js
  • tests/unit/scripts/modules/task-manager/analyze-task-complexity.test.js
  • scripts/modules/task-manager/update-tasks.js
  • scripts/modules/task-manager/update-task-by-id.js
  • scripts/modules/task-manager/add-task.js
  • scripts/modules/task-manager/expand-task.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
📚 Learning: 2025-07-18T17:14:54.131Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/telemetry.mdc:0-0
Timestamp: 2025-07-18T17:14:54.131Z
Learning: Applies to scripts/modules/task-manager/**/*.js : If the core logic function handles CLI output (outputFormat === 'text' or 'cli'), and aiServiceResponse.telemetryData is available, it must call displayAiUsageSummary(aiServiceResponse.telemetryData, 'cli') from scripts/modules/ui.js.

Applied to files:

  • scripts/modules/task-manager/analyze-task-complexity.js
  • scripts/modules/task-manager/update-tasks.js
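The CLI telemetry-display rule above amounts to a small guard. `displayAiUsageSummary` is the real helper in scripts/modules/ui.js; in this sketch it is injected as a parameter so the guard can be shown in isolation.

```javascript
// Only display the AI usage summary when output goes to a CLI and the
// service actually returned telemetry data.
function maybeDisplayTelemetry(outputFormat, aiServiceResponse, displayAiUsageSummary) {
	const isCli = outputFormat === 'text' || outputFormat === 'cli';
	if (isCli && aiServiceResponse && aiServiceResponse.telemetryData) {
		displayAiUsageSummary(aiServiceResponse.telemetryData, 'cli');
		return true;
	}
	return false;
}
```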
📚 Learning: 2025-07-18T17:06:57.833Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_services.mdc:0-0
Timestamp: 2025-07-18T17:06:57.833Z
Learning: Applies to scripts/modules/task-manager/*.js : Do not import or call anything from the old `ai-services.js`, `ai-client-factory.js`, or `ai-client-utils.js` files.

Applied to files:

  • scripts/modules/task-manager/analyze-task-complexity.js
  • tests/unit/scripts/modules/task-manager/analyze-task-complexity.test.js
  • scripts/modules/task-manager/update-tasks.js
  • tests/unit/scripts/modules/task-manager/update-tasks.test.js
  • scripts/modules/task-manager/update-task-by-id.js
  • scripts/modules/task-manager/add-task.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
📚 Learning: 2025-07-18T17:06:57.833Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_services.mdc:0-0
Timestamp: 2025-07-18T17:06:57.833Z
Learning: Applies to scripts/modules/task-manager/*.js : Be aware of potential reliability issues with `generateObjectService` across different providers and complex schemas. Prefer `generateTextService` + manual parsing as a more robust alternative for structured data needs.

Applied to files:

  • tests/unit/scripts/modules/task-manager/analyze-task-complexity.test.js
  • tests/unit/scripts/modules/task-manager/update-tasks.test.js
  • tests/unit/scripts/modules/task-manager/complexity-report-tag-isolation.test.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
📚 Learning: 2025-07-18T17:06:57.833Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_services.mdc:0-0
Timestamp: 2025-07-18T17:06:57.833Z
Learning: Applies to scripts/modules/task-manager/*.js : Use `generateTextService` and implement robust manual JSON parsing (with Zod validation after parsing) when structured output is needed, as `generateObjectService` has shown unreliability with some providers/schemas.

Applied to files:

  • tests/unit/scripts/modules/task-manager/analyze-task-complexity.test.js
  • scripts/modules/task-manager/update-tasks.js
  • tests/unit/scripts/modules/task-manager/update-tasks.test.js
  • tests/unit/scripts/modules/task-manager/complexity-report-tag-isolation.test.js
  • scripts/modules/task-manager/update-task-by-id.js
  • scripts/modules/task-manager/add-task.js
  • scripts/modules/task-manager/expand-task.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
📚 Learning: 2025-07-18T17:06:57.833Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_services.mdc:0-0
Timestamp: 2025-07-18T17:06:57.833Z
Learning: Applies to scripts/modules/task-manager/*.js : Centralize all LLM calls through `generateTextService` or `generateObjectService`.

Applied to files:

  • tests/unit/scripts/modules/task-manager/analyze-task-complexity.test.js
  • tests/unit/scripts/modules/task-manager/update-tasks.test.js
  • tests/unit/scripts/modules/task-manager/complexity-report-tag-isolation.test.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
📚 Learning: 2025-07-18T17:06:57.833Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_services.mdc:0-0
Timestamp: 2025-07-18T17:06:57.833Z
Learning: Applies to scripts/modules/ai-services-unified.js : Centralize all LLM calls through `generateTextService` or `generateObjectService`.

Applied to files:

  • tests/unit/scripts/modules/task-manager/analyze-task-complexity.test.js
  • tests/unit/scripts/modules/task-manager/update-tasks.test.js
  • tests/unit/scripts/modules/task-manager/complexity-report-tag-isolation.test.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to src/ai-providers/*.js : Provider modules must import the provider's create<ProviderName> function from @ai-sdk/<provider-name>, and import generateText, streamText, generateObject from the core ai package, as well as the log utility from ../../scripts/modules/utils.js.

Applied to files:

  • tests/unit/scripts/modules/task-manager/analyze-task-complexity.test.js
  • tests/unit/scripts/modules/task-manager/update-tasks.test.js
  • tests/unit/scripts/modules/task-manager/complexity-report-tag-isolation.test.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
📚 Learning: 2025-07-18T17:06:57.833Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_services.mdc:0-0
Timestamp: 2025-07-18T17:06:57.833Z
Learning: Applies to scripts/modules/commands.js : Be aware of potential reliability issues with `generateObjectService` across different providers and complex schemas. Prefer `generateTextService` + manual parsing as a more robust alternative for structured data needs.

Applied to files:

  • tests/unit/scripts/modules/task-manager/analyze-task-complexity.test.js
  • tests/unit/scripts/modules/task-manager/update-tasks.test.js
  • tests/unit/scripts/modules/task-manager/complexity-report-tag-isolation.test.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
📚 Learning: 2025-07-18T17:06:57.833Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_services.mdc:0-0
Timestamp: 2025-07-18T17:06:57.833Z
Learning: Applies to scripts/modules/commands.js : Centralize all LLM calls through `generateTextService` or `generateObjectService`.

Applied to files:

  • tests/unit/scripts/modules/task-manager/analyze-task-complexity.test.js
  • tests/unit/scripts/modules/task-manager/update-tasks.test.js
  • tests/unit/scripts/modules/task-manager/complexity-report-tag-isolation.test.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
📚 Learning: 2025-07-18T17:12:57.903Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/new_features.mdc:0-0
Timestamp: 2025-07-18T17:12:57.903Z
Learning: Applies to scripts/modules/ai-services.js : Features that use AI models belong in 'scripts/modules/ai-services.js'.

Applied to files:

  • tests/unit/scripts/modules/task-manager/analyze-task-complexity.test.js
  • tests/unit/scripts/modules/task-manager/update-tasks.test.js
  • tests/unit/scripts/modules/task-manager/complexity-report-tag-isolation.test.js
📚 Learning: 2025-07-18T17:06:57.833Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_services.mdc:0-0
Timestamp: 2025-07-18T17:06:57.833Z
Learning: Applies to scripts/modules/ai-services-unified.js : Do not import or call anything from the old `ai-services.js`, `ai-client-factory.js`, or `ai-client-utils.js` files.

Applied to files:

  • tests/unit/scripts/modules/task-manager/analyze-task-complexity.test.js
  • tests/unit/scripts/modules/task-manager/update-tasks.test.js
📚 Learning: 2025-07-18T17:06:57.833Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_services.mdc:0-0
Timestamp: 2025-07-18T17:06:57.833Z
Learning: Applies to scripts/modules/commands.js : Do not import or call anything from the old `ai-services.js`, `ai-client-factory.js`, or `ai-client-utils.js` files.

Applied to files:

  • tests/unit/scripts/modules/task-manager/analyze-task-complexity.test.js
📚 Learning: 2025-07-18T17:06:57.833Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_services.mdc:0-0
Timestamp: 2025-07-18T17:06:57.833Z
Learning: Applies to scripts/modules/commands.js : Use `generateTextService` and implement robust manual JSON parsing (with Zod validation after parsing) when structured output is needed, as `generateObjectService` has shown unreliability with some providers/schemas.

Applied to files:

  • tests/unit/scripts/modules/task-manager/analyze-task-complexity.test.js
  • tests/unit/scripts/modules/task-manager/update-tasks.test.js
  • tests/unit/scripts/modules/task-manager/complexity-report-tag-isolation.test.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
📚 Learning: 2025-07-18T17:14:29.399Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tasks.mdc:0-0
Timestamp: 2025-07-18T17:14:29.399Z
Learning: Applies to scripts/modules/task-manager.js : Use consistent formatting for task files, include all task properties in text files, and format dependencies with status indicators.

Applied to files:

  • scripts/modules/task-manager/update-tasks.js
  • tests/unit/scripts/modules/task-manager/update-tasks.test.js
  • scripts/modules/task-manager/update-task-by-id.js
📚 Learning: 2025-07-18T17:06:57.833Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_services.mdc:0-0
Timestamp: 2025-07-18T17:06:57.833Z
Learning: Applies to scripts/modules/task-manager/*.js : Do not fetch AI-specific parameters (model ID, max tokens, temp) using `config-manager.js` getters for the AI call. Pass the `role` instead.

Applied to files:

  • scripts/modules/task-manager/update-tasks.js
📚 Learning: 2025-07-18T17:16:13.793Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tests.mdc:0-0
Timestamp: 2025-07-18T17:16:13.793Z
Learning: Applies to **/*.test.js : Verify modifications on the in-memory task objects passed to writeJSON.

Applied to files:

  • tests/unit/scripts/modules/task-manager/update-tasks.test.js
📚 Learning: 2025-07-18T17:16:13.793Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tests.mdc:0-0
Timestamp: 2025-07-18T17:16:13.793Z
Learning: Applies to tests/{unit,integration,e2e}/**/*.test.js : Use sample task fixtures for consistent test data, mock file system operations, and test both success and error paths for task operations.

Applied to files:

  • tests/unit/scripts/modules/task-manager/update-tasks.test.js
📚 Learning: 2025-07-18T17:16:13.793Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tests.mdc:0-0
Timestamp: 2025-07-18T17:16:13.793Z
Learning: Applies to tests/{unit,integration,e2e}/**/*.test.js : Explicitly handle all options, including defaults and shorthand flags (e.g., -p for --prompt), and include null/undefined checks in test implementations for parameters that might be optional.

Applied to files:

  • tests/unit/prompts/prompt-migration.test.js
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to src/ai-providers/*.js : Provider modules must export three functions: generate<ProviderName>Text, stream<ProviderName>Text, and generate<ProviderName>Object.

Applied to files:

  • tests/unit/scripts/modules/task-manager/complexity-report-tag-isolation.test.js
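A hypothetical provider module following the three-export rule above would look roughly like this. The bodies are stubs; a real module in src/ai-providers/ would delegate to `generateText`, `streamText`, and `generateObject` from the core `ai` package.

```javascript
// Stub provider module exporting the required function trio for a
// fictional "Example" provider.
function generateExampleText(params) {
	return { text: `stub response for: ${params.prompt}` };
}

function streamExampleText(params) {
	return { textStream: [params.prompt] };
}

function generateExampleObject(params) {
	return { object: { prompt: params.prompt } };
}
```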
📚 Learning: 2025-07-18T17:14:29.399Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tasks.mdc:0-0
Timestamp: 2025-07-18T17:14:29.399Z
Learning: Applies to scripts/modules/task-manager.js : Provide functions for updating task status within the current tag context, handling both individual tasks and subtasks, and considering subtask status when updating parent tasks.

Applied to files:

  • scripts/modules/task-manager/update-task-by-id.js
📚 Learning: 2025-07-31T22:07:49.716Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/commands.mdc:0-0
Timestamp: 2025-07-31T22:07:49.716Z
Learning: Applies to scripts/modules/commands.js : For AI-powered commands that benefit from project context, use the ContextGatherer utility for multi-source context extraction, support task IDs, file paths, custom context, and project tree, implement fuzzy search for automatic task discovery, and display detailed token breakdown for transparency.

Applied to files:

  • scripts/modules/task-manager/add-task.js
📚 Learning: 2025-07-18T17:10:02.683Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/dev_workflow.mdc:0-0
Timestamp: 2025-07-18T17:10:02.683Z
Learning: When breaking down complex tasks in Taskmaster, use the `expand_task` command with appropriate flags (`--num`, `--research`, `--force`, `--prompt`) and review generated subtasks for accuracy.

Applied to files:

  • scripts/modules/task-manager/expand-task.js
🧬 Code graph analysis (12)
src/schemas/update-task.js (1)
src/schemas/update-tasks.js (2)
  • UpdatedTaskSchema (4-6)
  • UpdatedTaskSchema (4-6)
src/schemas/registry.js (7)
src/schemas/update-tasks.js (2)
  • UpdateTasksResponseSchema (8-10)
  • UpdateTasksResponseSchema (8-10)
src/schemas/expand-task.js (2)
  • ExpandTaskResponseSchema (4-6)
  • ExpandTaskResponseSchema (4-6)
src/schemas/analyze-complexity.js (2)
  • ComplexityAnalysisResponseSchema (12-14)
  • ComplexityAnalysisResponseSchema (12-14)
src/schemas/update-subtask.js (2)
  • UpdateSubtaskResponseSchema (4-6)
  • UpdateSubtaskResponseSchema (4-6)
src/schemas/update-task.js (2)
  • UpdateTaskResponseSchema (4-6)
  • UpdateTaskResponseSchema (4-6)
src/schemas/add-task.js (2)
  • AddTaskResponseSchema (4-21)
  • AddTaskResponseSchema (4-21)
src/schemas/parse-prd.js (2)
  • ParsePRDResponseSchema (16-18)
  • ParsePRDResponseSchema (16-18)
scripts/modules/task-manager/analyze-task-complexity.js (1)
src/schemas/registry.js (2)
  • COMMAND_SCHEMAS (9-17)
  • COMMAND_SCHEMAS (9-17)
src/schemas/update-tasks.js (1)
src/schemas/base-schemas.js (2)
  • BaseTaskSchema (13-25)
  • BaseTaskSchema (13-25)
src/schemas/expand-task.js (1)
src/schemas/base-schemas.js (2)
  • SubtaskSchema (27-35)
  • SubtaskSchema (27-35)
scripts/modules/task-manager/update-tasks.js (2)
src/schemas/registry.js (2)
  • COMMAND_SCHEMAS (9-17)
  • COMMAND_SCHEMAS (9-17)
scripts/modules/ai-services-unified.js (1)
  • isMCP (785-785)
scripts/modules/task-manager/parse-prd.js (1)
src/schemas/registry.js (2)
  • COMMAND_SCHEMAS (9-17)
  • COMMAND_SCHEMAS (9-17)
src/schemas/update-subtask.js (1)
src/schemas/base-schemas.js (2)
  • SubtaskSchema (27-35)
  • SubtaskSchema (27-35)
tests/unit/scripts/modules/task-manager/complexity-report-tag-isolation.test.js (1)
scripts/modules/commands.js (1)
  • commandName (789-789)
scripts/modules/task-manager/update-task-by-id.js (2)
scripts/modules/ai-services-unified.js (1)
  • isMCP (785-785)
src/schemas/registry.js (2)
  • COMMAND_SCHEMAS (9-17)
  • COMMAND_SCHEMAS (9-17)
scripts/modules/task-manager/add-task.js (1)
src/schemas/registry.js (2)
  • COMMAND_SCHEMAS (9-17)
  • COMMAND_SCHEMAS (9-17)
scripts/modules/task-manager/expand-task.js (2)
src/schemas/registry.js (2)
  • COMMAND_SCHEMAS (9-17)
  • COMMAND_SCHEMAS (9-17)
scripts/modules/task-manager/expand-all-tasks.js (1)
  • logger (53-70)
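The `COMMAND_SCHEMAS` registry that recurs throughout the code graph above is a single lookup table mapping command names to response schemas. A minimal sketch, with plain validator functions standing in for the real Zod schemas:

```javascript
// Hypothetical registry: each AI command resolves its response schema by name.
const COMMAND_SCHEMAS = {
	'expand-task': (data) => Array.isArray(data.subtasks),
	'update-tasks': (data) => Array.isArray(data.tasks),
	'analyze-complexity': (data) => Array.isArray(data.complexityAnalysis)
};

function getSchemaFor(commandName) {
	const schema = COMMAND_SCHEMAS[commandName];
	if (!schema) {
		throw new Error(`No response schema registered for "${commandName}"`);
	}
	return schema;
}
```

Centralizing the schemas this way is what lets each task-manager module pass `COMMAND_SCHEMAS['<command>']` straight into the object-generation call instead of carrying its own parsing logic.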
🪛 markdownlint-cli2 (0.18.1)
.changeset/gentle-cats-dance.md

5-5: First line in a file should be a top-level heading

(MD041, first-line-heading, first-line-h1)


30-30: Files should end with a single newline character

(MD047, single-trailing-newline)

🔇 Additional comments (16)
src/prompts/parse-prd.json (1)

47-48: LGTM! Clear output constraints align with schema validation.

The updated prompts explicitly constrain the AI output to a single tasks property, removing metadata guidance. This aligns well with the schema-driven approach using ParsePRDResponseSchema from the registry for validation.

The explicit instruction at the end of the user prompt ensures the AI understands the expected output structure.

src/ai-providers/base-provider.js (1)

216-217: LGTM! Schema metadata improves provider communication.

Adding schemaName and schemaDescription helps providers that need explicit schema information (like Claude Code in JSON mode) understand the expected output structure.

src/schemas/expand-task.js (1)

1-6: LGTM! Clean schema composition.

The schema properly imports and reuses SubtaskSchema from base schemas, following the DRY principle. The structure aligns with the response format expected by the expand-task command.

src/schemas/update-task.js (1)

1-6: Schema structure is correct.

The schema properly wraps UpdatedTaskSchema in a response object with a single task property, following the established pattern for response schemas in this PR.

src/schemas/update-tasks.js (1)

8-10: Response schema structure is correct.

The UpdateTasksResponseSchema properly wraps an array of UpdatedTaskSchema in a tasks property, following the established response wrapper pattern.

src/schemas/update-subtask.js (1)

1-6: Remove generateObjectService reliability warning for this file
This file only defines a Zod schema and doesn’t invoke or integrate with generateObjectService, so the reliability concerns aren’t applicable here.

Likely an incorrect or invalid review comment.

src/schemas/parse-prd.js (1)

1-18: LGTM with architectural caveat!

The ParsePRDResponseSchema is well-structured with:

  • Proper validation constraints (positive integers for IDs, min length for strings)
  • Appropriate nullable handling for optional AI-generated fields
  • Valid priority enum values matching task priority constants

The schema correctly models the expected PRD parsing output.

However, note the broader architectural concern: this schema supports generateObjectService usage, which conflicts with established coding guidelines recommending generateTextService + manual parsing. Pending resolution of the architectural decision (see add-task.js comments).

scripts/modules/task-manager/parse-prd.js (1)

22-22: Schema integration implemented correctly but architectural concern remains.

The integration with COMMAND_SCHEMAS['parse-prd'] is properly implemented, and the objectName change from 'tasks_data' to 'tasks' correctly aligns with the schema definition. The robust extraction logic (lines 200-218) defensively handles multiple response shapes, which is good error handling.

However, the same critical architectural concern applies: this implementation uses generateObjectService, which directly conflicts with established coding guidelines and learnings that recommend generateTextService + manual parsing for reliability across providers.

Based on learnings: Multiple sources warn about generateObjectService unreliability with some providers/schemas and recommend the text-based approach instead.

Also applies to: 180-181

src/prompts/analyze-complexity.json (1)

37-38: Prompts are well-engineered for structured output but rely on discouraged architecture.

The updated prompts provide clear, detailed instructions for generating structured JSON with specific property names and field requirements. The multiple warnings about exact property naming ("CRITICAL: The property name MUST be 'complexityAnalysis'") suggest awareness of potential schema mismatches.

However, these prompts are designed for generateObjectService, which conflicts with coding guidelines recommending generateTextService + manual parsing for reliability. The heavy emphasis on exact formatting hints at fragility in the structured output approach.

Based on learnings: The guidelines recommend against generateObjectService due to reliability issues. If proceeding with this architecture, ensure thorough testing across all configured providers (Anthropic, OpenAI, etc.) to validate consistent structured output handling.

⛔ Skipped due to learnings
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: assets/.windsurfrules:0-0
Timestamp: 2025-09-24T15:12:12.658Z
Learning: Analyze task complexity with task-master analyze-complexity --research and use results to plan breakdown
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/context_gathering.mdc:0-0
Timestamp: 2025-07-18T17:09:13.815Z
Learning: Commands such as `analyze-complexity`, `expand-task`, `update-task`, and `add-task` should consider adopting the context gathering pattern for improved AI-powered assistance.
tests/unit/prompts/prompt-migration.test.js (1)

1-7: LGTM!

The ESM imports and path setup are correct. The relative path construction properly locates the prompts directory from the test file location.

mcp-server/src/custom-sdk/schema-converter.js (1)

120-121: LGTM!

Changing the boolean placeholder from false to '<boolean>' improves consistency with other type handlers and makes the placeholder nature explicit rather than looking like a default value.

src/prompts/expand-task.json (1)

57-71: LGTM - Consistent structured output instructions.

All three prompt variants (complexity-report, research, default) consistently specify the same required fields and sequential ID rules. The explicit field requirements and CRITICAL sequencing instructions appropriately support the migration to generateObjectService with structured schemas.

The repeated emphasis on sequential IDs starting from nextSubtaskId and prohibition of parent task ID usage is clear and should prevent ID conflicts.

tests/unit/scripts/modules/task-manager/analyze-task-complexity.test.js (2)

1-453: Test migration appears complete, but note reliability concerns.

The test successfully migrates from generateTextService to generateObjectService, with all assertions and mocks updated appropriately. However, based on learnings, generateObjectService has shown reliability issues with some providers and complex schemas. While these unit tests properly verify the code structure, consider adding integration tests with real providers to validate reliability for the analyze-complexity command.

Based on learnings


413-417: No change needed: prompt is still the string returned by promptManager.loadPrompt, so using expect.not.stringContaining('"id": 3') remains valid.

tests/unit/scripts/modules/task-manager/complexity-report-tag-isolation.test.js (2)

180-261: LGTM - generateObjectService mock properly implements tag isolation.

The mock correctly returns different complexity analysis data based on tag context (lines 185-195), enabling verification of tag-isolated complexity reports. The structure matches the expected mainResult.complexityAnalysis format for the new object-based responses.

The tag detection via params?.prompt?.includes('feature') is consistent with the existing generateTextService mock pattern, maintaining test coherence.


470-1177: Comprehensive tag isolation test coverage maintained.

The test suite thoroughly validates that complexity reports remain isolated across tags (master, feature, special characters) with the migration to generateObjectService. All test scenarios (path resolution, analysis generation, report reading, expand task integration, and cross-tag contamination prevention) are properly updated and should continue to catch tag isolation issues.

@Crunchyman-ralph Crunchyman-ralph force-pushed the feat-generate-object-refactor-v3 branch from 4367df7 to 46e3532 on October 1, 2025 at 19:46
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (3)
.changeset/gentle-cats-dance.md (1)

1-30: Add the required heading and trailing newline.

The earlier review is still open: markdownlint is flagging the missing top-level heading (Line 5) and the absent trailing newline (Line 30). Please add the heading (e.g., # Migrate AI services…) and ensure the file ends with a newline.

mcp-server/src/custom-sdk/schema-converter.js (2)

77-96: Arbitrary threshold still breaks constraint handling.

The logic at line 82 still checks minCheck.value >= 20, creating the exact inconsistent behavior flagged in the previous review:

  • A min of 19 returns '<string>' (constraint ignored)
  • A min of 20 returns '<string with at least 20 characters>'
  • A min of 19 with max of 100 returns '<string>' (both ignored)

This arbitrary threshold undermines the accuracy of AI-generated placeholders and breaks schema contract fidelity.

Apply the previously suggested fix to handle all constraints consistently:

 		case 'ZodString':
 			// Check for min/max length constraints
 			if (def.checks) {
 				const minCheck = def.checks.find((c) => c.kind === 'min');
 				const maxCheck = def.checks.find((c) => c.kind === 'max');
-				if (minCheck && minCheck.value >= 20) {
-					return '<string with at least ' + minCheck.value + ' characters>';
-				} else if (minCheck && maxCheck) {
+				if (minCheck && maxCheck) {
 					return (
 						'<string between ' +
 						minCheck.value +
 						'-' +
 						maxCheck.value +
 						' characters>'
 					);
+				} else if (minCheck) {
+					return '<string with at least ' + minCheck.value + ' characters>';
 				} else if (maxCheck) {
 					return '<string up to ' + maxCheck.value + ' characters>';
 				}
 			}
 			return '<string>';
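The corrected ordering can be sketched as a standalone helper (the function name is hypothetical; the real logic lives inline in schema-converter.js, where `checks` is Zod's internal `def.checks` array of `{ kind, value }` entries):

```javascript
// Sketch of the fixed ZodString placeholder logic: check min+max first,
// then min alone, then max alone, so no constraint is silently dropped.
function describeStringPlaceholder(checks = []) {
	const minCheck = checks.find((c) => c.kind === 'min');
	const maxCheck = checks.find((c) => c.kind === 'max');
	if (minCheck && maxCheck) {
		return `<string between ${minCheck.value}-${maxCheck.value} characters>`;
	}
	if (minCheck) {
		return `<string with at least ${minCheck.value} characters>`;
	}
	if (maxCheck) {
		return `<string up to ${maxCheck.value} characters>`;
	}
	return '<string>';
}
```

With this ordering, a min of 19 now yields `<string with at least 19 characters>` instead of silently degrading to `<string>`.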

98-118: Integer max constraints still ignored.

The if-else chain still causes integer schemas with max constraints to return '<integer>' or '<positive integer>', completely ignoring the max value:

  • z.number().int().max(100)'<integer>' (max ignored)
  • z.number().int().min(5).max(100)'<positive integer>' (max ignored)

This breaks the schema-to-placeholder contract and misleads the AI about valid value ranges.

Apply the previously suggested refactor to handle all constraint combinations:

 		case 'ZodNumber':
 			// Check for int, positive, min/max constraints
 			if (def.checks) {
 				const intCheck = def.checks.find((c) => c.kind === 'int');
 				const minCheck = def.checks.find((c) => c.kind === 'min');
 				const maxCheck = def.checks.find((c) => c.kind === 'max');

-				if (intCheck && minCheck && minCheck.value > 0) {
-					return '<positive integer>';
-				} else if (intCheck) {
-					return '<integer>';
-				} else if (minCheck || maxCheck) {
+				if (intCheck) {
+					// Handle integer with constraints
+					if (minCheck && minCheck.value > 0 && !maxCheck) {
+						return '<positive integer>';
+					}
+					const rangeDesc = 
+						(minCheck ? ' >= ' + minCheck.value : '') +
+						(maxCheck ? ' <= ' + maxCheck.value : '');
+					return rangeDesc ? '<integer' + rangeDesc + '>' : '<integer>';
+				} else if (minCheck || maxCheck) {
+					// Handle number with range constraints
 					return (
 						'<number' +
 						(minCheck ? ' >= ' + minCheck.value : '') +
 						(maxCheck ? ' <= ' + maxCheck.value : '') +
 						'>'
 					);
 				}
 			}
 			return '<number>';
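The refactored branch can likewise be sketched in isolation (function name hypothetical; `checks` mirrors Zod's `def.checks`):

```javascript
// Sketch of the fixed ZodNumber placeholder logic: integers keep their
// min/max bounds in the placeholder instead of collapsing to '<integer>'.
function describeNumberPlaceholder(checks = []) {
	const intCheck = checks.find((c) => c.kind === 'int');
	const minCheck = checks.find((c) => c.kind === 'min');
	const maxCheck = checks.find((c) => c.kind === 'max');
	if (intCheck) {
		if (minCheck && minCheck.value > 0 && !maxCheck) {
			return '<positive integer>';
		}
		const rangeDesc =
			(minCheck ? ' >= ' + minCheck.value : '') +
			(maxCheck ? ' <= ' + maxCheck.value : '');
		return rangeDesc ? `<integer${rangeDesc}>` : '<integer>';
	}
	if (minCheck || maxCheck) {
		return (
			'<number' +
			(minCheck ? ' >= ' + minCheck.value : '') +
			(maxCheck ? ' <= ' + maxCheck.value : '') +
			'>'
		);
	}
	return '<number>';
}
```

The two previously broken cases now produce `<integer <= 100>` and `<integer >= 5 <= 100>` respectively.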
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4367df7 and 46e3532.

📒 Files selected for processing (2)
  • .changeset/gentle-cats-dance.md (1 hunks)
  • mcp-server/src/custom-sdk/schema-converter.js (1 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
.changeset/*.md

📄 CodeRabbit inference engine (.cursor/rules/changeset.mdc)

.changeset/*.md: When running npm run changeset or npx changeset add, provide a concise summary of the changes for the CHANGELOG.md in imperative mood, typically a single line, and not a detailed Git commit message.
The changeset summary should be user-facing, describing what changed in the released version that is relevant to users or consumers of the package.
Do not use your detailed Git commit message body as the changeset summary.

Files:

  • .changeset/gentle-cats-dance.md
.changeset/*

📄 CodeRabbit inference engine (.cursor/rules/new_features.mdc)

Create appropriate changesets for new features, use semantic versioning, include tagged system information in release notes, and document breaking changes if any.

Files:

  • .changeset/gentle-cats-dance.md
**/*.js

📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)

**/*.js: Declare and initialize global variables at the top of modules to avoid hoisting issues.
Use proper function declarations to avoid hoisting issues and initialize variables before they are referenced.
Do not reference variables before their declaration in module scope.
Use dynamic imports (import()) to avoid initialization order issues in modules.

Files:

  • mcp-server/src/custom-sdk/schema-converter.js
🧠 Learnings (4)
📓 Common learnings
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_services.mdc:0-0
Timestamp: 2025-07-18T17:06:57.833Z
Learning: Applies to scripts/modules/commands.js : Use `generateTextService` and implement robust manual JSON parsing (with Zod validation after parsing) when structured output is needed, as `generateObjectService` has shown unreliability with some providers/schemas.
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_services.mdc:0-0
Timestamp: 2025-07-18T17:06:57.833Z
Learning: Applies to scripts/modules/task-manager/*.js : Use `generateTextService` and implement robust manual JSON parsing (with Zod validation after parsing) when structured output is needed, as `generateObjectService` has shown unreliability with some providers/schemas.
📚 Learning: 2025-07-18T17:06:57.833Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_services.mdc:0-0
Timestamp: 2025-07-18T17:06:57.833Z
Learning: Applies to scripts/modules/task-manager/*.js : Be aware of potential reliability issues with `generateObjectService` across different providers and complex schemas. Prefer `generateTextService` + manual parsing as a more robust alternative for structured data needs.

Applied to files:

  • .changeset/gentle-cats-dance.md
📚 Learning: 2025-07-18T17:06:57.833Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_services.mdc:0-0
Timestamp: 2025-07-18T17:06:57.833Z
Learning: Applies to scripts/modules/commands.js : Be aware of potential reliability issues with `generateObjectService` across different providers and complex schemas. Prefer `generateTextService` + manual parsing as a more robust alternative for structured data needs.

Applied to files:

  • .changeset/gentle-cats-dance.md
📚 Learning: 2025-07-18T17:18:17.759Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/utilities.mdc:0-0
Timestamp: 2025-07-18T17:18:17.759Z
Learning: Applies to scripts/modules/task-manager/**/*.js : Do not call AI-specific getters (like `getMainModelId`, `getMainMaxTokens`) from core logic functions in `scripts/modules/task-manager/*`; instead, pass the `role` to the unified AI service.

Applied to files:

  • .changeset/gentle-cats-dance.md
🪛 markdownlint-cli2 (0.18.1)
.changeset/gentle-cats-dance.md

5-5: First line in a file should be a top-level heading

(MD041, first-line-heading, first-line-h1)


30-30: Files should end with a single newline character

(MD047, single-trailing-newline)

🔇 Additional comments (1)
mcp-server/src/custom-sdk/schema-converter.js (1)

11-49: Dismiss irrelevant generateObjectService reliability concern. This converter solely builds JSON prompt instructions and does not call or depend on generateObjectService; the past learnings about generateObjectService reliability don’t apply here.

Likely an incorrect or invalid review comment.

Ben Vargas and others added 8 commits October 2, 2025 11:54
Fixed issue where AI was generating inconsistent subtask IDs (101-105, 601-603)
instead of sequential numbering (1, 2, 3...) after the generateObject migration.

Changes:
- Updated all expand-task prompt variants with forceful "CRITICAL" instructions
- Made ID requirements explicit with examples: id={{nextSubtaskId}}, id={{nextSubtaskId}}+1
- Added warning not to use parent task ID in subtask numbering
- Removed parseSubtasksFromText post-processing that was overwriting AI-generated IDs

This ensures subtasks display correctly as X.1, X.2, X.3 format and the
`tm show X.Y` command works as expected.
Integration tests that make real API calls cannot run in CI without
proper API keys. These tests should either be mocked or run in a
separate test suite with appropriate infrastructure.
@Crunchyman-ralph force-pushed the feat-generate-object-refactor-v3 branch from 46e3532 to 7660a29 on October 2, 2025 at 09:54

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
scripts/modules/task-manager/expand-task.js (2)

146-176: Guard complexityReportPath before using FS APIs.

complexityReportPath comes straight from context and defaults to undefined. Calling fs.existsSync(complexityReportPath) or readJSON(complexityReportPath) with undefined will throw (“path must be a string or Buffer”) the moment the caller doesn’t provide an explicit path (current CLI code doesn’t). Derive the path yourself—e.g. fall back to getTagAwareFilePath(COMPLEXITY_REPORT_FILE, { projectRoot, tag })—before touching the file system.


150-175: Use tag-aware read helper consistently.

Per task-manager guidelines, core functions should call readJSON(path, projectRoot, tag) so tag awareness and caching stay intact. The new complexity-report branch calls readJSON(complexityReportPath) without the extra arguments, bypassing the helper logic. Pass projectRoot and tag (once the path issue above is fixed) to stay aligned with the shared utility.

♻️ Duplicate comments (7)
.changeset/gentle-cats-dance.md (1)

5-30: Condense the changeset to a single imperative summary.

This changeset still reads like release notes. Per repo guidance, the body must be a single, user-facing imperative sentence; the lint findings additionally require a top-level heading and a trailing newline. Please collapse the narrative into one terse summary and reintroduce the H1.

 ---
 "task-master-ai": minor
 ---
 
- 
-Migrate AI services to use generateObject for structured data generation
-
-This update migrates all AI service calls from generateText to generateObject, ensuring more reliable and structured responses across all commands.
-
-### Key Changes:
-
-- **Unified AI Service**: Replaced separate generateText implementations with a single generateObjectService that handles structured data generation
-- **JSON Mode Support**: Added proper JSON mode configuration for providers that support it (OpenAI, Anthropic, Google, Groq)
-- **Schema Validation**: Integrated Zod schemas for all AI-generated content with automatic validation
-- **Provider Compatibility**: Maintained compatibility with all existing providers while leveraging their native structured output capabilities
-- **Improved Reliability**: Structured output generation reduces parsing errors and ensures consistent data formats
-
-### Technical Improvements:
-
-- Centralized provider configuration in `ai-providers-unified.js`
-- Added `generateObject` support detection for each provider
-- Implemented proper error handling for schema validation failures
-- Maintained backward compatibility with existing prompt structures
-
-### Bug Fixes:
-
-- Fixed subtask ID numbering issue where AI was generating inconsistent IDs (101-105, 601-603) instead of sequential numbering (1, 2, 3...)
-- Enhanced prompt instructions to enforce proper ID generation patterns
-- Ensured subtasks display correctly as X.1, X.2, X.3 format
-
-This migration improves the reliability and consistency of AI-generated content throughout the Task Master application.
+# Adopt generateObject with schemas for AI responses
+

As per coding guidelines

tests/unit/scripts/modules/task-manager/analyze-task-complexity.test.js (1)

68-82: Remove unused generateTextService mock.

The test now uses generateObjectService exclusively. This unused mock should be removed to avoid confusion (same issue previously flagged).

Apply this diff:

 		generateObjectService: jest.fn().mockResolvedValue({
 			mainResult: {
 				complexityAnalysis: []
 			},
 			telemetryData: {
 				timestamp: new Date().toISOString(),
 				userId: '1234567890',
 				commandName: 'analyze-complexity',
 				modelUsed: 'claude-3-5-sonnet',
 				providerName: 'anthropic',
 				inputTokens: 1000,
 				outputTokens: 500,
 				totalTokens: 1500,
 				totalCost: 0.012414,
 				currency: 'USD'
 			}
-		}),
-		generateTextService: jest.fn().mockResolvedValue({
-			mainResult: '[]',
-			telemetryData: {
-				timestamp: new Date().toISOString(),
-				userId: '1234567890',
-				commandName: 'analyze-complexity',
-				modelUsed: 'claude-3-5-sonnet',
-				providerName: 'anthropic',
-				inputTokens: 1000,
-				outputTokens: 500,
-				totalTokens: 1500,
-				totalCost: 0.012414,
-				currency: 'USD'
-			}
-		}),
+		}),
src/ai-providers/base-provider.js (1)

275-278: Refactor hard-coded provider detection.

This hard-coded string comparison is fragile and not extensible (same issue previously flagged).

As recommended in the previous review, add a provider-configurable boolean property:

+	/**
+	 * Whether this provider needs explicit schema in JSON mode
+	 * Can be overridden by subclasses
+	 */
+	needsExplicitJsonSchema = false;
+
 	async generateObject(params) {
 		try {
 			this.validateParams(params);
 			this.validateMessages(params.messages);
 
 			if (!params.schema) {
 				throw new Error('Schema is required for object generation');
 			}
 			if (!params.objectName) {
 				throw new Error('Object name is required for object generation');
 			}
 
 			log(
 				'debug',
 				`Generating ${this.name} object ('${params.objectName}') with model: ${params.modelId}`
 			);
 
 			const client = await this.getClient(params);
-
-			// For providers that don't support tool mode (like claude-code),
-			// we need to ensure the schema is properly communicated in the prompt
-			const needsExplicitSchema = this.name === 'Claude Code';
-
 			const result = await generateObject({
 				model: client(params.modelId),
 				messages: params.messages,
 				schema: params.schema,
-				mode: needsExplicitSchema ? 'json' : 'auto',
+				mode: this.needsExplicitJsonSchema ? 'json' : 'auto',
 				schemaName: params.objectName,
 				schemaDescription: `Generate a valid JSON object for ${params.objectName}`,
 				maxTokens: params.maxTokens,
 				temperature: params.temperature
 			});

Then in src/ai-providers/claude-code.js:

constructor() {
	super();
	this.needsExplicitJsonSchema = true;
}
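The flag-based design can be demonstrated in isolation (class and method names here are hypothetical sketches, not the actual provider API):

```javascript
// Sketch: the base class defaults to 'auto' mode, and a subclass opts
// into explicit JSON-schema mode by flipping a single declared flag —
// no string comparison against provider names required.
class BaseProviderSketch {
	needsExplicitJsonSchema = false;

	objectGenerationMode() {
		return this.needsExplicitJsonSchema ? 'json' : 'auto';
	}
}

class ClaudeCodeProviderSketch extends BaseProviderSketch {
	constructor() {
		super();
		this.needsExplicitJsonSchema = true;
	}
}
```

Any future provider with the same limitation sets the flag in its own constructor instead of growing the base class's conditional.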
src/schemas/analyze-complexity.js (1)

3-10: Add validation constraints for consistency.

The schema lacks length constraints on string fields and bounds on recommendedSubtasks, which is inconsistent with other schemas in this PR (same issue previously flagged).

Apply these constraints to ensure data quality:

 export const ComplexityAnalysisItemSchema = z.object({
 	taskId: z.number().int().positive(),
-	taskTitle: z.string(),
+	taskTitle: z.string().trim().min(5).max(200),
 	complexityScore: z.number().min(1).max(10),
-	recommendedSubtasks: z.number().int().positive(),
-	expansionPrompt: z.string(),
-	reasoning: z.string()
+	recommendedSubtasks: z.number().int().positive().max(50),
+	expansionPrompt: z.string().trim().min(10).max(2000),
+	reasoning: z.string().trim().min(10).max(2000)
 });
tests/unit/prompts/prompt-migration.test.js (2)

18-45: Fix allowed context lookup

allowedContexts is keyed by "markdown formatting", but the lookup uses each banned phrase (allowedContexts[lowerPhrase]). This always yields undefined, so the “allowed context” escape hatch will never trigger. Map the banned phrase itself (in lowercase) to its permitted contexts so the guard actually works.

-	const allowedContexts = {
-		'markdown formatting': ['Use markdown formatting for better readability']
-	};
+	const allowedContexts = {
+		'respond only with': ['Use markdown formatting for better readability'],
+		'return only the': ['Use markdown formatting for better readability']
+	};
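The lookup the fix enables can be sketched as a small helper (the function name is hypothetical; the test inlines this logic):

```javascript
// Sketch: the allow-list must be keyed by the lowercased banned phrase
// itself — keying by an unrelated label makes the lookup always miss.
function isAllowedOccurrence(allowedContexts, bannedPhrase, line) {
	const contexts = allowedContexts[bannedPhrase.toLowerCase()] || [];
	return contexts.some((ctx) => line.includes(ctx));
}

const allowedContexts = {
	'respond only with': ['Use markdown formatting for better readability']
};
```

With the phrase as the key, a line containing the permitted context escapes the ban while other occurrences still fail.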

31-33: Remove unused promptData parse

promptData is parsed but never referenced. Drop the parse (or assert on its fields) to avoid dead code.

-			const content = fs.readFileSync(path.join(promptsDir, file), 'utf8');
-			const promptData = JSON.parse(content);
+			const content = fs.readFileSync(path.join(promptsDir, file), 'utf8');
src/schemas/base-schemas.js (1)

27-35: Loosen SubtaskSchema to preserve legacy subtasks

generateObjectService now validates against this schema before we reconcile AI output with the cached task. Many persisted subtasks (and even the prompts we hand the model) still carry details: null or very short titles/descriptions. As soon as the model echoes one of those back—which happens when we preserve completed work—the schema throws and the update hard-fails. Please relax these minima and allow nullable details so existing data keeps working.

 export const SubtaskSchema = z.object({
 	id: z.number().int().positive(),
-	title: z.string().min(5).max(200),
-	description: z.string().min(10),
-	dependencies: z.array(z.number().int()).default([]),
-	details: z.string().min(20),
+	title: z.string().min(1).max(200),
+	description: z.string().min(1),
+	dependencies: z.array(z.number().int()).default([]),
+	details: z.string().nullable().default(null),
 	status: z.enum(['pending', 'done', 'completed']).default('pending'),
 	testStrategy: z.string().nullable().default(null)
 });
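The practical effect of the loosened minima can be shown with a plain-JS check (a hand-rolled sketch with no zod dependency, equivalent in spirit to the relaxed schema):

```javascript
// Sketch of the loosened subtask checks: one-character titles and
// descriptions pass, and `details` may be null, so legacy persisted
// subtasks survive the round-trip through the model.
function acceptsLegacySubtask(s) {
	return (
		Number.isInteger(s.id) && s.id > 0 &&
		typeof s.title === 'string' &&
		s.title.length >= 1 && s.title.length <= 200 &&
		typeof s.description === 'string' && s.description.length >= 1 &&
		(s.details === null || typeof s.details === 'string')
	);
}
```

A persisted subtask like `{ id: 1, title: 'Fix', description: 'x', details: null }` fails the strict `min(5)`/`min(10)`/`min(20)` schema but passes these relaxed checks.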
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 46e3532 and 7660a29.

📒 Files selected for processing (28)
  • .changeset/gentle-cats-dance.md (1 hunks)
  • mcp-server/src/custom-sdk/schema-converter.js (1 hunks)
  • scripts/modules/task-manager/add-task.js (2 hunks)
  • scripts/modules/task-manager/analyze-task-complexity.js (3 hunks)
  • scripts/modules/task-manager/expand-task.js (3 hunks)
  • scripts/modules/task-manager/update-task-by-id.js (3 hunks)
  • scripts/modules/task-manager/update-tasks.js (2 hunks)
  • src/ai-providers/base-provider.js (1 hunks)
  • src/prompts/analyze-complexity.json (1 hunks)
  • src/prompts/expand-task.json (1 hunks)
  • src/prompts/parse-prd.json (1 hunks)
  • src/prompts/update-task.json (1 hunks)
  • src/prompts/update-tasks.json (1 hunks)
  • src/schemas/add-task.js (1 hunks)
  • src/schemas/analyze-complexity.js (1 hunks)
  • src/schemas/base-schemas.js (1 hunks)
  • src/schemas/expand-task.js (1 hunks)
  • src/schemas/parse-prd.js (1 hunks)
  • src/schemas/registry.js (1 hunks)
  • src/schemas/update-subtask.js (1 hunks)
  • src/schemas/update-task.js (1 hunks)
  • src/schemas/update-tasks.js (1 hunks)
  • tests/unit/prompts/prompt-migration.test.js (1 hunks)
  • tests/unit/scripts/modules/task-manager/analyze-task-complexity.test.js (6 hunks)
  • tests/unit/scripts/modules/task-manager/complexity-report-tag-isolation.test.js (3 hunks)
  • tests/unit/scripts/modules/task-manager/expand-task.test.js (15 hunks)
  • tests/unit/scripts/modules/task-manager/update-task-by-id.test.js (2 hunks)
  • tests/unit/scripts/modules/task-manager/update-tasks.test.js (7 hunks)
🧰 Additional context used
📓 Path-based instructions (15)
**/*.js

📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)

**/*.js: Declare and initialize global variables at the top of modules to avoid hoisting issues.
Use proper function declarations to avoid hoisting issues and initialize variables before they are referenced.
Do not reference variables before their declaration in module scope.
Use dynamic imports (import()) to avoid initialization order issues in modules.

Files:

  • src/schemas/add-task.js
  • tests/unit/scripts/modules/task-manager/update-tasks.test.js
  • src/ai-providers/base-provider.js
  • src/schemas/parse-prd.js
  • scripts/modules/task-manager/update-task-by-id.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
  • scripts/modules/task-manager/expand-task.js
  • src/schemas/base-schemas.js
  • src/schemas/analyze-complexity.js
  • src/schemas/registry.js
  • mcp-server/src/custom-sdk/schema-converter.js
  • tests/unit/prompts/prompt-migration.test.js
  • tests/unit/scripts/modules/task-manager/analyze-task-complexity.test.js
  • src/schemas/update-tasks.js
  • scripts/modules/task-manager/add-task.js
  • tests/unit/scripts/modules/task-manager/update-task-by-id.test.js
  • src/schemas/update-task.js
  • tests/unit/scripts/modules/task-manager/complexity-report-tag-isolation.test.js
  • src/schemas/update-subtask.js
  • scripts/modules/task-manager/update-tasks.js
  • scripts/modules/task-manager/analyze-task-complexity.js
  • src/schemas/expand-task.js
tests/{unit,integration,e2e,fixtures}/**/*.js

📄 CodeRabbit inference engine (.cursor/rules/architecture.mdc)

Test files must be organized as follows: unit tests in tests/unit/, integration tests in tests/integration/, end-to-end tests in tests/e2e/, and test fixtures in tests/fixtures/.

Files:

  • tests/unit/scripts/modules/task-manager/update-tasks.test.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
  • tests/unit/prompts/prompt-migration.test.js
  • tests/unit/scripts/modules/task-manager/analyze-task-complexity.test.js
  • tests/unit/scripts/modules/task-manager/update-task-by-id.test.js
  • tests/unit/scripts/modules/task-manager/complexity-report-tag-isolation.test.js
**/*.{test,spec}.{js,ts,jsx,tsx}

📄 CodeRabbit inference engine (.cursor/rules/git_workflow.mdc)

**/*.{test,spec}.{js,ts,jsx,tsx}: Create a test file and ensure all tests pass when all subtasks are complete; commit tests if added or modified
When all subtasks are complete, run final testing using the appropriate test runner (e.g., npm test, jest, or manual testing)

Files:

  • tests/unit/scripts/modules/task-manager/update-tasks.test.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
  • tests/unit/prompts/prompt-migration.test.js
  • tests/unit/scripts/modules/task-manager/analyze-task-complexity.test.js
  • tests/unit/scripts/modules/task-manager/update-task-by-id.test.js
  • tests/unit/scripts/modules/task-manager/complexity-report-tag-isolation.test.js
**/*.test.js

📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)

**/*.test.js: Never use asynchronous operations in tests. Make all mocks return synchronous values when possible.
Always mock tests properly based on the way the tested functions are defined and used.
Follow the test file organization: mocks must be set up before importing modules under test, and spies on mocked modules should be set up after imports.
Use fixtures from tests/fixtures/ for consistent sample data across tests.
Always declare mocks before importing the modules being tested in Jest test files.
Use jest.spyOn() after imports to create spies on mock functions and reference these spies in test assertions.
When testing functions with callbacks, get the callback from your mock's call arguments, execute it directly with test inputs, and verify the results.
For ES modules, use jest.mock() before static imports and jest.unstable_mockModule() before dynamic imports to mock dependencies.
Reset mock functions (mockFn.mockReset()) before dynamic imports if they might have been called previously.
When verifying console assertions, assert against the actual arguments passed (single formatted string), not multiple arguments.
Use mock-fs to mock file system operations in tests, and restore the file system after each test.
Mock API calls (e.g., Anthropic/Claude) by mocking the entire module and providing predictable responses.
Set mock environment variables in test setup and restore them after each test.
Maintain test fixtures separate from test logic.
Follow the mock-first-then-import pattern for all Jest mocks.
Do not define mock variables before jest.mock() calls (they won't be accessible due to hoisting).
Use test-specific file paths (e.g., 'test-tasks.json') for all file operations in tests.
Mock readJSON and writeJSON to avoid real file system interactions in tests.
Verify file operations use the correct paths in expect statements.
Use different file paths for each test to avoid test interdependence.
Verify modifications on the in-memory task objects passed to w...

Files:

  • tests/unit/scripts/modules/task-manager/update-tasks.test.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
  • tests/unit/prompts/prompt-migration.test.js
  • tests/unit/scripts/modules/task-manager/analyze-task-complexity.test.js
  • tests/unit/scripts/modules/task-manager/update-task-by-id.test.js
  • tests/unit/scripts/modules/task-manager/complexity-report-tag-isolation.test.js
tests/unit/**/*.test.js

📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)

tests/unit/**/*.test.js: Unit tests must be located in tests/unit/, test individual functions and utilities in isolation, mock all external dependencies, and keep tests small, focused, and fast.
Do not include actual command execution in unit tests.

Files:

  • tests/unit/scripts/modules/task-manager/update-tasks.test.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
  • tests/unit/prompts/prompt-migration.test.js
  • tests/unit/scripts/modules/task-manager/analyze-task-complexity.test.js
  • tests/unit/scripts/modules/task-manager/update-task-by-id.test.js
  • tests/unit/scripts/modules/task-manager/complexity-report-tag-isolation.test.js
tests/{unit,integration,e2e}/**/*.test.js

📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)

tests/{unit,integration,e2e}/**/*.test.js: When testing CLI commands built with Commander.js, test the command action handlers directly rather than trying to mock the entire Commander.js chain.
When mocking the Commander.js chain, mock ALL chainable methods (option, argument, action, on, etc.) and return this (or the mock object) from all chainable method mocks.
Explicitly handle all options, including defaults and shorthand flags (e.g., -p for --prompt), and include null/undefined checks in test implementations for parameters that might be optional.
Do not try to use the real action implementation without proper mocking, and do not mock Commander partially—either mock it completely or test the action directly.
Mock the action handlers for CLI commands and verify they're called with correct arguments.
Use sample task fixtures for consistent test data, mock file system operations, and test both success and error paths for task operations.
Mock console output and verify correct formatting in UI function tests. Use flexible assertions like toContain() or toMatch() for formatted output.
Mock chalk functions to return the input text to make testing easier while still verifying correct function calls.

Files:

  • tests/unit/scripts/modules/task-manager/update-tasks.test.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
  • tests/unit/prompts/prompt-migration.test.js
  • tests/unit/scripts/modules/task-manager/analyze-task-complexity.test.js
  • tests/unit/scripts/modules/task-manager/update-task-by-id.test.js
  • tests/unit/scripts/modules/task-manager/complexity-report-tag-isolation.test.js
**/*.{test,spec}.*

📄 CodeRabbit inference engine (.cursor/rules/test_workflow.mdc)

Test files should follow naming conventions: *.test.*, *.spec.*, or *_test.* depending on the language

Files:

  • tests/unit/scripts/modules/task-manager/update-tasks.test.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
  • tests/unit/prompts/prompt-migration.test.js
  • tests/unit/scripts/modules/task-manager/analyze-task-complexity.test.js
  • tests/unit/scripts/modules/task-manager/update-task-by-id.test.js
  • tests/unit/scripts/modules/task-manager/complexity-report-tag-isolation.test.js
tests/{unit,integration,e2e}/**

📄 CodeRabbit inference engine (.cursor/rules/test_workflow.mdc)

Organize test directories by test type (unit, integration, e2e) and mirror source structure where possible

Files:

  • tests/unit/scripts/modules/task-manager/update-tasks.test.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
  • tests/unit/prompts/prompt-migration.test.js
  • tests/unit/scripts/modules/task-manager/analyze-task-complexity.test.js
  • tests/unit/scripts/modules/task-manager/update-task-by-id.test.js
  • tests/unit/scripts/modules/task-manager/complexity-report-tag-isolation.test.js
src/ai-providers/*.js

📄 CodeRabbit inference engine (.cursor/rules/ai_providers.mdc)

src/ai-providers/*.js: Create a new provider module file in src/ai-providers/ named .js when adding a new AI provider.
Provider modules must export three functions: generateText, streamText, and generateObject.
Provider modules must import the provider's create function from @ai-sdk/, and import generateText, streamText, generateObject from the core ai package, as well as the log utility from ../../scripts/modules/utils.js.
Implement generateText, streamText, and generateObject functions in provider modules with basic validation and try/catch error handling.

Provider-specific wrappers for Vercel AI SDK functions must be implemented in src/ai-providers/*.js, each file corresponding to a provider.

Files:

  • src/ai-providers/base-provider.js
.changeset/*.md

📄 CodeRabbit inference engine (.cursor/rules/changeset.mdc)

.changeset/*.md: When running npm run changeset or npx changeset add, provide a concise summary of the changes for the CHANGELOG.md in imperative mood, typically a single line, and not a detailed Git commit message.
The changeset summary should be user-facing, describing what changed in the released version that is relevant to users or consumers of the package.
Do not use your detailed Git commit message body as the changeset summary.

Files:

  • .changeset/gentle-cats-dance.md
.changeset/*

📄 CodeRabbit inference engine (.cursor/rules/new_features.mdc)

Create appropriate changesets for new features, use semantic versioning, include tagged system information in release notes, and document breaking changes if any.

Files:

  • .changeset/gentle-cats-dance.md
scripts/modules/task-manager/*.js

📄 CodeRabbit inference engine (.cursor/rules/ai_services.mdc)

scripts/modules/task-manager/*.js: Centralize all LLM calls through generateTextService or generateObjectService.
Do not import or call anything from the old ai-services.js, ai-client-factory.js, or ai-client-utils.js files.
Do not initialize AI clients (Anthropic, Perplexity, etc.) directly within core logic (task-manager/) or MCP direct functions.
Do not fetch AI-specific parameters (model ID, max tokens, temp) using config-manager.js getters for the AI call. Pass the role instead.
Do not implement fallback or retry logic outside ai-services-unified.js.
Do not handle API key resolution outside the service layer (it uses utils.js internally).
Determine the appropriate role (main, research, fallback) in your core logic and pass it to the service.
Pass the session object (received in the context parameter, especially from direct function wrappers) to the service call when in MCP context.
Use generateTextService and implement robust manual JSON parsing (with Zod validation after parsing) when structured output is needed, as generateObjectService has shown unreliability with some providers/schemas.
Be aware of potential reliability issues with generateObjectService across different providers and complex schemas. Prefer generateTextService + manual parsing as a more robust alternative for structured data needs.

Files in scripts/modules/task-manager/ should each handle a specific action related to task management (e.g., add-task.js, expand-task.js), supporting the tagged task lists system and backward compatibility.

Files:

  • scripts/modules/task-manager/update-task-by-id.js
  • scripts/modules/task-manager/expand-task.js
  • scripts/modules/task-manager/add-task.js
  • scripts/modules/task-manager/update-tasks.js
  • scripts/modules/task-manager/analyze-task-complexity.js
scripts/modules/**

📄 CodeRabbit inference engine (.cursor/rules/dev_workflow.mdc)

When using the MCP server, restart it if core logic in scripts/modules or MCP tool/direct function definitions change.

Files:

  • scripts/modules/task-manager/update-task-by-id.js
  • scripts/modules/task-manager/expand-task.js
  • scripts/modules/task-manager/add-task.js
  • scripts/modules/task-manager/update-tasks.js
  • scripts/modules/task-manager/analyze-task-complexity.js
scripts/modules/task-manager/*

📄 CodeRabbit inference engine (.cursor/rules/tags.mdc)

scripts/modules/task-manager/*: All core functions in scripts/modules/task-manager/ must accept a context parameter and use it to extract projectRoot and tag
All core functions in scripts/modules/task-manager/ must use readJSON(tasksPath, projectRoot, tag) and writeJSON(tasksPath, data, projectRoot, tag)

Files:

  • scripts/modules/task-manager/update-task-by-id.js
  • scripts/modules/task-manager/expand-task.js
  • scripts/modules/task-manager/add-task.js
  • scripts/modules/task-manager/update-tasks.js
  • scripts/modules/task-manager/analyze-task-complexity.js
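The context-parameter rule above can be sketched like this. The `readJSON`/`writeJSON` stubs only mirror the call shape the rule mandates; their bodies, the in-memory `store`, and `addTaskCore` itself are assumptions for illustration.

```javascript
// Stubbed I/O helpers keyed by (projectRoot, tag) to imitate tagged task lists.
const store = {};
const readJSON = (tasksPath, projectRoot, tag) =>
	store[`${projectRoot}:${tag}:${tasksPath}`] ?? { tasks: [] };
const writeJSON = (tasksPath, data, projectRoot, tag) => {
	store[`${projectRoot}:${tag}:${tasksPath}`] = data;
};

// A core function extracts projectRoot and tag from context and threads
// them through both the read and the write.
function addTaskCore(tasksPath, title, context = {}) {
	const { projectRoot, tag } = context;
	const data = readJSON(tasksPath, projectRoot, tag);
	data.tasks.push({ id: data.tasks.length + 1, title });
	writeJSON(tasksPath, data, projectRoot, tag);
	return data;
}

const result = addTaskCore("test-tasks.json", "Write docs", {
	projectRoot: "/repo",
	tag: "master"
});
```

Threading projectRoot and tag through every read and write is what keeps tagged task lists isolated from each other.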
scripts/modules/task-manager/**/*.js

📄 CodeRabbit inference engine (.cursor/rules/telemetry.mdc)

scripts/modules/task-manager/**/*.js: Functions in scripts/modules/task-manager/ that invoke AI services must call the appropriate AI service function (e.g., generateObjectService), passing commandName and outputType in the params object.
Core logic functions in scripts/modules/task-manager/ must return an object that includes aiServiceResponse.telemetryData.
If the core logic function handles CLI output (outputFormat === 'text' or 'cli'), and aiServiceResponse.telemetryData is available, it must call displayAiUsageSummary(aiServiceResponse.telemetryData, 'cli') from scripts/modules/ui.js.

Do not call AI-specific getters (like getMainModelId, getMainMaxTokens) from core logic functions in scripts/modules/task-manager/*; instead, pass the role to the unified AI service.

Files:

  • scripts/modules/task-manager/update-task-by-id.js
  • scripts/modules/task-manager/expand-task.js
  • scripts/modules/task-manager/add-task.js
  • scripts/modules/task-manager/update-tasks.js
  • scripts/modules/task-manager/analyze-task-complexity.js
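The telemetry contract above can be sketched as follows. `generateObjectService` is stubbed (and made synchronous for brevity; the real service is awaited), so only the params shape and the returned `telemetryData` reflect the rule.

```javascript
// Sync stub standing in for the async unified AI service.
function generateObjectService(params) {
	return {
		mainResult: { task: { id: 1, title: "Stub" } },
		telemetryData: {
			commandName: params.commandName,
			outputType: params.outputType,
			totalCost: 0
		}
	};
}

// Core logic passes role, commandName, and outputType, and returns
// telemetryData alongside its own result.
function addTask(prompt) {
	const aiServiceResponse = generateObjectService({
		role: "main",
		commandName: "add-task",
		outputType: "cli",
		prompt
	});
	return {
		newTask: aiServiceResponse.mainResult.task,
		telemetryData: aiServiceResponse.telemetryData
	};
}

const out = addTask("Add logging");
```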
🧠 Learnings (34)
📓 Common learnings
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1262
File: scripts/modules/task-manager/update-tasks.js:216-233
Timestamp: 2025-10-01T19:53:34.261Z
Learning: For scripts/modules/task-manager/*.js: Use generateObjectService with Zod schemas for structured AI responses rather than generateTextService + manual JSON parsing, as modern AI providers increasingly support the tool use and generateObject paradigm with improved reliability.
📚 Learning: 2025-07-18T17:14:29.399Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tasks.mdc:0-0
Timestamp: 2025-07-18T17:14:29.399Z
Learning: Applies to scripts/modules/task-manager.js : Each task object must include all required properties (id, title, description, status, dependencies, priority, details, testStrategy, subtasks) and provide default values for optional properties. Extra properties not in the standard schema must not be added.

Applied to files:

  • src/schemas/add-task.js
  • src/schemas/base-schemas.js
  • src/schemas/update-tasks.js
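The required-properties rule above can be sketched with a small normalizer: every required property present, optional ones defaulted, unknown keys dropped. The defaults chosen here (`"pending"`, `"medium"`, empty strings and arrays) are assumptions for illustration, not the project's canonical values.

```javascript
// Assumed default values; the real schema may differ.
const TASK_DEFAULTS = {
	status: "pending",
	dependencies: [],
	priority: "medium",
	details: "",
	testStrategy: "",
	subtasks: []
};

function normalizeTask(raw) {
	const task = {
		id: raw.id,
		title: raw.title,
		description: raw.description ?? "",
		...TASK_DEFAULTS
	};
	// Copy only known optional keys from the input; extras are discarded.
	for (const key of Object.keys(TASK_DEFAULTS)) {
		if (raw[key] !== undefined) task[key] = raw[key];
	}
	return task;
}

const cleaned = normalizeTask({ id: 7, title: "Ship", rogueField: true });
```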
📚 Learning: 2025-10-01T19:53:34.261Z
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1262
File: scripts/modules/task-manager/update-tasks.js:216-233
Timestamp: 2025-10-01T19:53:34.261Z
Learning: For scripts/modules/task-manager/*.js: Use generateObjectService with Zod schemas for structured AI responses rather than generateTextService + manual JSON parsing, as modern AI providers increasingly support the tool use and generateObject paradigm with improved reliability.

Applied to files:

  • src/schemas/add-task.js
  • tests/unit/scripts/modules/task-manager/update-tasks.test.js
  • src/schemas/parse-prd.js
  • .changeset/gentle-cats-dance.md
  • scripts/modules/task-manager/update-task-by-id.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
  • scripts/modules/task-manager/expand-task.js
  • scripts/modules/task-manager/add-task.js
  • tests/unit/scripts/modules/task-manager/update-task-by-id.test.js
  • tests/unit/scripts/modules/task-manager/complexity-report-tag-isolation.test.js
  • scripts/modules/task-manager/update-tasks.js
  • scripts/modules/task-manager/analyze-task-complexity.js
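The commandName-to-schema registry this learning relies on can be sketched like so. The map mirrors the role of `COMMAND_SCHEMAS` in `src/schemas/registry.js`, but the entries here are plain predicate functions standing in for the real Zod schemas, and the command keys are illustrative.

```javascript
// Plain validators stand in for Zod schemas for this sketch.
const COMMAND_SCHEMAS = {
	"expand-task": (data) => Array.isArray(data.subtasks),
	"update-task": (data) => typeof data.task === "object" && data.task !== null,
	"analyze-complexity": (data) => Array.isArray(data.complexityAnalysis)
};

function validateResponse(commandName, data) {
	const validate = COMMAND_SCHEMAS[commandName];
	if (!validate) throw new Error(`No schema registered for ${commandName}`);
	if (!validate(data)) throw new Error(`Invalid ${commandName} response`);
	return data;
}

const ok = validateResponse("expand-task", { subtasks: [] });
```

Centralizing the mapping means each command looks up its response schema by name instead of declaring parsing logic inline, which is where the 500+ lines of removed parsing code used to live.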
📚 Learning: 2025-07-18T17:16:13.793Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tests.mdc:0-0
Timestamp: 2025-07-18T17:16:13.793Z
Learning: Applies to **/*.test.js : Do not import real AI service clients in tests; create fully mocked versions that return predictable responses.

Applied to files:

  • tests/unit/scripts/modules/task-manager/update-tasks.test.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
  • tests/unit/scripts/modules/task-manager/analyze-task-complexity.test.js
  • tests/unit/scripts/modules/task-manager/update-task-by-id.test.js
  • tests/unit/scripts/modules/task-manager/complexity-report-tag-isolation.test.js
📚 Learning: 2025-07-18T17:16:13.793Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tests.mdc:0-0
Timestamp: 2025-07-18T17:16:13.793Z
Learning: Applies to tests/{unit,integration,e2e}/**/*.test.js : Use sample task fixtures for consistent test data, mock file system operations, and test both success and error paths for task operations.

Applied to files:

  • tests/unit/scripts/modules/task-manager/update-tasks.test.js
  • tests/unit/scripts/modules/task-manager/update-task-by-id.test.js
📚 Learning: 2025-07-18T17:14:54.131Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/telemetry.mdc:0-0
Timestamp: 2025-07-18T17:14:54.131Z
Learning: Applies to scripts/modules/task-manager/**/*.js : Core logic functions in scripts/modules/task-manager/ must return an object that includes aiServiceResponse.telemetryData.

Applied to files:

  • tests/unit/scripts/modules/task-manager/update-tasks.test.js
  • scripts/modules/task-manager/update-task-by-id.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
  • tests/unit/scripts/modules/task-manager/complexity-report-tag-isolation.test.js
  • scripts/modules/task-manager/update-tasks.js
📚 Learning: 2025-07-18T17:16:13.793Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tests.mdc:0-0
Timestamp: 2025-07-18T17:16:13.793Z
Learning: Applies to **/*.test.js : Verify modifications on the in-memory task objects passed to writeJSON.

Applied to files:

  • tests/unit/scripts/modules/task-manager/update-tasks.test.js
📚 Learning: 2025-07-18T17:06:57.833Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_services.mdc:0-0
Timestamp: 2025-07-18T17:06:57.833Z
Learning: Applies to scripts/modules/ai-services-unified.js : Centralize all LLM calls through `generateTextService` or `generateObjectService`.

Applied to files:

  • tests/unit/scripts/modules/task-manager/update-tasks.test.js
  • scripts/modules/task-manager/update-task-by-id.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
  • tests/unit/scripts/modules/task-manager/complexity-report-tag-isolation.test.js
  • scripts/modules/task-manager/update-tasks.js
  • scripts/modules/task-manager/analyze-task-complexity.js
📚 Learning: 2025-07-18T17:06:57.833Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_services.mdc:0-0
Timestamp: 2025-07-18T17:06:57.833Z
Learning: Applies to scripts/modules/commands.js : Centralize all LLM calls through `generateTextService` or `generateObjectService`.

Applied to files:

  • tests/unit/scripts/modules/task-manager/update-tasks.test.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to src/ai-providers/*.js : Provider modules must import the provider's create<ProviderName> function from `@ai-sdk/<provider-name>`, and import generateText, streamText, generateObject from the core ai package, as well as the log utility from ../../scripts/modules/utils.js.

Applied to files:

  • tests/unit/scripts/modules/task-manager/update-tasks.test.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
  • tests/unit/scripts/modules/task-manager/complexity-report-tag-isolation.test.js
📚 Learning: 2025-07-18T17:06:57.833Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_services.mdc:0-0
Timestamp: 2025-07-18T17:06:57.833Z
Learning: Applies to scripts/modules/task-manager/*.js : Centralize all LLM calls through `generateTextService` or `generateObjectService`.

Applied to files:

  • tests/unit/scripts/modules/task-manager/update-tasks.test.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
  • scripts/modules/task-manager/add-task.js
  • tests/unit/scripts/modules/task-manager/complexity-report-tag-isolation.test.js
  • scripts/modules/task-manager/update-tasks.js
📚 Learning: 2025-07-18T17:12:57.903Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/new_features.mdc:0-0
Timestamp: 2025-07-18T17:12:57.903Z
Learning: Applies to scripts/modules/ai-services.js : Features that use AI models belong in 'scripts/modules/ai-services.js'.

Applied to files:

  • tests/unit/scripts/modules/task-manager/update-tasks.test.js
📚 Learning: 2025-07-18T17:06:57.833Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_services.mdc:0-0
Timestamp: 2025-07-18T17:06:57.833Z
Learning: Applies to scripts/modules/ai-services-unified.js : Do not import or call anything from the old `ai-services.js`, `ai-client-factory.js`, or `ai-client-utils.js` files.

Applied to files:

  • tests/unit/scripts/modules/task-manager/update-tasks.test.js
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to src/ai-providers/*.js : Provider modules must export three functions: generate<ProviderName>Text, stream<ProviderName>Text, and generate<ProviderName>Object.

Applied to files:

  • tests/unit/scripts/modules/task-manager/update-tasks.test.js
📚 Learning: 2025-07-18T17:14:54.131Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/telemetry.mdc:0-0
Timestamp: 2025-07-18T17:14:54.131Z
Learning: Applies to scripts/modules/task-manager/**/*.js : Functions in scripts/modules/task-manager/ that invoke AI services must call the appropriate AI service function (e.g., generateObjectService), passing commandName and outputType in the params object.

Applied to files:

  • tests/unit/scripts/modules/task-manager/update-tasks.test.js
  • scripts/modules/task-manager/update-task-by-id.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
  • scripts/modules/task-manager/add-task.js
  • scripts/modules/task-manager/update-tasks.js
  • scripts/modules/task-manager/analyze-task-complexity.js
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to src/ai-providers/*.js : Implement generate<ProviderName>Text, stream<ProviderName>Text, and generate<ProviderName>Object functions in provider modules with basic validation and try/catch error handling.

Applied to files:

  • tests/unit/scripts/modules/task-manager/update-tasks.test.js
  • src/ai-providers/base-provider.js
  • scripts/modules/task-manager/update-task-by-id.js
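The provider-module contract above can be sketched as follows. The `sdk` object is a stub for the functions a real module would import from the core `ai` package, and "Example" is a placeholder provider name; only the three-export shape with validation and try/catch reflects the rule.

```javascript
// Stub for the real generateText/generateObject imports from the ai package.
const sdk = {
	generateText: (p) => ({ text: `echo: ${p.prompt}` }),
	generateObject: (p) => ({ object: { prompt: p.prompt } })
};

function generateExampleText(params) {
	if (!params?.prompt) throw new Error("prompt is required");
	try {
		return sdk.generateText(params);
	} catch (err) {
		throw new Error(`Example text generation failed: ${err.message}`);
	}
}

function generateExampleObject(params) {
	if (!params?.schema) throw new Error("schema is required");
	try {
		return sdk.generateObject(params);
	} catch (err) {
		throw new Error(`Example object generation failed: ${err.message}`);
	}
}

const textResult = generateExampleText({ prompt: "hi" });
const objResult = generateExampleObject({ schema: {}, prompt: "hi" });
```

A real module would also export a stream<ProviderName>Text variant with the same validation-then-delegate structure.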
📚 Learning: 2025-07-20T01:35:05.831Z
Learnt from: rtmcrc
PR: eyaltoledano/claude-task-master#933
File: scripts/modules/task-manager/parse-prd.js:226-226
Timestamp: 2025-07-20T01:35:05.831Z
Learning: The parsePRD function in scripts/modules/task-manager/parse-prd.js has a different parameter structure than other task-manager functions - it uses `options` parameter instead of `context` parameter because it generates tasks from PRD documents rather than operating on existing tasks.

Applied to files:

  • src/schemas/parse-prd.js
📚 Learning: 2025-07-18T17:14:29.399Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tasks.mdc:0-0
Timestamp: 2025-07-18T17:14:29.399Z
Learning: Applies to scripts/modules/task-manager.js : Extract tasks from PRD documents using AI, create them in the current tag context (defaulting to 'master'), provide clear prompts to guide AI task generation, and validate/clean up AI-generated tasks.

Applied to files:

  • src/schemas/parse-prd.js
  • scripts/modules/task-manager/update-task-by-id.js
  • scripts/modules/task-manager/expand-task.js
  • src/prompts/parse-prd.json
  • scripts/modules/task-manager/add-task.js
  • scripts/modules/task-manager/update-tasks.js
📚 Learning: 2025-07-18T17:07:53.100Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/changeset.mdc:0-0
Timestamp: 2025-07-18T17:07:53.100Z
Learning: Applies to .changeset/*.md : When running `npm run changeset` or `npx changeset add`, provide a concise summary of the changes for the `CHANGELOG.md` in imperative mood, typically a single line, and not a detailed Git commit message.

Applied to files:

  • .changeset/gentle-cats-dance.md
📚 Learning: 2025-07-18T17:07:53.100Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/changeset.mdc:0-0
Timestamp: 2025-07-18T17:07:53.100Z
Learning: Applies to .changeset/*.md : The changeset summary should be user-facing, describing what changed in the released version that is relevant to users or consumers of the package.

Applied to files:

  • .changeset/gentle-cats-dance.md
📚 Learning: 2025-07-18T17:18:17.759Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/utilities.mdc:0-0
Timestamp: 2025-07-18T17:18:17.759Z
Learning: Applies to scripts/modules/task-manager/**/*.js : Do not call AI-specific getters (like `getMainModelId`, `getMainMaxTokens`) from core logic functions in `scripts/modules/task-manager/*`; instead, pass the `role` to the unified AI service.

Applied to files:

  • .changeset/gentle-cats-dance.md
  • scripts/modules/task-manager/update-task-by-id.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
  • scripts/modules/task-manager/add-task.js
  • tests/unit/scripts/modules/task-manager/complexity-report-tag-isolation.test.js
  • scripts/modules/task-manager/update-tasks.js
  • scripts/modules/task-manager/analyze-task-complexity.js
📚 Learning: 2025-07-18T17:14:29.399Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tasks.mdc:0-0
Timestamp: 2025-07-18T17:14:29.399Z
Learning: Applies to scripts/modules/task-manager.js : Use consistent formatting for task files, include all task properties in text files, and format dependencies with status indicators.

Applied to files:

  • scripts/modules/task-manager/update-task-by-id.js
  • src/schemas/base-schemas.js
  • scripts/modules/task-manager/update-tasks.js
📚 Learning: 2025-07-18T17:14:29.399Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tasks.mdc:0-0
Timestamp: 2025-07-18T17:14:29.399Z
Learning: Applies to scripts/modules/task-manager.js : Use AI to generate detailed subtasks within the current tag context, considering complexity analysis for subtask counts and ensuring proper IDs for newly created subtasks.

Applied to files:

  • scripts/modules/task-manager/update-task-by-id.js
  • scripts/modules/task-manager/expand-task.js
  • scripts/modules/task-manager/add-task.js
  • tests/unit/scripts/modules/task-manager/complexity-report-tag-isolation.test.js
  • scripts/modules/task-manager/update-tasks.js
  • scripts/modules/task-manager/analyze-task-complexity.js
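The subtask-ID convention this PR settles on (sequential integers stored on the subtask, parent-qualified X.1/X.2 form computed only for display) can be sketched like this. The function names and task shape are illustrative, not the project's actual helpers.

```javascript
// New subtasks continue the sequence from the parent's existing count.
function appendSubtasks(parentTask, titles) {
	const nextId = parentTask.subtasks.length + 1;
	titles.forEach((title, i) => {
		parentTask.subtasks.push({ id: nextId + i, title });
	});
	return parentTask;
}

// The X.Y form is derived at display time, never stored.
const displayId = (parentTask, subtask) => `${parentTask.id}.${subtask.id}`;

const task = { id: 4, subtasks: [{ id: 1, title: "Scaffold" }] };
appendSubtasks(task, ["Implement", "Test"]);
const labels = task.subtasks.map((s) => displayId(task, s));
```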
📚 Learning: 2025-07-18T17:06:57.833Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_services.mdc:0-0
Timestamp: 2025-07-18T17:06:57.833Z
Learning: Applies to scripts/modules/task-manager/*.js : Do not import or call anything from the old `ai-services.js`, `ai-client-factory.js`, or `ai-client-utils.js` files.

Applied to files:

  • scripts/modules/task-manager/update-task-by-id.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
  • scripts/modules/task-manager/add-task.js
  • tests/unit/scripts/modules/task-manager/complexity-report-tag-isolation.test.js
  • scripts/modules/task-manager/update-tasks.js
  • scripts/modules/task-manager/analyze-task-complexity.js
📚 Learning: 2025-07-18T17:14:29.399Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tasks.mdc:0-0
Timestamp: 2025-07-18T17:14:29.399Z
Learning: Applies to scripts/modules/task-manager.js : Provide functions for updating task status within the current tag context, handling both individual tasks and subtasks, and considering subtask status when updating parent tasks.

Applied to files:

  • scripts/modules/task-manager/update-task-by-id.js
📚 Learning: 2025-07-18T17:14:54.131Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/telemetry.mdc:0-0
Timestamp: 2025-07-18T17:14:54.131Z
Learning: Applies to scripts/modules/task-manager/**/*.js : If the core logic function handles CLI output (outputFormat === 'text' or 'cli'), and aiServiceResponse.telemetryData is available, it must call displayAiUsageSummary(aiServiceResponse.telemetryData, 'cli') from scripts/modules/ui.js.

Applied to files:

  • scripts/modules/task-manager/update-task-by-id.js
  • scripts/modules/task-manager/update-tasks.js
  • scripts/modules/task-manager/analyze-task-complexity.js
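The CLI-output telemetry rule above can be sketched as follows. `displayAiUsageSummary` is stubbed to record its calls instead of printing the boxed summary, and `finishCommand` is an invented wrapper around the end of a core logic function.

```javascript
// Record calls instead of printing, so the behavior is observable.
const displayed = [];
const displayAiUsageSummary = (telemetryData, outputType) => {
	displayed.push({ telemetryData, outputType });
};

function finishCommand(outputFormat, aiServiceResponse) {
	const isCli = outputFormat === "text" || outputFormat === "cli";
	if (isCli && aiServiceResponse?.telemetryData) {
		displayAiUsageSummary(aiServiceResponse.telemetryData, "cli");
	}
}

finishCommand("text", { telemetryData: { totalCost: 0.01 } });
finishCommand("json", { telemetryData: { totalCost: 0.02 } }); // MCP/json: no display
```

The guard matters in MCP context: telemetry still flows back in the returned object, but the boxed summary is only rendered for CLI output.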
📚 Learning: 2025-07-18T17:10:02.683Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/dev_workflow.mdc:0-0
Timestamp: 2025-07-18T17:10:02.683Z
Learning: When breaking down complex tasks in Taskmaster, use the `expand_task` command with appropriate flags (`--num`, `--research`, `--force`, `--prompt`) and review generated subtasks for accuracy.

Applied to files:

  • scripts/modules/task-manager/expand-task.js
📚 Learning: 2025-07-18T17:10:12.881Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/dev_workflow.mdc:0-0
Timestamp: 2025-07-18T17:10:12.881Z
Learning: When breaking down complex tasks, use the `expand_task` command with appropriate flags (`--force`, `--research`, `--num`, `--prompt`) and review generated subtasks for accuracy.

Applied to files:

  • scripts/modules/task-manager/expand-task.js
📚 Learning: 2025-07-18T17:09:13.815Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/context_gathering.mdc:0-0
Timestamp: 2025-07-18T17:09:13.815Z
Learning: Commands such as `analyze-complexity`, `expand-task`, `update-task`, and `add-task` should consider adopting the context gathering pattern for improved AI-powered assistance.

Applied to files:

  • scripts/modules/task-manager/expand-task.js
  • scripts/modules/task-manager/analyze-task-complexity.js
📚 Learning: 2025-07-18T17:16:13.793Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tests.mdc:0-0
Timestamp: 2025-07-18T17:16:13.793Z
Learning: Applies to tests/{unit,integration,e2e}/**/*.test.js : Explicitly handle all options, including defaults and shorthand flags (e.g., -p for --prompt), and include null/undefined checks in test implementations for parameters that might be optional.

Applied to files:

  • tests/unit/prompts/prompt-migration.test.js
📚 Learning: 2025-07-31T22:07:49.716Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/commands.mdc:0-0
Timestamp: 2025-07-31T22:07:49.716Z
Learning: Applies to scripts/modules/commands.js : For AI-powered commands that benefit from project context, use the ContextGatherer utility for multi-source context extraction, support task IDs, file paths, custom context, and project tree, implement fuzzy search for automatic task discovery, and display detailed token breakdown for transparency.

Applied to files:

  • scripts/modules/task-manager/add-task.js
📚 Learning: 2025-07-18T17:16:13.793Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tests.mdc:0-0
Timestamp: 2025-07-18T17:16:13.793Z
Learning: Applies to **/*.test.js : Mock API calls (e.g., Anthropic/Claude) by mocking the entire module and providing predictable responses.

Applied to files:

  • tests/unit/scripts/modules/task-manager/update-task-by-id.test.js
📚 Learning: 2025-07-18T17:07:39.336Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/architecture.mdc:0-0
Timestamp: 2025-07-18T17:07:39.336Z
Learning: Module dependencies should be mocked before importing the test module, following Jest's hoisting behavior.

Applied to files:

  • tests/unit/scripts/modules/task-manager/update-task-by-id.test.js
📚 Learning: 2025-07-18T17:06:57.833Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_services.mdc:0-0
Timestamp: 2025-07-18T17:06:57.833Z
Learning: Applies to scripts/modules/task-manager/*.js : Do not fetch AI-specific parameters (model ID, max tokens, temp) using `config-manager.js` getters for the AI call. Pass the `role` instead.

Applied to files:

  • scripts/modules/task-manager/update-tasks.js
🧬 Code graph analysis (16)
tests/unit/scripts/modules/task-manager/update-tasks.test.js (1)
tests/unit/scripts/modules/task-manager/update-task-by-id.test.js (1)
  • generateObjectService (142-142)
src/ai-providers/base-provider.js (2)
scripts/modules/ai-services-unified.js (1)
  • result (404-404)
scripts/modules/task-manager/parse-prd/parse-prd-streaming.js (1)
  • result (559-568)
scripts/modules/task-manager/update-task-by-id.js (1)
src/schemas/registry.js (2)
  • COMMAND_SCHEMAS (9-17)
  • COMMAND_SCHEMAS (9-17)
tests/unit/scripts/modules/task-manager/expand-task.test.js (1)
tests/unit/scripts/modules/task-manager/update-task-by-id.test.js (1)
  • generateObjectService (142-142)
scripts/modules/task-manager/expand-task.js (3)
scripts/modules/task-manager/scope-adjustment.js (1)
  • generatedSubtasks (380-380)
src/schemas/registry.js (2)
  • COMMAND_SCHEMAS (9-17)
  • COMMAND_SCHEMAS (9-17)
scripts/modules/task-manager/expand-all-tasks.js (1)
  • logger (53-70)
src/schemas/registry.js (7)
src/schemas/update-tasks.js (2)
  • UpdateTasksResponseSchema (8-10)
  • UpdateTasksResponseSchema (8-10)
src/schemas/expand-task.js (2)
  • ExpandTaskResponseSchema (4-6)
  • ExpandTaskResponseSchema (4-6)
src/schemas/analyze-complexity.js (2)
  • ComplexityAnalysisResponseSchema (12-14)
  • ComplexityAnalysisResponseSchema (12-14)
src/schemas/update-subtask.js (2)
  • UpdateSubtaskResponseSchema (4-6)
  • UpdateSubtaskResponseSchema (4-6)
src/schemas/update-task.js (2)
  • UpdateTaskResponseSchema (4-6)
  • UpdateTaskResponseSchema (4-6)
src/schemas/add-task.js (2)
  • AddTaskResponseSchema (4-21)
  • AddTaskResponseSchema (4-21)
src/schemas/parse-prd.js (2)
  • ParsePRDResponseSchema (16-18)
  • ParsePRDResponseSchema (16-18)
tests/unit/scripts/modules/task-manager/analyze-task-complexity.test.js (1)
tests/unit/scripts/modules/task-manager/update-task-by-id.test.js (1)
  • generateObjectService (142-142)
src/schemas/update-tasks.js (1)
src/schemas/base-schemas.js (4)
  • BaseTaskSchema (13-25)
  • BaseTaskSchema (13-25)
  • SubtaskSchema (27-35)
  • SubtaskSchema (27-35)
scripts/modules/task-manager/add-task.js (1)
src/schemas/registry.js (2)
  • COMMAND_SCHEMAS (9-17)
  • COMMAND_SCHEMAS (9-17)
tests/unit/scripts/modules/task-manager/update-task-by-id.test.js (3)
scripts/modules/task-manager/scope-adjustment.js (1)
  • updatedTaskData (558-558)
mcp-server/src/core/direct-functions/parse-prd.js (1)
  • result (155-172)
mcp-server/src/core/direct-functions/research.js (1)
  • result (129-135)
src/schemas/update-task.js (1)
src/schemas/update-tasks.js (2)
  • UpdatedTaskSchema (4-6)
  • UpdatedTaskSchema (4-6)
tests/unit/scripts/modules/task-manager/complexity-report-tag-isolation.test.js (1)
scripts/modules/commands.js (1)
  • commandName (808-808)
src/schemas/update-subtask.js (1)
src/schemas/base-schemas.js (2)
  • SubtaskSchema (27-35)
  • SubtaskSchema (27-35)
scripts/modules/task-manager/update-tasks.js (1)
src/schemas/registry.js (2)
  • COMMAND_SCHEMAS (9-17)
  • COMMAND_SCHEMAS (9-17)
scripts/modules/task-manager/analyze-task-complexity.js (1)
src/schemas/registry.js (2)
  • COMMAND_SCHEMAS (9-17)
  • COMMAND_SCHEMAS (9-17)
src/schemas/expand-task.js (1)
src/schemas/base-schemas.js (2)
  • SubtaskSchema (27-35)
  • SubtaskSchema (27-35)
🪛 GitHub Actions: CI
scripts/modules/task-manager/expand-task.js

[error] 23-23: [PARSE_ERROR] Identifier getDebugFlag has already been declared


[error] 22-22: [PARSE_ERROR] Identifier getDefaultSubtasks has already been declared


[error] 26-26: [PARSE_ERROR] Identifier getPromptManager has already been declared


[error] 29-29: [PARSE_ERROR] Identifier flattenTasksWithSubtasks has already been declared


[error] 29-29: [PARSE_ERROR] Identifier findProjectRoot has already been declared

tests/unit/scripts/modules/task-manager/update-task-by-id.test.js

[error] 224-229: Biome format check failed. Formatter would have printed content changes for the test fixture. Run 'biome format .' to fix code style issues. Command: npm run format-check (Biome format).

🪛 markdownlint-cli2 (0.18.1)
.changeset/gentle-cats-dance.md

5-5: First line in a file should be a top-level heading

(MD041, first-line-heading, first-line-h1)


30-30: Files should end with a single newline character

(MD047, single-trailing-newline)

🔇 Additional comments (6)
src/prompts/analyze-complexity.json (1)

47-48: LGTM! Prompts align with schema requirements.

The system and user prompts correctly specify all fields required by ComplexityAnalysisItemSchema (taskId, taskTitle, complexityScore, recommendedSubtasks, expansionPrompt, reasoning) and enforce the expected structured JSON output format.

src/prompts/expand-task.json (1)

71-83: LGTM! Prompts correctly enforce schema requirements.

The system and user prompts across all three variants (complexity-report, research, default) correctly specify all required SubtaskSchema fields and strongly emphasize sequential ID numbering starting from {{nextSubtaskId}}, which aligns with the schema-driven approach.

tests/unit/scripts/modules/task-manager/analyze-task-complexity.test.js (1)

310-315: LGTM! Mock correctly reflects new structured response format.

The mock setup correctly provides a complexityAnalysis field in mainResult, matching the new ComplexityAnalysisResponseSchema structure used by generateObjectService.

src/schemas/expand-task.js (1)

4-6: LGTM! Schema definition is correct.

The ExpandTaskResponseSchema correctly wraps an array of SubtaskSchema and aligns with the registry mapping and the command's expected output structure.

src/schemas/update-subtask.js (1)

4-6: LGTM! Schema definition is correct.

The UpdateSubtaskResponseSchema correctly wraps a single SubtaskSchema instance and aligns with the registry mapping for the update-subtask-by-id command.

src/schemas/update-task.js (1)

4-6: LGTM! Schema definition is correct.

The UpdateTaskResponseSchema correctly wraps a single UpdatedTaskSchema instance (which extends BaseTaskSchema with a subtasks field) and aligns with the registry mapping for the update-task-by-id command.

- adapt tests to new codebase
- improve integration tests by reducing the amount of tasks (to make the tests faster)

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 6

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
tests/integration/cli/complex-cross-tag-scenarios.test.js (1)

351-363: Consider renaming the test to reflect its actual scope.

The dataset has been reduced from 100 to 50 tasks, with timing assertions removed. The test now validates move operation correctness on a moderate dataset rather than measuring performance at scale.

Consider renaming from 'should handle large task sets efficiently' to something like 'should handle moderate task sets correctly' or 'should move tasks with dependencies in larger datasets'.

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7660a29 and 1d197fe.

📒 Files selected for processing (6)
  • apps/cli/src/utils/auto-update.ts (1 hunks)
  • scripts/modules/task-manager/expand-task.js (3 hunks)
  • src/prompts/analyze-complexity.json (1 hunks)
  • tests/integration/cli/complex-cross-tag-scenarios.test.js (4 hunks)
  • tests/unit/prompts/expand-task-prompt.test.js (2 hunks)
  • tests/unit/scripts/modules/task-manager/update-task-by-id.test.js (2 hunks)
🧰 Additional context used
📓 Path-based instructions (14)
tests/{unit,integration,e2e,fixtures}/**/*.js

📄 CodeRabbit inference engine (.cursor/rules/architecture.mdc)

Test files must be organized as follows: unit tests in tests/unit/, integration tests in tests/integration/, end-to-end tests in tests/e2e/, and test fixtures in tests/fixtures/.

Files:

  • tests/unit/scripts/modules/task-manager/update-task-by-id.test.js
  • tests/unit/prompts/expand-task-prompt.test.js
  • tests/integration/cli/complex-cross-tag-scenarios.test.js
**/*.{test,spec}.{js,ts,jsx,tsx}

📄 CodeRabbit inference engine (.cursor/rules/git_workflow.mdc)

**/*.{test,spec}.{js,ts,jsx,tsx}: Create a test file and ensure all tests pass when all subtasks are complete; commit tests if added or modified
When all subtasks are complete, run final testing using the appropriate test runner (e.g., npm test, jest, or manual testing)

Files:

  • tests/unit/scripts/modules/task-manager/update-task-by-id.test.js
  • tests/unit/prompts/expand-task-prompt.test.js
  • tests/integration/cli/complex-cross-tag-scenarios.test.js
**/*.test.js

📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)

**/*.test.js: Never use asynchronous operations in tests. Make all mocks return synchronous values when possible.
Always mock tests properly based on the way the tested functions are defined and used.
Follow the test file organization: mocks must be set up before importing modules under test, and spies on mocked modules should be set up after imports.
Use fixtures from tests/fixtures/ for consistent sample data across tests.
Always declare mocks before importing the modules being tested in Jest test files.
Use jest.spyOn() after imports to create spies on mock functions and reference these spies in test assertions.
When testing functions with callbacks, get the callback from your mock's call arguments, execute it directly with test inputs, and verify the results.
For ES modules, use jest.mock() before static imports and jest.unstable_mockModule() before dynamic imports to mock dependencies.
Reset mock functions (mockFn.mockReset()) before dynamic imports if they might have been called previously.
When verifying console assertions, assert against the actual arguments passed (single formatted string), not multiple arguments.
Use mock-fs to mock file system operations in tests, and restore the file system after each test.
Mock API calls (e.g., Anthropic/Claude) by mocking the entire module and providing predictable responses.
Set mock environment variables in test setup and restore them after each test.
Maintain test fixtures separate from test logic.
Follow the mock-first-then-import pattern for all Jest mocks.
Do not define mock variables before jest.mock() calls (they won't be accessible due to hoisting).
Use test-specific file paths (e.g., 'test-tasks.json') for all file operations in tests.
Mock readJSON and writeJSON to avoid real file system interactions in tests.
Verify file operations use the correct paths in expect statements.
Use different file paths for each test to avoid test interdependence.
Verify modifications on the in-memory task objects passed to writeJSON.

Files:

  • tests/unit/scripts/modules/task-manager/update-task-by-id.test.js
  • tests/unit/prompts/expand-task-prompt.test.js
  • tests/integration/cli/complex-cross-tag-scenarios.test.js
tests/unit/**/*.test.js

📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)

tests/unit/**/*.test.js: Unit tests must be located in tests/unit/, test individual functions and utilities in isolation, mock all external dependencies, and keep tests small, focused, and fast.
Do not include actual command execution in unit tests.

Files:

  • tests/unit/scripts/modules/task-manager/update-task-by-id.test.js
  • tests/unit/prompts/expand-task-prompt.test.js
tests/{unit,integration,e2e}/**/*.test.js

📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)

tests/{unit,integration,e2e}/**/*.test.js: When testing CLI commands built with Commander.js, test the command action handlers directly rather than trying to mock the entire Commander.js chain.
When mocking the Commander.js chain, mock ALL chainable methods (option, argument, action, on, etc.) and return this (or the mock object) from all chainable method mocks.
Explicitly handle all options, including defaults and shorthand flags (e.g., -p for --prompt), and include null/undefined checks in test implementations for parameters that might be optional.
Do not try to use the real action implementation without proper mocking, and do not mock Commander partially—either mock it completely or test the action directly.
Mock the action handlers for CLI commands and verify they're called with correct arguments.
Use sample task fixtures for consistent test data, mock file system operations, and test both success and error paths for task operations.
Mock console output and verify correct formatting in UI function tests. Use flexible assertions like toContain() or toMatch() for formatted output.
Mock chalk functions to return the input text to make testing easier while still verifying correct function calls.
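The chalk-mocking rule above can be illustrated with a dependency-free stub: identity functions stand in for chalk's colorizers so assertions can match plain text. In Jest this would live in a `jest.mock('chalk', ...)` factory; the `Proxy` stub and `renderBanner` below are illustrative only.

```javascript
// Identity stub: every "color" method returns its input unchanged,
// so formatted output can be asserted as plain text.
const chalkStub = new Proxy(
	{},
	{
		get: () => (text) => text
	}
);

// Hypothetical UI function under test that colorizes its output.
function renderBanner(chalk, title) {
	return chalk.bold(chalk.cyan(`=== ${title} ===`));
}

const banner = renderBanner(chalkStub, 'Task Master');
console.log(banner); // === Task Master ===
```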

Files:

  • tests/unit/scripts/modules/task-manager/update-task-by-id.test.js
  • tests/unit/prompts/expand-task-prompt.test.js
  • tests/integration/cli/complex-cross-tag-scenarios.test.js
**/*.js

📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)

**/*.js: Declare and initialize global variables at the top of modules to avoid hoisting issues.
Use proper function declarations to avoid hoisting issues and initialize variables before they are referenced.
Do not reference variables before their declaration in module scope.
Use dynamic imports (import()) to avoid initialization order issues in modules.

Files:

  • tests/unit/scripts/modules/task-manager/update-task-by-id.test.js
  • tests/unit/prompts/expand-task-prompt.test.js
  • tests/integration/cli/complex-cross-tag-scenarios.test.js
  • scripts/modules/task-manager/expand-task.js
**/*.{test,spec}.*

📄 CodeRabbit inference engine (.cursor/rules/test_workflow.mdc)

Test files should follow naming conventions: `.test.*`, `.spec.*`, or `_test.*` suffixes, depending on the language

Files:

  • tests/unit/scripts/modules/task-manager/update-task-by-id.test.js
  • tests/unit/prompts/expand-task-prompt.test.js
  • tests/integration/cli/complex-cross-tag-scenarios.test.js
tests/{unit,integration,e2e}/**

📄 CodeRabbit inference engine (.cursor/rules/test_workflow.mdc)

Organize test directories by test type (unit, integration, e2e) and mirror source structure where possible

Files:

  • tests/unit/scripts/modules/task-manager/update-task-by-id.test.js
  • tests/unit/prompts/expand-task-prompt.test.js
  • tests/integration/cli/complex-cross-tag-scenarios.test.js
tests/integration/**/*.test.js

📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)

Integration tests must be located in tests/integration/, test interactions between modules, and focus on component interfaces rather than implementation details.

Files:

  • tests/integration/cli/complex-cross-tag-scenarios.test.js
tests/{integration,e2e}/**/*.test.js

📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)

Properly mock session objects when required by functions, and reset environment variables between tests if modified.

Files:

  • tests/integration/cli/complex-cross-tag-scenarios.test.js
scripts/modules/task-manager/*.js

📄 CodeRabbit inference engine (.cursor/rules/ai_services.mdc)

scripts/modules/task-manager/*.js: Centralize all LLM calls through generateTextService or generateObjectService.
Do not import or call anything from the old ai-services.js, ai-client-factory.js, or ai-client-utils.js files.
Do not initialize AI clients (Anthropic, Perplexity, etc.) directly within core logic (task-manager/) or MCP direct functions.
Do not fetch AI-specific parameters (model ID, max tokens, temp) using config-manager.js getters for the AI call. Pass the role instead.
Do not implement fallback or retry logic outside ai-services-unified.js.
Do not handle API key resolution outside the service layer (it uses utils.js internally).
Determine the appropriate role (main, research, fallback) in your core logic and pass it to the service.
Pass the session object (received in the context parameter, especially from direct function wrappers) to the service call when in MCP context.
Use generateTextService and implement robust manual JSON parsing (with Zod validation after parsing) when structured output is needed, as generateObjectService has shown unreliability with some providers/schemas.
Be aware of potential reliability issues with generateObjectService across different providers and complex schemas. Prefer generateTextService + manual parsing as a more robust alternative for structured data needs.

Files in scripts/modules/task-manager/ should each handle a specific action related to task management (e.g., add-task.js, expand-task.js), supporting the tagged task lists system and backward compatibility.
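The manual-parsing fallback the rule above describes (generateTextService plus validation) can be sketched dependency-free. `parseTaskFromText` is illustrative: it extracts the first JSON object from the model's text reply, parses it, and fails fast on missing fields. The real code validates with Zod schemas after parsing rather than these hand-written checks, and a brace inside surrounding prose would defeat this naive extraction.

```javascript
// Sketch of the generateTextService + manual-parsing pattern: locate the
// JSON object in the text reply, JSON.parse it, then validate required
// fields so bad AI payloads fail fast instead of leaking through.
function parseTaskFromText(text) {
	const start = text.indexOf('{');
	const end = text.lastIndexOf('}');
	if (start === -1 || end === -1) {
		throw new Error('No JSON object found in AI response');
	}
	const parsed = JSON.parse(text.slice(start, end + 1));
	if (typeof parsed.id !== 'number' || typeof parsed.title !== 'string') {
		throw new Error('Received invalid task object from AI');
	}
	return parsed;
}

const reply =
	'Here is the updated task:\n{"id": 1, "title": "Updated Task"}\nLet me know if you need changes.';
const task = parseTaskFromText(reply);
console.log(task.title); // Updated Task
```

This PR's migration to generateObjectService makes this parsing layer unnecessary for the migrated commands; the sketch shows what the fallback path still does for append mode.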

Files:

  • scripts/modules/task-manager/expand-task.js
scripts/modules/**

📄 CodeRabbit inference engine (.cursor/rules/dev_workflow.mdc)

When using the MCP server, restart it if core logic in scripts/modules or MCP tool/direct function definitions change.

Files:

  • scripts/modules/task-manager/expand-task.js
scripts/modules/task-manager/*

📄 CodeRabbit inference engine (.cursor/rules/tags.mdc)

scripts/modules/task-manager/*: All core functions in scripts/modules/task-manager/ must accept a context parameter and use it to extract projectRoot and tag
All core functions in scripts/modules/task-manager/ must use readJSON(tasksPath, projectRoot, tag) and writeJSON(tasksPath, data, projectRoot, tag)

Files:

  • scripts/modules/task-manager/expand-task.js
scripts/modules/task-manager/**/*.js

📄 CodeRabbit inference engine (.cursor/rules/telemetry.mdc)

scripts/modules/task-manager/**/*.js: Functions in scripts/modules/task-manager/ that invoke AI services must call the appropriate AI service function (e.g., generateObjectService), passing commandName and outputType in the params object.
Core logic functions in scripts/modules/task-manager/ must return an object that includes aiServiceResponse.telemetryData.
If the core logic function handles CLI output (outputFormat === 'text' or 'cli'), and aiServiceResponse.telemetryData is available, it must call displayAiUsageSummary(aiServiceResponse.telemetryData, 'cli') from scripts/modules/ui.js.

Do not call AI-specific getters (like getMainModelId, getMainMaxTokens) from core logic functions in scripts/modules/task-manager/*; instead, pass the role to the unified AI service.

Files:

  • scripts/modules/task-manager/expand-task.js
🧠 Learnings (18)
📓 Common learnings
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1262
File: scripts/modules/task-manager/update-tasks.js:216-233
Timestamp: 2025-10-01T19:53:34.261Z
Learning: For scripts/modules/task-manager/*.js: Use generateObjectService with Zod schemas for structured AI responses rather than generateTextService + manual JSON parsing, as modern AI providers increasingly support the tool use and generateObject paradigm with improved reliability.
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/telemetry.mdc:0-0
Timestamp: 2025-07-18T17:14:54.131Z
Learning: Applies to scripts/modules/task-manager/**/*.js : Functions in scripts/modules/task-manager/ that invoke AI services must call the appropriate AI service function (e.g., generateObjectService), passing commandName and outputType in the params object.
📚 Learning: 2025-07-18T17:16:13.793Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tests.mdc:0-0
Timestamp: 2025-07-18T17:16:13.793Z
Learning: Applies to **/*.test.js : Do not import real AI service clients in tests; create fully mocked versions that return predictable responses.

Applied to files:

  • tests/unit/scripts/modules/task-manager/update-task-by-id.test.js
📚 Learning: 2025-07-18T17:16:13.793Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tests.mdc:0-0
Timestamp: 2025-07-18T17:16:13.793Z
Learning: Applies to tests/{unit,integration,e2e}/**/*.test.js : Use sample task fixtures for consistent test data, mock file system operations, and test both success and error paths for task operations.

Applied to files:

  • tests/unit/scripts/modules/task-manager/update-task-by-id.test.js
📚 Learning: 2025-07-18T17:16:13.793Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tests.mdc:0-0
Timestamp: 2025-07-18T17:16:13.793Z
Learning: Applies to **/*.test.js : Mock API calls (e.g., Anthropic/Claude) by mocking the entire module and providing predictable responses.

Applied to files:

  • tests/unit/scripts/modules/task-manager/update-task-by-id.test.js
📚 Learning: 2025-07-18T17:07:39.336Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/architecture.mdc:0-0
Timestamp: 2025-07-18T17:07:39.336Z
Learning: Module dependencies should be mocked before importing the test module, following Jest's hoisting behavior.

Applied to files:

  • tests/unit/scripts/modules/task-manager/update-task-by-id.test.js
📚 Learning: 2025-07-18T17:16:13.793Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tests.mdc:0-0
Timestamp: 2025-07-18T17:16:13.793Z
Learning: Applies to tests/{unit,integration,e2e}/**/*.test.js : Explicitly handle all options, including defaults and shorthand flags (e.g., -p for --prompt), and include null/undefined checks in test implementations for parameters that might be optional.

Applied to files:

  • tests/unit/prompts/expand-task-prompt.test.js
📚 Learning: 2025-07-18T17:14:29.399Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tasks.mdc:0-0
Timestamp: 2025-07-18T17:14:29.399Z
Learning: Applies to scripts/modules/task-manager.js : Assign unique sequential IDs to tasks within each tag context by calculating the next ID based on existing tasks in the current tag; do not hardcode or reuse IDs within the same tag.

Applied to files:

  • tests/integration/cli/complex-cross-tag-scenarios.test.js
📚 Learning: 2025-07-18T17:14:29.399Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tasks.mdc:0-0
Timestamp: 2025-07-18T17:14:29.399Z
Learning: Applies to scripts/modules/task-manager.js : Use AI to generate detailed subtasks within the current tag context, considering complexity analysis for subtask counts and ensuring proper IDs for newly created subtasks.

Applied to files:

  • scripts/modules/task-manager/expand-task.js
📚 Learning: 2025-10-01T19:53:34.261Z
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1262
File: scripts/modules/task-manager/update-tasks.js:216-233
Timestamp: 2025-10-01T19:53:34.261Z
Learning: For scripts/modules/task-manager/*.js: Use generateObjectService with Zod schemas for structured AI responses rather than generateTextService + manual JSON parsing, as modern AI providers increasingly support the tool use and generateObject paradigm with improved reliability.

Applied to files:

  • scripts/modules/task-manager/expand-task.js
📚 Learning: 2025-07-18T17:14:29.399Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tasks.mdc:0-0
Timestamp: 2025-07-18T17:14:29.399Z
Learning: Applies to scripts/modules/task-manager.js : Extract tasks from PRD documents using AI, create them in the current tag context (defaulting to 'master'), provide clear prompts to guide AI task generation, and validate/clean up AI-generated tasks.

Applied to files:

  • scripts/modules/task-manager/expand-task.js
📚 Learning: 2025-07-18T17:10:02.683Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/dev_workflow.mdc:0-0
Timestamp: 2025-07-18T17:10:02.683Z
Learning: When breaking down complex tasks in Taskmaster, use the `expand_task` command with appropriate flags (`--num`, `--research`, `--force`, `--prompt`) and review generated subtasks for accuracy.

Applied to files:

  • scripts/modules/task-manager/expand-task.js
📚 Learning: 2025-07-18T17:09:13.815Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/context_gathering.mdc:0-0
Timestamp: 2025-07-18T17:09:13.815Z
Learning: Commands such as `analyze-complexity`, `expand-task`, `update-task`, and `add-task` should consider adopting the context gathering pattern for improved AI-powered assistance.

Applied to files:

  • scripts/modules/task-manager/expand-task.js
📚 Learning: 2025-07-18T17:10:12.881Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/dev_workflow.mdc:0-0
Timestamp: 2025-07-18T17:10:12.881Z
Learning: When breaking down complex tasks, use the `expand_task` command with appropriate flags (`--force`, `--research`, `--num`, `--prompt`) and review generated subtasks for accuracy.

Applied to files:

  • scripts/modules/task-manager/expand-task.js
📚 Learning: 2025-07-18T17:14:54.131Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/telemetry.mdc:0-0
Timestamp: 2025-07-18T17:14:54.131Z
Learning: Applies to scripts/modules/task-manager/**/*.js : Functions in scripts/modules/task-manager/ that invoke AI services must call the appropriate AI service function (e.g., generateObjectService), passing commandName and outputType in the params object.

Applied to files:

  • scripts/modules/task-manager/expand-task.js
📚 Learning: 2025-07-31T22:07:49.716Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/commands.mdc:0-0
Timestamp: 2025-07-31T22:07:49.716Z
Learning: Applies to scripts/modules/commands.js : Follow the provided structure for removing subtasks, including options for conversion, file path, and regeneration, with detailed error handling.

Applied to files:

  • scripts/modules/task-manager/expand-task.js
📚 Learning: 2025-07-18T17:18:17.759Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/utilities.mdc:0-0
Timestamp: 2025-07-18T17:18:17.759Z
Learning: Applies to scripts/modules/task-manager/**/*.js : Do not call AI-specific getters (like `getMainModelId`, `getMainMaxTokens`) from core logic functions in `scripts/modules/task-manager/*`; instead, pass the `role` to the unified AI service.

Applied to files:

  • scripts/modules/task-manager/expand-task.js
📚 Learning: 2025-07-18T17:06:57.833Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_services.mdc:0-0
Timestamp: 2025-07-18T17:06:57.833Z
Learning: Applies to scripts/modules/task-manager/*.js : Do not import or call anything from the old `ai-services.js`, `ai-client-factory.js`, or `ai-client-utils.js` files.

Applied to files:

  • scripts/modules/task-manager/expand-task.js
📚 Learning: 2025-07-18T17:09:45.690Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/dependencies.mdc:0-0
Timestamp: 2025-07-18T17:09:45.690Z
Learning: Applies to scripts/modules/dependency-manager.js : Remove references to non-existent tasks during validation

Applied to files:

  • scripts/modules/task-manager/expand-task.js
🧬 Code graph analysis (3)
tests/unit/scripts/modules/task-manager/update-task-by-id.test.js (3)
scripts/modules/task-manager/scope-adjustment.js (1)
  • updatedTaskData (558-558)
mcp-server/src/core/direct-functions/parse-prd.js (1)
  • result (155-172)
scripts/modules/task-manager/research.js (2)
  • updateTaskById (587-587)
  • updateTaskById (726-726)
tests/integration/cli/complex-cross-tag-scenarios.test.js (3)
scripts/modules/commands.js (1)
  • tasksPath (5155-5155)
mcp-server/src/core/direct-functions/move-task-cross-tag.js (1)
  • tasksPath (82-82)
scripts/modules/ui.js (1)
  • tasksPath (1792-1792)
scripts/modules/task-manager/expand-task.js (3)
scripts/modules/task-manager/scope-adjustment.js (1)
  • generatedSubtasks (380-380)
src/schemas/registry.js (2)
  • COMMAND_SCHEMAS (9-17)
  • COMMAND_SCHEMAS (9-17)
scripts/modules/task-manager/expand-all-tasks.js (1)
  • logger (53-70)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Test
🔇 Additional comments (4)
tests/unit/prompts/expand-task-prompt.test.js (1)

80-80: LGTM: Descriptive test name aligns with generateObject migration.

The updated test name clearly indicates validation of structured subtasks with required fields, which aligns well with the PR's migration to generateObject with Zod schemas.

apps/cli/src/utils/auto-update.ts (1)

161-165: LGTM! Test environment gate is correctly implemented.

The addition of NODE_ENV === 'test' properly prevents auto-updates during test execution, consistent with existing CI and skip flags.

tests/integration/cli/complex-cross-tag-scenarios.test.js (2)

333-333: LGTM: Comment accurately reflects reduced test dataset.

The comment correctly describes the new test workload of 50 tasks.


374-377: LGTM: Move operation correctly targets the last task in the chain.

The move operation targets task 25 (the last task in the master tag's sequential dependency chain) with the --with-dependencies flag, which will correctly move all 25 dependent tasks. The comment appropriately notes that timing assertions have been removed.

Comment on lines +140 to +341
describe('updateTaskById success path with generateObjectService', () => {
	let fs;
	let generateObjectService;

	beforeEach(async () => {
		jest.clearAllMocks();
		jest.spyOn(process, 'exit').mockImplementation(() => {
			throw new Error('process.exit called');
		});
		fs = await import('fs');
		const aiServices = await import(
			'../../../../../scripts/modules/ai-services-unified.js'
		);
		generateObjectService = aiServices.generateObjectService;
	});

	test('successfully updates task with all fields from generateObjectService', async () => {
		fs.existsSync.mockReturnValue(true);
		readJSON.mockReturnValue({
			tag: 'master',
			tasks: [
				{
					id: 1,
					title: 'Original Task',
					description: 'Original description',
					status: 'pending',
					dependencies: [],
					priority: 'low',
					details: null,
					testStrategy: null,
					subtasks: []
				}
			]
		});

		const updatedTaskData = {
			id: 1,
			title: 'Updated Task',
			description: 'Updated description',
			status: 'pending',
			dependencies: [2],
			priority: 'high',
			details: 'New implementation details',
			testStrategy: 'Unit tests required',
			subtasks: [
				{
					id: 1,
					title: 'Subtask 1',
					description: 'First subtask',
					status: 'pending',
					dependencies: []
				}
			]
		};

		generateObjectService.mockResolvedValue({
			mainResult: {
				task: updatedTaskData
			},
			telemetryData: {
				model: 'claude-3-5-sonnet-20241022',
				inputTokens: 100,
				outputTokens: 200
			}
		});

		const result = await updateTaskById(
			'tasks/tasks.json',
			1,
			'Update task with new requirements',
			false,
			{ tag: 'master' },
			'json'
		);

		// Verify generateObjectService was called (not generateTextService)
		expect(generateObjectService).toHaveBeenCalled();
		const callArgs = generateObjectService.mock.calls[0][0];

		// Verify correct arguments were passed
		expect(callArgs).toMatchObject({
			role: 'main',
			commandName: 'update-task',
			objectName: 'task'
		});
		expect(callArgs.schema).toBeDefined();
		expect(callArgs.systemPrompt).toContain(
			'update a software development task'
		);
		expect(callArgs.prompt).toContain('Update task with new requirements');

		// Verify the returned task contains all expected fields
		expect(result).toEqual({
			updatedTask: expect.objectContaining({
				id: 1,
				title: 'Updated Task',
				description: 'Updated description',
				status: 'pending',
				dependencies: [2],
				priority: 'high',
				details: 'New implementation details',
				testStrategy: 'Unit tests required',
				subtasks: expect.arrayContaining([
					expect.objectContaining({
						id: 1,
						title: 'Subtask 1',
						description: 'First subtask',
						status: 'pending'
					})
				])
			}),
			telemetryData: expect.objectContaining({
				model: 'claude-3-5-sonnet-20241022',
				inputTokens: 100,
				outputTokens: 200
			}),
			tagInfo: undefined
		});
	});

	test('handles generateObjectService with malformed mainResult', async () => {
		fs.existsSync.mockReturnValue(true);
		readJSON.mockReturnValue({
			tag: 'master',
			tasks: [
				{
					id: 1,
					title: 'Task',
					description: 'Description',
					status: 'pending',
					dependencies: [],
					priority: 'medium',
					details: null,
					testStrategy: null,
					subtasks: []
				}
			]
		});

		generateObjectService.mockResolvedValue({
			mainResult: {
				task: null // Malformed: task is null
			},
			telemetryData: {}
		});

		await expect(
			updateTaskById(
				'tasks/tasks.json',
				1,
				'Update task',
				false,
				{ tag: 'master' },
				'json'
			)
		).rejects.toThrow('Received invalid task object from AI');
	});

	test('handles generateObjectService with missing required fields', async () => {
		fs.existsSync.mockReturnValue(true);
		readJSON.mockReturnValue({
			tag: 'master',
			tasks: [
				{
					id: 1,
					title: 'Task',
					description: 'Description',
					status: 'pending',
					dependencies: [],
					priority: 'medium',
					details: null,
					testStrategy: null,
					subtasks: []
				}
			]
		});

		generateObjectService.mockResolvedValue({
			mainResult: {
				task: {
					id: 1,
					// Missing title and description
					status: 'pending',
					dependencies: [],
					priority: 'medium'
				}
			},
			telemetryData: {}
		});

		await expect(
			updateTaskById(
				'tasks/tasks.json',
				1,
				'Update task',
				false,
				{ tag: 'master' },
				'json'
			)
		).rejects.toThrow('Updated task missing required fields');
	});
});

🛠️ Refactor suggestion | 🟠 Major

🧩 Analysis chain

Excellent test coverage for generateObjectService migration!

The new test suite comprehensively addresses the previous review concerns:

  1. ✅ Verifies generateObjectService is called (not generateTextService) with correct arguments (lines 216-229)
  2. ✅ Tests successful extraction from mainResult.task with all required fields (lines 232-257)
  3. ✅ Handles malformed responses where task is null (lines 260-296)
  4. ✅ Validates required fields and throws appropriate errors (lines 298-340)

The tests follow Jest best practices:

  • Mocks properly declared before imports
  • Clear test descriptions
  • Appropriate assertions using toMatchObject, objectContaining, and toContain
  • Proper setup/teardown with beforeEach

Minor consideration: The visible tests only cover non-append mode (where generateObjectService is used). Verify that append mode (which still uses generateTextService per the PR description) is tested in this file or elsewhere. Based on learnings.


🏁 Script executed:

#!/bin/bash
# Description: Check if append mode is tested in this file or related test files

echo "=== Checking for append mode tests in update-task-by-id.test.js ==="
rg -n "append.*true|isAppend" tests/unit/scripts/modules/task-manager/update-task-by-id.test.js -C3

echo -e "\n=== Checking for generateTextService usage in tests ==="
rg -n "generateTextService" tests/unit/scripts/modules/task-manager/update-task-by-id.test.js -C3

echo -e "\n=== Searching for append mode tests in related files ==="
fd -t f "update-task.*test\.js" tests/ --exec rg -l "append.*true|isAppend" {}

Length of output: 696


Add tests for append mode (generateTextService path)
The current suite covers only non-append mode with generateObjectService; add at least one test with append=true to verify generateTextService is called with the correct arguments and its response is handled as expected.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

♻️ Duplicate comments (1)
src/schemas/analyze-complexity.js (1)

3-9: Add basic string/int constraints to keep schema outputs usable.

Allowing empty strings or arbitrarily large subtask counts makes downstream prompts brittle. Mirror the other task-related schemas by trimming strings, enforcing minimum lengths, and capping recommendedSubtasks to a sane range (e.g., 0‑50) so bad AI payloads fail fast instead of leaking through.

Apply this diff:

 export const ComplexityAnalysisItemSchema = z.object({
 	taskId: z.number().int().positive(),
-	taskTitle: z.string(),
+	taskTitle: z.string().trim().min(5).max(200),
 	complexityScore: z.number().min(1).max(10),
-	recommendedSubtasks: z.number().int().nonnegative(),
-	expansionPrompt: z.string(),
-	reasoning: z.string()
+	recommendedSubtasks: z.number().int().min(0).max(50),
+	expansionPrompt: z.string().trim().min(10).max(2000),
+	reasoning: z.string().trim().min(10).max(2000)
 });
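The constraints in the diff above make bad payloads fail fast at the schema boundary. A dependency-free sketch of the same checks (names mirror `ComplexityAnalysisItemSchema`, but this is illustrative only; the real schema uses Zod):

```javascript
// Hand-rolled equivalent of the suggested Zod constraints: trimmed,
// length-bounded strings and a capped integer subtask count. Returns
// the list of field names that fail validation.
function validateComplexityItem(item) {
	const errors = [];
	if (!Number.isInteger(item.taskId) || item.taskId <= 0) errors.push('taskId');
	const title = (item.taskTitle ?? '').trim();
	if (title.length < 5 || title.length > 200) errors.push('taskTitle');
	if (!(item.complexityScore >= 1 && item.complexityScore <= 10))
		errors.push('complexityScore');
	if (
		!Number.isInteger(item.recommendedSubtasks) ||
		item.recommendedSubtasks < 0 ||
		item.recommendedSubtasks > 50
	)
		errors.push('recommendedSubtasks');
	return errors;
}

const good = validateComplexityItem({
	taskId: 3,
	taskTitle: 'Implement schema registry',
	complexityScore: 7,
	recommendedSubtasks: 4
});
const bad = validateComplexityItem({
	taskId: 3,
	taskTitle: '',
	complexityScore: 7,
	recommendedSubtasks: 400
});
console.log(good); // []
console.log(bad); // [ 'taskTitle', 'recommendedSubtasks' ]
```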
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1d197fe and 9ed3e83.

📒 Files selected for processing (12)
  • apps/cli/src/utils/auto-update.ts (1 hunks)
  • scripts/modules/task-manager/expand-task.js (4 hunks)
  • src/ai-providers/base-provider.js (2 hunks)
  • src/ai-providers/claude-code.js (1 hunks)
  • src/ai-providers/gemini-cli.js (2 hunks)
  • src/ai-providers/grok-cli.js (1 hunks)
  • src/prompts/analyze-complexity.json (1 hunks)
  • src/schemas/analyze-complexity.js (1 hunks)
  • tests/integration/cli/complex-cross-tag-scenarios.test.js (4 hunks)
  • tests/unit/prompts/expand-task-prompt.test.js (2 hunks)
  • tests/unit/prompts/prompt-migration.test.js (1 hunks)
  • tests/unit/scripts/modules/task-manager/expand-task.test.js (17 hunks)
🧰 Additional context used
📓 Path-based instructions (15)
src/ai-providers/*.js

📄 CodeRabbit inference engine (.cursor/rules/ai_providers.mdc)

src/ai-providers/*.js: Create a new provider module file in src/ai-providers/ named &lt;provider-name&gt;.js when adding a new AI provider.
Provider modules must export three functions: generateText, streamText, and generateObject.
Provider modules must import the provider's create function from @ai-sdk/&lt;provider-name&gt;, and import generateText, streamText, generateObject from the core ai package, as well as the log utility from ../../scripts/modules/utils.js.
Implement generateText, streamText, and generateObject functions in provider modules with basic validation and try/catch error handling.

Provider-specific wrappers for Vercel AI SDK functions must be implemented in src/ai-providers/*.js, each file corresponding to a provider.

Files:

  • src/ai-providers/claude-code.js
  • src/ai-providers/grok-cli.js
  • src/ai-providers/gemini-cli.js
  • src/ai-providers/base-provider.js
**/*.js

📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)

**/*.js: Declare and initialize global variables at the top of modules to avoid hoisting issues.
Use proper function declarations to avoid hoisting issues and initialize variables before they are referenced.
Do not reference variables before their declaration in module scope.
Use dynamic imports (import()) to avoid initialization order issues in modules.

Files:

  • src/ai-providers/claude-code.js
  • scripts/modules/task-manager/expand-task.js
  • tests/unit/prompts/prompt-migration.test.js
  • tests/integration/cli/complex-cross-tag-scenarios.test.js
  • tests/unit/prompts/expand-task-prompt.test.js
  • src/ai-providers/grok-cli.js
  • src/ai-providers/gemini-cli.js
  • src/schemas/analyze-complexity.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
  • src/ai-providers/base-provider.js
scripts/modules/task-manager/*.js

📄 CodeRabbit inference engine (.cursor/rules/ai_services.mdc)

scripts/modules/task-manager/*.js: Centralize all LLM calls through generateTextService or generateObjectService.
Do not import or call anything from the old ai-services.js, ai-client-factory.js, or ai-client-utils.js files.
Do not initialize AI clients (Anthropic, Perplexity, etc.) directly within core logic (task-manager/) or MCP direct functions.
Do not fetch AI-specific parameters (model ID, max tokens, temp) using config-manager.js getters for the AI call. Pass the role instead.
Do not implement fallback or retry logic outside ai-services-unified.js.
Do not handle API key resolution outside the service layer (it uses utils.js internally).
Determine the appropriate role (main, research, fallback) in your core logic and pass it to the service.
Pass the session object (received in the context parameter, especially from direct function wrappers) to the service call when in MCP context.
Use generateTextService and implement robust manual JSON parsing (with Zod validation after parsing) when structured output is needed, as generateObjectService has shown unreliability with some providers/schemas.
Be aware of potential reliability issues with generateObjectService across different providers and complex schemas. Prefer generateTextService + manual parsing as a more robust alternative for structured data needs.

Files in scripts/modules/task-manager/ should each handle a specific action related to task management (e.g., add-task.js, expand-task.js), supporting the tagged task lists system and backward compatibility.

Files:

  • scripts/modules/task-manager/expand-task.js
scripts/modules/**

📄 CodeRabbit inference engine (.cursor/rules/dev_workflow.mdc)

When using the MCP server, restart it if core logic in scripts/modules or MCP tool/direct function definitions change.

Files:

  • scripts/modules/task-manager/expand-task.js
scripts/modules/task-manager/*

📄 CodeRabbit inference engine (.cursor/rules/tags.mdc)

scripts/modules/task-manager/*: All core functions in scripts/modules/task-manager/ must accept a context parameter and use it to extract projectRoot and tag
All core functions in scripts/modules/task-manager/ must use readJSON(tasksPath, projectRoot, tag) and writeJSON(tasksPath, data, projectRoot, tag)

Files:

  • scripts/modules/task-manager/expand-task.js
scripts/modules/task-manager/**/*.js

📄 CodeRabbit inference engine (.cursor/rules/telemetry.mdc)

scripts/modules/task-manager/**/*.js: Functions in scripts/modules/task-manager/ that invoke AI services must call the appropriate AI service function (e.g., generateObjectService), passing commandName and outputType in the params object.
Core logic functions in scripts/modules/task-manager/ must return an object that includes aiServiceResponse.telemetryData.
If the core logic function handles CLI output (outputFormat === 'text' or 'cli'), and aiServiceResponse.telemetryData is available, it must call displayAiUsageSummary(aiServiceResponse.telemetryData, 'cli') from scripts/modules/ui.js.

Do not call AI-specific getters (like getMainModelId, getMainMaxTokens) from core logic functions in scripts/modules/task-manager/*; instead, pass the role to the unified AI service.

Files:

  • scripts/modules/task-manager/expand-task.js
tests/{unit,integration,e2e,fixtures}/**/*.js

📄 CodeRabbit inference engine (.cursor/rules/architecture.mdc)

Test files must be organized as follows: unit tests in tests/unit/, integration tests in tests/integration/, end-to-end tests in tests/e2e/, and test fixtures in tests/fixtures/.

Files:

  • tests/unit/prompts/prompt-migration.test.js
  • tests/integration/cli/complex-cross-tag-scenarios.test.js
  • tests/unit/prompts/expand-task-prompt.test.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
**/*.{test,spec}.{js,ts,jsx,tsx}

📄 CodeRabbit inference engine (.cursor/rules/git_workflow.mdc)

**/*.{test,spec}.{js,ts,jsx,tsx}: Create a test file and ensure all tests pass when all subtasks are complete; commit tests if added or modified
When all subtasks are complete, run final testing using the appropriate test runner (e.g., npm test, jest, or manual testing)

Files:

  • tests/unit/prompts/prompt-migration.test.js
  • tests/integration/cli/complex-cross-tag-scenarios.test.js
  • tests/unit/prompts/expand-task-prompt.test.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
**/*.test.js

📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)

**/*.test.js: Never use asynchronous operations in tests. Make all mocks return synchronous values when possible.
Always mock tests properly based on the way the tested functions are defined and used.
Follow the test file organization: mocks must be set up before importing modules under test, and spies on mocked modules should be set up after imports.
Use fixtures from tests/fixtures/ for consistent sample data across tests.
Always declare mocks before importing the modules being tested in Jest test files.
Use jest.spyOn() after imports to create spies on mock functions and reference these spies in test assertions.
When testing functions with callbacks, get the callback from your mock's call arguments, execute it directly with test inputs, and verify the results.
For ES modules, use jest.mock() before static imports and jest.unstable_mockModule() before dynamic imports to mock dependencies.
Reset mock functions (mockFn.mockReset()) before dynamic imports if they might have been called previously.
When verifying console assertions, assert against the actual arguments passed (single formatted string), not multiple arguments.
Use mock-fs to mock file system operations in tests, and restore the file system after each test.
Mock API calls (e.g., Anthropic/Claude) by mocking the entire module and providing predictable responses.
Set mock environment variables in test setup and restore them after each test.
Maintain test fixtures separate from test logic.
Follow the mock-first-then-import pattern for all Jest mocks.
Do not define mock variables before jest.mock() calls (they won't be accessible due to hoisting).
Use test-specific file paths (e.g., 'test-tasks.json') for all file operations in tests.
Mock readJSON and writeJSON to avoid real file system interactions in tests.
Verify file operations use the correct paths in expect statements.
Use different file paths for each test to avoid test interdependence.
Verify modifications on the in-memory task objects passed to writeJSON.

Files:

  • tests/unit/prompts/prompt-migration.test.js
  • tests/integration/cli/complex-cross-tag-scenarios.test.js
  • tests/unit/prompts/expand-task-prompt.test.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
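The callback-extraction rule above ("get the callback from your mock's call arguments, execute it directly with test inputs, and verify the results") can be illustrated with a hand-rolled mock; this is a sketch only — real tests would use `jest.fn()` and its `mock.calls`, and `saveTasks` here is a hypothetical unit under test:

```javascript
// Hand-rolled stand-in for jest.fn(): records every call's arguments.
function makeMockFn() {
	const fn = (...args) => {
		fn.calls.push(args);
	};
	fn.calls = [];
	return fn;
}

// Hypothetical unit under test: serializes data, then hands a callback
// to a (mocked) write function.
const writeFileMock = makeMockFn();
function saveTasks(data, done) {
	writeFileMock(JSON.stringify(data), done);
}

const results = [];
saveTasks({ id: 1 }, (err) => results.push(err ?? 'ok'));

// Pull the callback out of the recorded call arguments, execute it
// directly with test inputs, then verify the results.
const [, callback] = writeFileMock.calls[0];
callback(null);
console.log(results[0]); // → 'ok'
```

The same extraction works with Jest by reading `writeFileMock.mock.calls[0][1]` instead of the hand-rolled `calls` array.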
tests/unit/**/*.test.js

📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)

tests/unit/**/*.test.js: Unit tests must be located in tests/unit/, test individual functions and utilities in isolation, mock all external dependencies, and keep tests small, focused, and fast.
Do not include actual command execution in unit tests.

Files:

  • tests/unit/prompts/prompt-migration.test.js
  • tests/unit/prompts/expand-task-prompt.test.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
tests/{unit,integration,e2e}/**/*.test.js

📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)

tests/{unit,integration,e2e}/**/*.test.js: When testing CLI commands built with Commander.js, test the command action handlers directly rather than trying to mock the entire Commander.js chain.
When mocking the Commander.js chain, mock ALL chainable methods (option, argument, action, on, etc.) and return this (or the mock object) from all chainable method mocks.
Explicitly handle all options, including defaults and shorthand flags (e.g., -p for --prompt), and include null/undefined checks in test implementations for parameters that might be optional.
Do not try to use the real action implementation without proper mocking, and do not mock Commander partially—either mock it completely or test the action directly.
Mock the action handlers for CLI commands and verify they're called with correct arguments.
Use sample task fixtures for consistent test data, mock file system operations, and test both success and error paths for task operations.
Mock console output and verify correct formatting in UI function tests. Use flexible assertions like toContain() or toMatch() for formatted output.
Mock chalk functions to return the input text to make testing easier while still verifying correct function calls.

Files:

  • tests/unit/prompts/prompt-migration.test.js
  • tests/integration/cli/complex-cross-tag-scenarios.test.js
  • tests/unit/prompts/expand-task-prompt.test.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
**/*.{test,spec}.*

📄 CodeRabbit inference engine (.cursor/rules/test_workflow.mdc)

Test files should follow naming conventions: `.test.`, `.spec.`, or `_test.`, depending on the language

Files:

  • tests/unit/prompts/prompt-migration.test.js
  • tests/integration/cli/complex-cross-tag-scenarios.test.js
  • tests/unit/prompts/expand-task-prompt.test.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
tests/{unit,integration,e2e}/**

📄 CodeRabbit inference engine (.cursor/rules/test_workflow.mdc)

Organize test directories by test type (unit, integration, e2e) and mirror source structure where possible

Files:

  • tests/unit/prompts/prompt-migration.test.js
  • tests/integration/cli/complex-cross-tag-scenarios.test.js
  • tests/unit/prompts/expand-task-prompt.test.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
tests/integration/**/*.test.js

📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)

Integration tests must be located in tests/integration/, test interactions between modules, and focus on component interfaces rather than implementation details.

Files:

  • tests/integration/cli/complex-cross-tag-scenarios.test.js
tests/{integration,e2e}/**/*.test.js

📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)

Properly mock session objects when required by functions, and reset environment variables between tests if modified.

Files:

  • tests/integration/cli/complex-cross-tag-scenarios.test.js
🧠 Learnings (26)
📓 Common learnings
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1262
File: scripts/modules/task-manager/update-tasks.js:216-233
Timestamp: 2025-10-01T19:53:34.261Z
Learning: For scripts/modules/task-manager/*.js: Use generateObjectService with Zod schemas for structured AI responses rather than generateTextService + manual JSON parsing, as modern AI providers increasingly support the tool use and generateObject paradigm with improved reliability.
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_services.mdc:0-0
Timestamp: 2025-07-18T17:06:57.833Z
Learning: Applies to scripts/modules/ai-services-unified.js : Centralize all LLM calls through `generateTextService` or `generateObjectService`.
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/telemetry.mdc:0-0
Timestamp: 2025-07-18T17:14:54.131Z
Learning: Applies to scripts/modules/task-manager/**/*.js : Functions in scripts/modules/task-manager/ that invoke AI services must call the appropriate AI service function (e.g., generateObjectService), passing commandName and outputType in the params object.
📚 Learning: 2025-07-18T17:14:29.399Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tasks.mdc:0-0
Timestamp: 2025-07-18T17:14:29.399Z
Learning: Applies to scripts/modules/task-manager.js : Use AI to generate detailed subtasks within the current tag context, considering complexity analysis for subtask counts and ensuring proper IDs for newly created subtasks.

Applied to files:

  • scripts/modules/task-manager/expand-task.js
📚 Learning: 2025-10-01T19:53:34.261Z
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1262
File: scripts/modules/task-manager/update-tasks.js:216-233
Timestamp: 2025-10-01T19:53:34.261Z
Learning: For scripts/modules/task-manager/*.js: Use generateObjectService with Zod schemas for structured AI responses rather than generateTextService + manual JSON parsing, as modern AI providers increasingly support the tool use and generateObject paradigm with improved reliability.

Applied to files:

  • scripts/modules/task-manager/expand-task.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
  • src/ai-providers/base-provider.js
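A minimal sketch of the pattern this learning describes — structured responses validated by a schema instead of manual JSON parsing. The names here are illustrative, not the project's actual API: a hand-rolled validator stands in for a Zod schema's `.parse()`, and the stub is synchronous where the real service is async:

```javascript
// Stand-in for a Zod schema: parse() throws on invalid input,
// returns the value unchanged when it conforms.
const expandTaskResponseSchema = {
	parse(value) {
		if (!value || !Array.isArray(value.subtasks)) {
			throw new Error('Expected { subtasks: [...] }');
		}
		return value;
	}
};

// Stub of a structured-object service: the schema validates the model
// output before it reaches the caller, so no downstream JSON parsing
// or shape-checking is needed.
function generateObjectServiceStub({ schema, rawModelOutput }) {
	return { mainResult: schema.parse(rawModelOutput) };
}

const response = generateObjectServiceStub({
	schema: expandTaskResponseSchema,
	rawModelOutput: { subtasks: [{ id: 1, title: 'Set up project scaffolding' }] }
});
console.log(response.mainResult.subtasks.length); // → 1
```

With `generateTextService`, the caller would instead receive a string and own the parse/repair/validate logic itself — the fragile path this PR removes.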
📚 Learning: 2025-07-18T17:14:29.399Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tasks.mdc:0-0
Timestamp: 2025-07-18T17:14:29.399Z
Learning: Applies to scripts/modules/task-manager.js : Extract tasks from PRD documents using AI, create them in the current tag context (defaulting to 'master'), provide clear prompts to guide AI task generation, and validate/clean up AI-generated tasks.

Applied to files:

  • scripts/modules/task-manager/expand-task.js
📚 Learning: 2025-07-18T17:10:02.683Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/dev_workflow.mdc:0-0
Timestamp: 2025-07-18T17:10:02.683Z
Learning: When breaking down complex tasks in Taskmaster, use the `expand_task` command with appropriate flags (`--num`, `--research`, `--force`, `--prompt`) and review generated subtasks for accuracy.

Applied to files:

  • scripts/modules/task-manager/expand-task.js
📚 Learning: 2025-07-18T17:09:13.815Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/context_gathering.mdc:0-0
Timestamp: 2025-07-18T17:09:13.815Z
Learning: Commands such as `analyze-complexity`, `expand-task`, `update-task`, and `add-task` should consider adopting the context gathering pattern for improved AI-powered assistance.

Applied to files:

  • scripts/modules/task-manager/expand-task.js
  • src/prompts/analyze-complexity.json
📚 Learning: 2025-07-18T17:10:12.881Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/dev_workflow.mdc:0-0
Timestamp: 2025-07-18T17:10:12.881Z
Learning: When breaking down complex tasks, use the `expand_task` command with appropriate flags (`--force`, `--research`, `--num`, `--prompt`) and review generated subtasks for accuracy.

Applied to files:

  • scripts/modules/task-manager/expand-task.js
📚 Learning: 2025-07-18T17:14:54.131Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/telemetry.mdc:0-0
Timestamp: 2025-07-18T17:14:54.131Z
Learning: Applies to scripts/modules/task-manager/**/*.js : Functions in scripts/modules/task-manager/ that invoke AI services must call the appropriate AI service function (e.g., generateObjectService), passing commandName and outputType in the params object.

Applied to files:

  • scripts/modules/task-manager/expand-task.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
📚 Learning: 2025-07-31T22:07:49.716Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/commands.mdc:0-0
Timestamp: 2025-07-31T22:07:49.716Z
Learning: Applies to scripts/modules/commands.js : Follow the provided structure for removing subtasks, including options for conversion, file path, and regeneration, with detailed error handling.

Applied to files:

  • scripts/modules/task-manager/expand-task.js
📚 Learning: 2025-07-18T17:06:57.833Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_services.mdc:0-0
Timestamp: 2025-07-18T17:06:57.833Z
Learning: Applies to scripts/modules/task-manager/*.js : Do not import or call anything from the old `ai-services.js`, `ai-client-factory.js`, or `ai-client-utils.js` files.

Applied to files:

  • scripts/modules/task-manager/expand-task.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
📚 Learning: 2025-07-18T17:09:45.690Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/dependencies.mdc:0-0
Timestamp: 2025-07-18T17:09:45.690Z
Learning: Applies to scripts/modules/dependency-manager.js : Remove references to non-existent tasks during validation

Applied to files:

  • scripts/modules/task-manager/expand-task.js
📚 Learning: 2025-07-18T17:14:54.131Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/telemetry.mdc:0-0
Timestamp: 2025-07-18T17:14:54.131Z
Learning: Applies to scripts/modules/task-manager/**/*.js : Core logic functions in scripts/modules/task-manager/ must return an object that includes aiServiceResponse.telemetryData.

Applied to files:

  • scripts/modules/task-manager/expand-task.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
📚 Learning: 2025-07-18T17:16:13.793Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tests.mdc:0-0
Timestamp: 2025-07-18T17:16:13.793Z
Learning: Applies to tests/{unit,integration,e2e}/**/*.test.js : Explicitly handle all options, including defaults and shorthand flags (e.g., -p for --prompt), and include null/undefined checks in test implementations for parameters that might be optional.

Applied to files:

  • tests/unit/prompts/prompt-migration.test.js
  • tests/unit/prompts/expand-task-prompt.test.js
📚 Learning: 2025-07-18T17:12:57.903Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/new_features.mdc:0-0
Timestamp: 2025-07-18T17:12:57.903Z
Learning: Applies to scripts/modules/**/*.test.js : Test core logic independently with both data formats, mock file system operations, test tag resolution behavior, and verify migration compatibility in unit tests.

Applied to files:

  • tests/unit/prompts/prompt-migration.test.js
📚 Learning: 2025-07-18T17:12:57.903Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/new_features.mdc:0-0
Timestamp: 2025-07-18T17:12:57.903Z
Learning: Applies to scripts/modules/**/*.test.js : Test CLI and MCP interfaces with real task data, verify end-to-end workflows across tag contexts, and test error scenarios and recovery in integration tests.

Applied to files:

  • tests/integration/cli/complex-cross-tag-scenarios.test.js
📚 Learning: 2025-07-18T17:16:13.793Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tests.mdc:0-0
Timestamp: 2025-07-18T17:16:13.793Z
Learning: Applies to **/*.test.js : Verify modifications on the in-memory task objects passed to writeJSON.

Applied to files:

  • tests/integration/cli/complex-cross-tag-scenarios.test.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
📚 Learning: 2025-07-18T17:16:13.793Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tests.mdc:0-0
Timestamp: 2025-07-18T17:16:13.793Z
Learning: Applies to tests/{unit,integration,e2e}/**/*.test.js : Use sample task fixtures for consistent test data, mock file system operations, and test both success and error paths for task operations.

Applied to files:

  • tests/integration/cli/complex-cross-tag-scenarios.test.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
📚 Learning: 2025-07-18T17:14:29.399Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tasks.mdc:0-0
Timestamp: 2025-07-18T17:14:29.399Z
Learning: Applies to scripts/modules/task-manager.js : Assign unique sequential IDs to tasks within each tag context by calculating the next ID based on existing tasks in the current tag; do not hardcode or reuse IDs within the same tag.

Applied to files:

  • tests/integration/cli/complex-cross-tag-scenarios.test.js
📚 Learning: 2025-07-18T17:16:13.793Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tests.mdc:0-0
Timestamp: 2025-07-18T17:16:13.793Z
Learning: Applies to **/*.test.js : Do not import real AI service clients in tests; create fully mocked versions that return predictable responses.

Applied to files:

  • tests/unit/scripts/modules/task-manager/expand-task.test.js
📚 Learning: 2025-07-18T17:06:57.833Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_services.mdc:0-0
Timestamp: 2025-07-18T17:06:57.833Z
Learning: Applies to scripts/modules/task-manager/*.js : Centralize all LLM calls through `generateTextService` or `generateObjectService`.

Applied to files:

  • tests/unit/scripts/modules/task-manager/expand-task.test.js
📚 Learning: 2025-07-18T17:18:17.759Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/utilities.mdc:0-0
Timestamp: 2025-07-18T17:18:17.759Z
Learning: Applies to scripts/modules/task-manager/**/*.js : Do not call AI-specific getters (like `getMainModelId`, `getMainMaxTokens`) from core logic functions in `scripts/modules/task-manager/*`; instead, pass the `role` to the unified AI service.

Applied to files:

  • tests/unit/scripts/modules/task-manager/expand-task.test.js
📚 Learning: 2025-07-18T17:06:57.833Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_services.mdc:0-0
Timestamp: 2025-07-18T17:06:57.833Z
Learning: Applies to scripts/modules/ai-services-unified.js : Centralize all LLM calls through `generateTextService` or `generateObjectService`.

Applied to files:

  • tests/unit/scripts/modules/task-manager/expand-task.test.js
📚 Learning: 2025-07-18T17:06:57.833Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_services.mdc:0-0
Timestamp: 2025-07-18T17:06:57.833Z
Learning: Applies to scripts/modules/commands.js : Centralize all LLM calls through `generateTextService` or `generateObjectService`.

Applied to files:

  • tests/unit/scripts/modules/task-manager/expand-task.test.js
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to src/ai-providers/*.js : Provider modules must import the provider's create<ProviderName> function from @ai-sdk/<provider-name>, and import generateText, streamText, generateObject from the core ai package, as well as the log utility from ../../scripts/modules/utils.js.

Applied to files:

  • tests/unit/scripts/modules/task-manager/expand-task.test.js
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to tests/unit/ai-providers/*.test.js : Create unit tests for the new provider in tests/unit/ai-providers/<provider-name>.test.js, mocking @ai-sdk/<provider-name> and core ai module functions, and testing all exported functions for correct behavior and error handling.

Applied to files:

  • tests/unit/scripts/modules/task-manager/expand-task.test.js
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to src/ai-providers/*.js : Implement generate<ProviderName>Text, stream<ProviderName>Text, and generate<ProviderName>Object functions in provider modules with basic validation and try/catch error handling.

Applied to files:

  • src/ai-providers/base-provider.js
🧬 Code graph analysis (7)
scripts/modules/task-manager/expand-task.js (3)
scripts/modules/task-manager/scope-adjustment.js (1)
  • generatedSubtasks (380-380)
src/schemas/registry.js (2)
  • COMMAND_SCHEMAS (9-17)
  • COMMAND_SCHEMAS (9-17)
scripts/modules/task-manager/expand-all-tasks.js (1)
  • logger (53-70)
tests/unit/prompts/prompt-migration.test.js (1)
tests/integration/cli/complex-cross-tag-scenarios.test.js (1)
  • __dirname (8-8)
tests/integration/cli/complex-cross-tag-scenarios.test.js (3)
scripts/modules/commands.js (1)
  • tasksPath (5155-5155)
mcp-server/src/core/direct-functions/move-task-cross-tag.js (1)
  • tasksPath (82-82)
scripts/modules/ui.js (1)
  • tasksPath (1792-1792)
tests/unit/prompts/expand-task-prompt.test.js (1)
scripts/modules/task-manager/scope-adjustment.js (1)
  • subtaskSchema (355-367)
tests/unit/scripts/modules/task-manager/expand-task.test.js (4)
tests/unit/scripts/modules/task-manager/update-task-by-id.test.js (1)
  • generateObjectService (142-142)
tests/unit/scripts/modules/task-manager/add-task.test.js (1)
  • createMcpLogMock (217-223)
tests/unit/scripts/modules/task-manager/complexity-report-tag-isolation.test.js (1)
  • expandTask (485-487)
tests/unit/scripts/modules/task-manager/expand-all-tasks.test.js (1)
  • expandTask (51-53)
src/ai-providers/base-provider.js (2)
scripts/modules/ai-services-unified.js (1)
  • result (404-404)
scripts/modules/task-manager/parse-prd/parse-prd-streaming.js (1)
  • result (559-568)
apps/cli/src/utils/auto-update.ts (1)
tests/unit/ui.test.js (1)
  • chalk (17-17)
🔇 Additional comments (14)
tests/integration/cli/complex-cross-tag-scenarios.test.js (1)

333-333: LGTM! Comprehensive verification now addresses past review feedback.

The test updates properly implement the suggestions from the past review:

  1. Task count reduction (lines 333, 351-352, 362-363): Reduces total tasks from 100 to 50 while maintaining test coverage, likely improving performance.

  2. Comprehensive move verification (lines 384-399): Now verifies the entire dependency chain:

    • All 25 tasks (1-25) successfully moved from master to in-progress
    • All 25 original in-progress tasks (26-50) remain
    • Final count check confirms 50 total in-progress tasks
  3. Timing assertions removed (line 374): Comment indicates timing validation removed, aligning with PR objectives for deterministic tests.

This matches the verification pattern referenced in the past review comment and ensures the --with-dependencies flag works correctly for the entire chain.

Also applies to: 351-352, 362-363, 374-399

src/prompts/analyze-complexity.json (1)

47-47: Past review concern addressed: recommendedSubtasks now allows zero.

The prompt now explicitly permits recommendedSubtasks to be a "non-negative integer; 0 if no expansion needed", directly addressing the previous feedback that the "positive integer" constraint prevented legitimate zero values when tasks don't need expansion.

tests/unit/prompts/expand-task-prompt.test.js (2)

2-3: LGTM! Schema imports enable robust validation.

The imports of ExpandTaskResponseSchema and SubtaskSchema enable direct schema validation in tests, replacing the previous weak substring matching approach. This aligns with the centralized schema-driven validation pattern across the PR.


82-97: Excellent! Past review concern addressed with robust schema validation.

The test now validates the schema structure directly by inspecting SubtaskSchema.shape, exactly as suggested in the past review (Option 1). This eliminates false positives from substring matching and ensures the actual schema definition includes all required fields: id, title, description, dependencies, details, status, and testStrategy.

tests/unit/prompts/prompt-migration.test.js (1)

1-56: LGTM! Test enforces prompt migration standards.

This new test validates that prompt files have migrated away from JSON formatting instructions, aligning with the PR's shift to structured object generation. The implementation correctly addresses past review concerns:

  1. allowedContexts mapping fixed (lines 19-22): Now correctly maps banned phrases (lowercase) to their allowed contexts, fixing the past review issue where keys wouldn't match lowerPhrase.

  2. No unused variable (line 32): Reads file content directly without unnecessary JSON parsing, addressing the past review concern.

  3. Clear assertion logic (lines 47-50): When a banned phrase is found, explicitly checks if it appears in an allowed context with a descriptive error message.

The test properly excludes schema files and update-subtask.json (which returns plain strings), and enforces that prompts rely on structured schemas rather than manual JSON instructions.

tests/unit/scripts/modules/task-manager/expand-task.test.js (4)

68-119: LGTM! Mock correctly reflects generateObjectService response structure.

The mock now returns aiServiceResponse.mainResult.subtasks with properly structured subtask objects, matching the ExpandTaskResponseSchema and SubtaskSchema shapes. Each subtask includes all required fields (id, title, description, dependencies, details, status, testStrategy), and telemetry data is properly included. This aligns with the PR's migration to structured object generation.


216-218: LGTM! Tests correctly verify generateObjectService integration.

The tests properly import and assert generateObjectService usage:

  • Import updated to generateObjectService (lines 216-218)
  • Assertions verify service calls (lines 376, 499)
  • Research role parameter correctly validated (lines 461-469)

This aligns with the PR's migration from text-based to structured object generation across task-manager modules.

Also applies to: 376-376, 461-469, 499-499


664-695: LGTM! Append behavior test thoroughly validates object-based flow.

The test correctly verifies the append functionality with generateObjectService:

  1. Service call verification (lines 664-670): Asserts generateObjectService called with correct parameters including role: 'main', commandName: 'expand-task', and objectName: 'subtasks'.

  2. Append verification (lines 672-695): Validates that existing subtasks are preserved (lines 682-687) and new subtasks are appended (lines 690-695), resulting in 4 total subtasks (1 existing + 3 new).

  3. Complexity report integration (lines 760-761): Verifies service call parameters in complexity report scenario.

This thoroughly tests the structured object generation path and append behavior introduced in the PR.

Also applies to: 760-761


847-905: LGTM! Comprehensive error handling and edge case coverage.

The tests thoroughly cover error scenarios and edge cases for the generateObjectService integration:

  1. Service failure handling (lines 847-857): Verifies generateObjectService rejection is properly caught and propagated.

  2. Missing mainResult (lines 859-880): Tests the guard logic that validates aiServiceResponse.mainResult exists before accessing subtasks, expecting the error "AI response did not include a valid subtasks array."

  3. Invalid subtasks type (lines 882-905): Validates that non-array subtasks values are rejected with the same clear error message.

  4. Edge cases (lines 1007-1270): All tests updated to use generateObjectService and verify correct behavior for dynamic subtask generation (0 count, negative values, undefined/null handling).

These tests align with the guard logic added in expand-task.js and ensure robust error handling for the structured object generation path.

Also applies to: 1007-1270
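The guard these tests exercise can be sketched as follows (`extractSubtasks` is a hypothetical helper name; the error string is the one the tests expect):

```javascript
// Guard: verify mainResult exists and subtasks is an array before use.
// Covers a missing response, a missing mainResult, and a non-array
// subtasks value with one clear, debuggable error message.
function extractSubtasks(aiServiceResponse) {
	const mainResult = aiServiceResponse?.mainResult;
	if (!mainResult || !Array.isArray(mainResult.subtasks)) {
		throw new Error('AI response did not include a valid subtasks array.');
	}
	return mainResult.subtasks;
}
```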

scripts/modules/task-manager/expand-task.js (5)

1-30: LGTM! Duplicate imports resolved.

The import statements are now clean with no duplicates. Each symbol is imported exactly once:

  • getDebugFlag (line 23)
  • getDefaultSubtasks (line 22)
  • getPromptManager (line 26)
  • flattenTasksWithSubtasks (line 27)
  • findProjectRoot (line 27)

This addresses the past review concern about parse errors from duplicate imports.


18-19: LGTM! Imports and documentation updated for object-based generation.

The new imports enable the migration to structured AI responses:

  • COMMAND_SCHEMAS (line 18): Provides access to the centralized schema registry for validation
  • generateObjectService (line 19): Replaces text-based generation with structured object generation

The JSDoc comments (lines 32-49) accurately describe the updated behavior, including the switch to generateObjectService, sequential subtask ID numbering, and complexity report integration.

Also applies to: 32-49


200-204: LGTM! Object-based generation with robust error handling.

The implementation correctly migrates to structured object generation:

  1. Sequential subtask IDs (lines 200-204): Calculates nextSubtaskId based on existing subtasks count, ensuring sequential numbering (1, 2, 3...) that displays as X.1, X.2, X.3 format as noted in the PR objectives.

  2. generateObjectService integration (lines 309-319): Properly calls the service with:

    • Schema validation via COMMAND_SCHEMAS['expand-task'] (line 315)
    • objectName: 'subtasks' to match the schema structure (line 316)
    • Appropriate role, prompts, and telemetry parameters
  3. Guard logic (lines 321-327): Implements the exact pattern suggested in the past review, verifying mainResult exists and subtasks is an array before accessing it. The clear error message "AI response did not include a valid subtasks array." aids debugging.

This addresses the past review concern about missing mainResult handling and aligns with the PR's schema-driven validation approach.

Also applies to: 293-327


340-350: LGTM! Task update preserves append-by-default behavior.

The task update logic correctly implements the append-by-default behavior described in the function documentation:

  1. Array initialization (lines 341-343): Ensures task.subtasks is an array before appending.

  2. Append operation (line 345): Uses push(...generatedSubtasks) to append new subtasks, preserving existing ones unless the force flag was set earlier (lines 101-107).

  3. Tag-aware write (line 349): Properly passes the tag parameter to writeJSON, maintaining the tagged structure and preventing the corruption issues mentioned in earlier code comments.


352-375: LGTM! Telemetry and error handling properly implemented.

The final sections correctly handle telemetry display and error scenarios:

  1. CLI telemetry (lines 352-359): Displays AI usage summary only for CLI output format (outputFormat === 'text'), respecting the separation between CLI and MCP output modes.

  2. Return structure (lines 361-366): Returns an object with task, telemetryData, and tagInfo, matching the expected structure for both CLI and MCP consumers as noted in coding guidelines.

  3. Error handling (lines 367-375): Properly logs errors, includes debug mode stack traces for CLI, and re-throws for caller handling.

This aligns with the telemetry and error handling patterns across the task-manager modules in the PR.
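The CLI-only telemetry display amounts to a small conditional; sketched here with a hypothetical wrapper (`displayAiUsageSummary` is the real UI helper the guidelines name, injected here as `display`):

```javascript
// Show AI usage only for CLI output, and only when telemetry exists.
// MCP consumers receive telemetryData in the return value instead.
function maybeDisplayTelemetry(aiServiceResponse, outputFormat, display) {
	if (outputFormat === 'text' && aiServiceResponse?.telemetryData) {
		display(aiServiceResponse.telemetryData, 'cli');
		return true;
	}
	return false;
}
```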

@Crunchyman-ralph force-pushed the feat-generate-object-refactor-v3 branch from 9ed3e83 to f68330e on October 2, 2025, 13:30
Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/ai-providers/base-provider.js (1)

321-335: Validate repaired JSON against the schema

After repairing malformed JSON we return the raw parsed object without re-validating it. Downstream code relies on the schema guarantee, so this can surface structurally invalid data (e.g., missing required fields) despite the repair succeeding. Please re-run the object through the provided schema (when it exposes parse) before returning.

-					const parsed = JSON.parse(repairedJson);
+					const parsed = JSON.parse(repairedJson);
+					const validatedObject =
+						typeof params.schema?.parse === 'function'
+							? params.schema.parse(parsed)
+							: parsed;
...
-						object: parsed,
+						object: validatedObject,
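The suggested fix boils down to one re-validation step; a sketch (hypothetical helper name, with `schema` being whatever arrived in `params.schema`):

```javascript
// After repairing malformed JSON, re-validate through the schema's
// parse() when one is available, instead of trusting the raw parse.
// Falls back to the parsed object if the schema exposes no parse().
function finalizeRepairedObject(repairedJson, schema) {
	const parsed = JSON.parse(repairedJson);
	return typeof schema?.parse === 'function' ? schema.parse(parsed) : parsed;
}
```

This restores the structural guarantee downstream code relies on: a repaired-but-invalid object fails loudly at the boundary rather than surfacing missing fields later.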
♻️ Duplicate comments (2)
tests/integration/cli/complex-cross-tag-scenarios.test.js (1)

380-400: Verification comprehensively confirms the entire dependency chain was moved.

The test now properly verifies:

  1. All tasks in the dependency chain (1-25) were removed from master
  2. All tasks in the dependency chain (1-25) were added to in-progress
  3. Original in-progress tasks (26-50) remain intact
  4. Final count is correct (50 tasks total)

This directly addresses the previous review comment and mirrors the verification pattern from the "Complex Dependency Chains" test.

src/schemas/analyze-complexity.js (1)

3-14: Consider adding validation constraints for data quality.

The schema lacks string length constraints and bounds on numeric fields, which could allow edge cases like empty strings or unreasonably large subtask counts. While the current schema is functional, adding constraints would improve consistency with other schemas in the codebase (e.g., SubtaskSchema uses .min(5).max(200) on title fields).

Consider these optional enhancements:

 export const ComplexityAnalysisItemSchema = z.object({
 	taskId: z.number().int().positive(),
-	taskTitle: z.string(),
+	taskTitle: z.string().trim().min(1).max(200),
 	complexityScore: z.number().min(1).max(10),
-	recommendedSubtasks: z.number().int().nonnegative(),
+	recommendedSubtasks: z.number().int().min(0).max(50),
-	expansionPrompt: z.string(),
-	reasoning: z.string()
+	expansionPrompt: z.string().trim().min(10).max(2000),
+	reasoning: z.string().trim().min(10).max(2000)
 });

Benefits:

  • Prevents empty/whitespace-only strings
  • Caps recommendedSubtasks at a reasonable limit
  • Aligns validation patterns with other task schemas

Based on learnings
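The semantics the proposed Zod constraints would enforce can be shown without Zod. The validator below is hypothetical and covers three of the fields for brevity (the string-length checks on `expansionPrompt` and `reasoning` would be analogous):

```javascript
// Plain-JS illustration of the proposed constraints: trimmed, bounded
// strings and a 0-50 cap on recommendedSubtasks.
function checkComplexityItem(item) {
	const errors = [];
	const title = String(item.taskTitle ?? '').trim();
	if (title.length < 1 || title.length > 200) {
		errors.push('taskTitle must be 1-200 chars after trimming');
	}
	if (!(item.complexityScore >= 1 && item.complexityScore <= 10)) {
		errors.push('complexityScore must be between 1 and 10');
	}
	if (
		!Number.isInteger(item.recommendedSubtasks) ||
		item.recommendedSubtasks < 0 ||
		item.recommendedSubtasks > 50
	) {
		errors.push('recommendedSubtasks must be an integer between 0 and 50');
	}
	return errors;
}
```

An empty or whitespace-only title and a 999-subtask recommendation both fail here, which is exactly the class of edge case the unconstrained schema currently admits.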

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9ed3e83 and f68330e.

📒 Files selected for processing (12)
  • apps/cli/src/utils/auto-update.ts (1 hunks)
  • scripts/modules/task-manager/expand-task.js (4 hunks)
  • src/ai-providers/base-provider.js (3 hunks)
  • src/ai-providers/claude-code.js (1 hunks)
  • src/ai-providers/gemini-cli.js (2 hunks)
  • src/ai-providers/grok-cli.js (1 hunks)
  • src/prompts/analyze-complexity.json (1 hunks)
  • src/schemas/analyze-complexity.js (1 hunks)
  • tests/integration/cli/complex-cross-tag-scenarios.test.js (4 hunks)
  • tests/unit/prompts/expand-task-prompt.test.js (2 hunks)
  • tests/unit/prompts/prompt-migration.test.js (1 hunks)
  • tests/unit/scripts/modules/task-manager/expand-task.test.js (17 hunks)
🧰 Additional context used
📓 Path-based instructions (15)
tests/{unit,integration,e2e,fixtures}/**/*.js

📄 CodeRabbit inference engine (.cursor/rules/architecture.mdc)

Test files must be organized as follows: unit tests in tests/unit/, integration tests in tests/integration/, end-to-end tests in tests/e2e/, and test fixtures in tests/fixtures/.

Files:

  • tests/integration/cli/complex-cross-tag-scenarios.test.js
  • tests/unit/prompts/expand-task-prompt.test.js
  • tests/unit/prompts/prompt-migration.test.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
**/*.{test,spec}.{js,ts,jsx,tsx}

📄 CodeRabbit inference engine (.cursor/rules/git_workflow.mdc)

**/*.{test,spec}.{js,ts,jsx,tsx}: Create a test file and ensure all tests pass when all subtasks are complete; commit tests if added or modified
When all subtasks are complete, run final testing using the appropriate test runner (e.g., npm test, jest, or manual testing)

Files:

  • tests/integration/cli/complex-cross-tag-scenarios.test.js
  • tests/unit/prompts/expand-task-prompt.test.js
  • tests/unit/prompts/prompt-migration.test.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
**/*.test.js

📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)

**/*.test.js: Never use asynchronous operations in tests. Make all mocks return synchronous values when possible.
Always mock tests properly based on the way the tested functions are defined and used.
Follow the test file organization: mocks must be set up before importing modules under test, and spies on mocked modules should be set up after imports.
Use fixtures from tests/fixtures/ for consistent sample data across tests.
Always declare mocks before importing the modules being tested in Jest test files.
Use jest.spyOn() after imports to create spies on mock functions and reference these spies in test assertions.
When testing functions with callbacks, get the callback from your mock's call arguments, execute it directly with test inputs, and verify the results.
For ES modules, use jest.mock() before static imports and jest.unstable_mockModule() before dynamic imports to mock dependencies.
Reset mock functions (mockFn.mockReset()) before dynamic imports if they might have been called previously.
When verifying console assertions, assert against the actual arguments passed (single formatted string), not multiple arguments.
Use mock-fs to mock file system operations in tests, and restore the file system after each test.
Mock API calls (e.g., Anthropic/Claude) by mocking the entire module and providing predictable responses.
Set mock environment variables in test setup and restore them after each test.
Maintain test fixtures separate from test logic.
Follow the mock-first-then-import pattern for all Jest mocks.
Do not define mock variables before jest.mock() calls (they won't be accessible due to hoisting).
Use test-specific file paths (e.g., 'test-tasks.json') for all file operations in tests.
Mock readJSON and writeJSON to avoid real file system interactions in tests.
Verify file operations use the correct paths in expect statements.
Use different file paths for each test to avoid test interdependence.
Verify modifications on the in-memory task objects passed to writeJSON.

Files:

  • tests/integration/cli/complex-cross-tag-scenarios.test.js
  • tests/unit/prompts/expand-task-prompt.test.js
  • tests/unit/prompts/prompt-migration.test.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
tests/integration/**/*.test.js

📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)

Integration tests must be located in tests/integration/, test interactions between modules, and focus on component interfaces rather than implementation details.

Files:

  • tests/integration/cli/complex-cross-tag-scenarios.test.js
tests/{unit,integration,e2e}/**/*.test.js

📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)

tests/{unit,integration,e2e}/**/*.test.js: When testing CLI commands built with Commander.js, test the command action handlers directly rather than trying to mock the entire Commander.js chain.
When mocking the Commander.js chain, mock ALL chainable methods (option, argument, action, on, etc.) and return this (or the mock object) from all chainable method mocks.
Explicitly handle all options, including defaults and shorthand flags (e.g., -p for --prompt), and include null/undefined checks in test implementations for parameters that might be optional.
Do not try to use the real action implementation without proper mocking, and do not mock Commander partially—either mock it completely or test the action directly.
Mock the action handlers for CLI commands and verify they're called with correct arguments.
Use sample task fixtures for consistent test data, mock file system operations, and test both success and error paths for task operations.
Mock console output and verify correct formatting in UI function tests. Use flexible assertions like toContain() or toMatch() for formatted output.
Mock chalk functions to return the input text to make testing easier while still verifying correct function calls.

Files:

  • tests/integration/cli/complex-cross-tag-scenarios.test.js
  • tests/unit/prompts/expand-task-prompt.test.js
  • tests/unit/prompts/prompt-migration.test.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
tests/{integration,e2e}/**/*.test.js

📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)

Properly mock session objects when required by functions, and reset environment variables between tests if modified.

Files:

  • tests/integration/cli/complex-cross-tag-scenarios.test.js
**/*.js

📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)

**/*.js: Declare and initialize global variables at the top of modules to avoid hoisting issues.
Use proper function declarations to avoid hoisting issues and initialize variables before they are referenced.
Do not reference variables before their declaration in module scope.
Use dynamic imports (import()) to avoid initialization order issues in modules.

Files:

  • tests/integration/cli/complex-cross-tag-scenarios.test.js
  • src/ai-providers/claude-code.js
  • tests/unit/prompts/expand-task-prompt.test.js
  • tests/unit/prompts/prompt-migration.test.js
  • scripts/modules/task-manager/expand-task.js
  • src/ai-providers/base-provider.js
  • src/ai-providers/gemini-cli.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
  • src/ai-providers/grok-cli.js
  • src/schemas/analyze-complexity.js
**/*.{test,spec}.*

📄 CodeRabbit inference engine (.cursor/rules/test_workflow.mdc)

Test files should follow naming conventions: `*.test.*`, `*.spec.*`, or `*_test.*` depending on the language

Files:

  • tests/integration/cli/complex-cross-tag-scenarios.test.js
  • tests/unit/prompts/expand-task-prompt.test.js
  • tests/unit/prompts/prompt-migration.test.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
tests/{unit,integration,e2e}/**

📄 CodeRabbit inference engine (.cursor/rules/test_workflow.mdc)

Organize test directories by test type (unit, integration, e2e) and mirror source structure where possible

Files:

  • tests/integration/cli/complex-cross-tag-scenarios.test.js
  • tests/unit/prompts/expand-task-prompt.test.js
  • tests/unit/prompts/prompt-migration.test.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
src/ai-providers/*.js

📄 CodeRabbit inference engine (.cursor/rules/ai_providers.mdc)

src/ai-providers/*.js: Create a new provider module file in src/ai-providers/ named <provider-name>.js when adding a new AI provider.
Provider modules must export three functions: generateText, streamText, and generateObject.
Provider modules must import the provider's create<ProviderName> function from @ai-sdk/<provider-name>, and import generateText, streamText, generateObject from the core ai package, as well as the log utility from ../../scripts/modules/utils.js.
Implement generateText, streamText, and generateObject functions in provider modules with basic validation and try/catch error handling.

Provider-specific wrappers for Vercel AI SDK functions must be implemented in src/ai-providers/*.js, each file corresponding to a provider.

Files:

  • src/ai-providers/claude-code.js
  • src/ai-providers/base-provider.js
  • src/ai-providers/gemini-cli.js
  • src/ai-providers/grok-cli.js
tests/unit/**/*.test.js

📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)

tests/unit/**/*.test.js: Unit tests must be located in tests/unit/, test individual functions and utilities in isolation, mock all external dependencies, and keep tests small, focused, and fast.
Do not include actual command execution in unit tests.

Files:

  • tests/unit/prompts/expand-task-prompt.test.js
  • tests/unit/prompts/prompt-migration.test.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
scripts/modules/task-manager/*.js

📄 CodeRabbit inference engine (.cursor/rules/ai_services.mdc)

scripts/modules/task-manager/*.js: Centralize all LLM calls through generateTextService or generateObjectService.
Do not import or call anything from the old ai-services.js, ai-client-factory.js, or ai-client-utils.js files.
Do not initialize AI clients (Anthropic, Perplexity, etc.) directly within core logic (task-manager/) or MCP direct functions.
Do not fetch AI-specific parameters (model ID, max tokens, temp) using config-manager.js getters for the AI call. Pass the role instead.
Do not implement fallback or retry logic outside ai-services-unified.js.
Do not handle API key resolution outside the service layer (it uses utils.js internally).
Determine the appropriate role (main, research, fallback) in your core logic and pass it to the service.
Pass the session object (received in the context parameter, especially from direct function wrappers) to the service call when in MCP context.
Use generateTextService and implement robust manual JSON parsing (with Zod validation after parsing) when structured output is needed, as generateObjectService has shown unreliability with some providers/schemas.
Be aware of potential reliability issues with generateObjectService across different providers and complex schemas. Prefer generateTextService + manual parsing as a more robust alternative for structured data needs.

Files in scripts/modules/task-manager/ should each handle a specific action related to task management (e.g., add-task.js, expand-task.js), supporting the tagged task lists system and backward compatibility.

Files:

  • scripts/modules/task-manager/expand-task.js
scripts/modules/**

📄 CodeRabbit inference engine (.cursor/rules/dev_workflow.mdc)

When using the MCP server, restart it if core logic in scripts/modules or MCP tool/direct function definitions change.

Files:

  • scripts/modules/task-manager/expand-task.js
scripts/modules/task-manager/*

📄 CodeRabbit inference engine (.cursor/rules/tags.mdc)

scripts/modules/task-manager/*: All core functions in scripts/modules/task-manager/ must accept a context parameter and use it to extract projectRoot and tag
All core functions in scripts/modules/task-manager/ must use readJSON(tasksPath, projectRoot, tag) and writeJSON(tasksPath, data, projectRoot, tag)

Files:

  • scripts/modules/task-manager/expand-task.js
scripts/modules/task-manager/**/*.js

📄 CodeRabbit inference engine (.cursor/rules/telemetry.mdc)

scripts/modules/task-manager/**/*.js: Functions in scripts/modules/task-manager/ that invoke AI services must call the appropriate AI service function (e.g., generateObjectService), passing commandName and outputType in the params object.
Core logic functions in scripts/modules/task-manager/ must return an object that includes aiServiceResponse.telemetryData.
If the core logic function handles CLI output (outputFormat === 'text' or 'cli'), and aiServiceResponse.telemetryData is available, it must call displayAiUsageSummary(aiServiceResponse.telemetryData, 'cli') from scripts/modules/ui.js.

Do not call AI-specific getters (like getMainModelId, getMainMaxTokens) from core logic functions in scripts/modules/task-manager/*; instead, pass the role to the unified AI service.

Files:

  • scripts/modules/task-manager/expand-task.js
🧠 Learnings (25)
📓 Common learnings
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1262
File: scripts/modules/task-manager/update-tasks.js:216-233
Timestamp: 2025-10-01T19:53:34.261Z
Learning: For scripts/modules/task-manager/*.js: Use generateObjectService with Zod schemas for structured AI responses rather than generateTextService + manual JSON parsing, as modern AI providers increasingly support the tool use and generateObject paradigm with improved reliability.
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/telemetry.mdc:0-0
Timestamp: 2025-07-18T17:14:54.131Z
Learning: Applies to scripts/modules/task-manager/**/*.js : Functions in scripts/modules/task-manager/ that invoke AI services must call the appropriate AI service function (e.g., generateObjectService), passing commandName and outputType in the params object.
📚 Learning: 2025-07-18T17:12:57.903Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/new_features.mdc:0-0
Timestamp: 2025-07-18T17:12:57.903Z
Learning: Applies to scripts/modules/**/*.test.js : Test CLI and MCP interfaces with real task data, verify end-to-end workflows across tag contexts, and test error scenarios and recovery in integration tests.

Applied to files:

  • tests/integration/cli/complex-cross-tag-scenarios.test.js
📚 Learning: 2025-07-18T17:14:29.399Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tasks.mdc:0-0
Timestamp: 2025-07-18T17:14:29.399Z
Learning: Applies to scripts/modules/task-manager.js : Assign unique sequential IDs to tasks within each tag context by calculating the next ID based on existing tasks in the current tag; do not hardcode or reuse IDs within the same tag.

Applied to files:

  • tests/integration/cli/complex-cross-tag-scenarios.test.js
📚 Learning: 2025-07-18T17:16:13.793Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tests.mdc:0-0
Timestamp: 2025-07-18T17:16:13.793Z
Learning: Applies to tests/{unit,integration,e2e}/**/*.test.js : Explicitly handle all options, including defaults and shorthand flags (e.g., -p for --prompt), and include null/undefined checks in test implementations for parameters that might be optional.

Applied to files:

  • tests/unit/prompts/prompt-migration.test.js
📚 Learning: 2025-07-18T17:12:57.903Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/new_features.mdc:0-0
Timestamp: 2025-07-18T17:12:57.903Z
Learning: Applies to scripts/modules/**/*.test.js : Test core logic independently with both data formats, mock file system operations, test tag resolution behavior, and verify migration compatibility in unit tests.

Applied to files:

  • tests/unit/prompts/prompt-migration.test.js
📚 Learning: 2025-07-18T17:14:29.399Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tasks.mdc:0-0
Timestamp: 2025-07-18T17:14:29.399Z
Learning: Applies to scripts/modules/task-manager.js : Use AI to generate detailed subtasks within the current tag context, considering complexity analysis for subtask counts and ensuring proper IDs for newly created subtasks.

Applied to files:

  • scripts/modules/task-manager/expand-task.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
📚 Learning: 2025-10-01T19:53:34.261Z
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1262
File: scripts/modules/task-manager/update-tasks.js:216-233
Timestamp: 2025-10-01T19:53:34.261Z
Learning: For scripts/modules/task-manager/*.js: Use generateObjectService with Zod schemas for structured AI responses rather than generateTextService + manual JSON parsing, as modern AI providers increasingly support the tool use and generateObject paradigm with improved reliability.

Applied to files:

  • scripts/modules/task-manager/expand-task.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
📚 Learning: 2025-07-18T17:14:29.399Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tasks.mdc:0-0
Timestamp: 2025-07-18T17:14:29.399Z
Learning: Applies to scripts/modules/task-manager.js : Extract tasks from PRD documents using AI, create them in the current tag context (defaulting to 'master'), provide clear prompts to guide AI task generation, and validate/clean up AI-generated tasks.

Applied to files:

  • scripts/modules/task-manager/expand-task.js
📚 Learning: 2025-07-18T17:10:02.683Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/dev_workflow.mdc:0-0
Timestamp: 2025-07-18T17:10:02.683Z
Learning: When breaking down complex tasks in Taskmaster, use the `expand_task` command with appropriate flags (`--num`, `--research`, `--force`, `--prompt`) and review generated subtasks for accuracy.

Applied to files:

  • scripts/modules/task-manager/expand-task.js
📚 Learning: 2025-07-18T17:10:12.881Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/dev_workflow.mdc:0-0
Timestamp: 2025-07-18T17:10:12.881Z
Learning: When breaking down complex tasks, use the `expand_task` command with appropriate flags (`--force`, `--research`, `--num`, `--prompt`) and review generated subtasks for accuracy.

Applied to files:

  • scripts/modules/task-manager/expand-task.js
📚 Learning: 2025-07-18T17:09:13.815Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/context_gathering.mdc:0-0
Timestamp: 2025-07-18T17:09:13.815Z
Learning: Commands such as `analyze-complexity`, `expand-task`, `update-task`, and `add-task` should consider adopting the context gathering pattern for improved AI-powered assistance.

Applied to files:

  • scripts/modules/task-manager/expand-task.js
📚 Learning: 2025-07-18T17:14:54.131Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/telemetry.mdc:0-0
Timestamp: 2025-07-18T17:14:54.131Z
Learning: Applies to scripts/modules/task-manager/**/*.js : Functions in scripts/modules/task-manager/ that invoke AI services must call the appropriate AI service function (e.g., generateObjectService), passing commandName and outputType in the params object.

Applied to files:

  • scripts/modules/task-manager/expand-task.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
📚 Learning: 2025-07-18T17:18:17.759Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/utilities.mdc:0-0
Timestamp: 2025-07-18T17:18:17.759Z
Learning: Applies to scripts/modules/task-manager/**/*.js : Do not call AI-specific getters (like `getMainModelId`, `getMainMaxTokens`) from core logic functions in `scripts/modules/task-manager/*`; instead, pass the `role` to the unified AI service.

Applied to files:

  • scripts/modules/task-manager/expand-task.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
📚 Learning: 2025-07-18T17:06:57.833Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_services.mdc:0-0
Timestamp: 2025-07-18T17:06:57.833Z
Learning: Applies to scripts/modules/task-manager/*.js : Do not import or call anything from the old `ai-services.js`, `ai-client-factory.js`, or `ai-client-utils.js` files.

Applied to files:

  • scripts/modules/task-manager/expand-task.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
📚 Learning: 2025-07-18T17:09:45.690Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/dependencies.mdc:0-0
Timestamp: 2025-07-18T17:09:45.690Z
Learning: Applies to scripts/modules/dependency-manager.js : Remove references to non-existent tasks during validation

Applied to files:

  • scripts/modules/task-manager/expand-task.js
📚 Learning: 2025-07-18T17:14:54.131Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/telemetry.mdc:0-0
Timestamp: 2025-07-18T17:14:54.131Z
Learning: Applies to scripts/modules/task-manager/**/*.js : Core logic functions in scripts/modules/task-manager/ must return an object that includes aiServiceResponse.telemetryData.

Applied to files:

  • scripts/modules/task-manager/expand-task.js
  • tests/unit/scripts/modules/task-manager/expand-task.test.js
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to src/ai-providers/*.js : Implement generate<ProviderName>Text, stream<ProviderName>Text, and generate<ProviderName>Object functions in provider modules with basic validation and try/catch error handling.

Applied to files:

  • src/ai-providers/base-provider.js
📚 Learning: 2025-07-18T17:16:13.793Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tests.mdc:0-0
Timestamp: 2025-07-18T17:16:13.793Z
Learning: Applies to **/*.test.js : Do not import real AI service clients in tests; create fully mocked versions that return predictable responses.

Applied to files:

  • tests/unit/scripts/modules/task-manager/expand-task.test.js
📚 Learning: 2025-07-18T17:06:57.833Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_services.mdc:0-0
Timestamp: 2025-07-18T17:06:57.833Z
Learning: Applies to scripts/modules/task-manager/*.js : Centralize all LLM calls through `generateTextService` or `generateObjectService`.

Applied to files:

  • tests/unit/scripts/modules/task-manager/expand-task.test.js
📚 Learning: 2025-07-18T17:06:57.833Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_services.mdc:0-0
Timestamp: 2025-07-18T17:06:57.833Z
Learning: Applies to scripts/modules/ai-services-unified.js : Centralize all LLM calls through `generateTextService` or `generateObjectService`.

Applied to files:

  • tests/unit/scripts/modules/task-manager/expand-task.test.js
📚 Learning: 2025-07-18T17:06:57.833Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_services.mdc:0-0
Timestamp: 2025-07-18T17:06:57.833Z
Learning: Applies to scripts/modules/commands.js : Centralize all LLM calls through `generateTextService` or `generateObjectService`.

Applied to files:

  • tests/unit/scripts/modules/task-manager/expand-task.test.js
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to src/ai-providers/*.js : Provider modules must import the provider's create<ProviderName> function from @ai-sdk/<provider-name>, and import generateText, streamText, generateObject from the core ai package, as well as the log utility from ../../scripts/modules/utils.js.

Applied to files:

  • tests/unit/scripts/modules/task-manager/expand-task.test.js
📚 Learning: 2025-07-18T17:16:13.793Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tests.mdc:0-0
Timestamp: 2025-07-18T17:16:13.793Z
Learning: Applies to **/*.test.js : Verify modifications on the in-memory task objects passed to writeJSON.

Applied to files:

  • tests/unit/scripts/modules/task-manager/expand-task.test.js
📚 Learning: 2025-07-18T17:16:13.793Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tests.mdc:0-0
Timestamp: 2025-07-18T17:16:13.793Z
Learning: Applies to tests/{unit,integration,e2e}/**/*.test.js : Use sample task fixtures for consistent test data, mock file system operations, and test both success and error paths for task operations.

Applied to files:

  • tests/unit/scripts/modules/task-manager/expand-task.test.js
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to tests/unit/ai-providers/*.test.js : Create unit tests for the new provider in tests/unit/ai-providers/<provider-name>.test.js, mocking @ai-sdk/<provider-name> and core ai module functions, and testing all exported functions for correct behavior and error handling.

Applied to files:

  • tests/unit/scripts/modules/task-manager/expand-task.test.js
🧬 Code graph analysis (6)
apps/cli/src/utils/auto-update.ts (1)
tests/unit/ui.test.js (1)
  • chalk (17-17)
tests/integration/cli/complex-cross-tag-scenarios.test.js (3)
scripts/modules/commands.js (1)
  • tasksPath (5155-5155)
mcp-server/src/core/direct-functions/move-task-cross-tag.js (1)
  • tasksPath (82-82)
scripts/modules/ui.js (1)
  • tasksPath (1792-1792)
tests/unit/prompts/expand-task-prompt.test.js (1)
scripts/modules/task-manager/scope-adjustment.js (1)
  • subtaskSchema (355-367)
tests/unit/prompts/prompt-migration.test.js (1)
tests/integration/cli/complex-cross-tag-scenarios.test.js (1)
  • __dirname (8-8)
scripts/modules/task-manager/expand-task.js (3)
scripts/modules/task-manager/scope-adjustment.js (1)
  • generatedSubtasks (380-380)
src/schemas/registry.js (2)
  • COMMAND_SCHEMAS (9-17)
  • COMMAND_SCHEMAS (9-17)
scripts/modules/task-manager/expand-all-tasks.js (1)
  • logger (53-70)
tests/unit/scripts/modules/task-manager/expand-task.test.js (2)
tests/unit/scripts/modules/task-manager/update-task-by-id.test.js (1)
  • generateObjectService (142-142)
tests/unit/scripts/modules/task-manager/add-task.test.js (1)
  • createMcpLogMock (217-223)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Test
🔇 Additional comments (19)
apps/cli/src/utils/auto-update.ts (1)

161-172: LGTM! Past feedback addressed.

The changes correctly implement the requested improvements from previous reviews:

  • NODE_ENV === 'test' is now included in the skip conditions
  • The specific triggering condition is identified and logged
  • The log message clearly communicates why auto-update was skipped

The ternary chain correctly maps each condition to its corresponding reason string, making debugging straightforward.
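The condition-to-reason mapping can be sketched as below. The exact condition names and reason strings are assumptions for illustration, not the literal source of `auto-update.ts`:

```javascript
// Illustrative ternary chain mapping each skip condition to a reason
// string; null means auto-update may proceed. Names are hypothetical.
function autoUpdateSkipReason(env, isCI) {
	return env.NODE_ENV === 'test'
		? 'NODE_ENV=test'
		: isCI
			? 'CI environment'
			: env.TASKMASTER_SKIP_AUTO_UPDATE === '1'
				? 'TASKMASTER_SKIP_AUTO_UPDATE set'
				: null;
}
```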

tests/integration/cli/complex-cross-tag-scenarios.test.js (2)

333-373: LGTM! Test data setup is well-structured.

The test correctly creates 50 tasks across two tags with non-overlapping ID ranges (1-25 in master, 26-50 in in-progress). The sequential dependency chain in master (where each task depends on the previous one) properly tests the --with-dependencies flag functionality.


374-378: LGTM! Move command execution is correct.

The command correctly targets task 25 with the --with-dependencies flag, which should move the entire dependency chain (tasks 1-25). The removal of timing assertions makes the test more reliable and focused on correctness rather than performance.

tests/unit/prompts/expand-task-prompt.test.js (2)

2-3: LGTM: Schema imports support robust validation.

The imports of ExpandTaskResponseSchema and SubtaskSchema enable direct schema validation, which is far more reliable than substring matching in prompt content.


82-96: Excellent fix: Direct schema validation eliminates false positives.

This test now validates the actual schema structure instead of searching for field names in prompt text. The approach is robust and comprehensive, covering all required subtask fields. This directly addresses the previous review concern about weak substring matching.

tests/unit/scripts/modules/task-manager/expand-task.test.js (8)

68-118: LGTM: Mock properly reflects generateObjectService response structure.

The mock correctly returns a structured response with mainResult.subtasks array and telemetryData, matching the expected shape from generateObjectService. All required subtask fields are present.


216-218: LGTM: Correct service import.

Import correctly references generateObjectService from the unified AI services module, consistent with the PR's migration approach.


376-376: LGTM: Service call verification.

The test correctly verifies that generateObjectService was called, which is appropriate for this integration test.


461-466: LGTM: Research role verification.

Test properly verifies that the research flag results in role: 'research' being passed to the service, ensuring correct AI provider selection.


663-695: LGTM: Comprehensive append behavior test.

This test thoroughly verifies the append functionality, including:

  • Correct generateObjectService call with objectName: 'subtasks'
  • Preservation of existing subtasks
  • Proper appending of new subtasks
  • Correct total count

759-761: LGTM: Complexity report integration verification.

Test correctly verifies that complexity report data influences the system prompt passed to generateObjectService.


847-905: LGTM: Comprehensive error handling tests.

These tests provide excellent coverage of error scenarios:

  • AI service failures are properly propagated
  • Missing mainResult is detected and results in clear error
  • Invalid subtasks type is caught with descriptive message
  • All error cases properly prevent file writes

The error messages are user-friendly and align with the guard added in the source code.


1007-1007: LGTM: Consistent service call verification across test cases.

All tests properly verify generateObjectService calls across various scenarios (empty context, dynamic generation, specific counts, defaults), ensuring comprehensive coverage.

Also applies to: 1140-1140, 1162-1162, 1189-1189, 1217-1217, 1243-1243, 1269-1269

scripts/modules/task-manager/expand-task.js (6)

5-10: LGTM: Import organization is clean.

All imports are properly consolidated with no duplicates. The parse errors from duplicate imports mentioned in previous reviews appear to be resolved.

Also applies to: 13-16, 22-27


18-19: LGTM: Correct imports for schema-driven AI responses.

The imports of COMMAND_SCHEMAS and generateObjectService properly support the migration to structured object generation with centralized schema validation.

Based on learnings


32-33: LGTM: Documentation updated to reflect implementation.

The JSDoc comment correctly describes the use of generateObjectService, keeping documentation in sync with code.


200-204: LGTM: Clear subtask ID calculation with helpful comments.

The nextSubtaskId calculation properly handles existing subtasks and uses safe optional chaining. The comment clearly explains the sequential numbering scheme (1, 2, 3...) and display format (X.1, X.2, X.3).
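A minimal sketch of that numbering scheme, assuming the same optional-chaining shape as the reviewed code (helper names here are illustrative):

```javascript
// Sequential subtask ids within a parent: 1, 2, 3... The next id is
// simply one past the count of existing subtasks.
function nextSubtaskId(task) {
	return (task.subtasks?.length ?? 0) + 1;
}

// Display format combines parent and subtask ids: X.1, X.2, X.3.
function displaySubtaskId(parentId, subtaskId) {
	return `${parentId}.${subtaskId}`;
}
```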


309-319: LGTM: Correct generateObjectService invocation.

The service call properly includes all required parameters:

  • Prompts (user and system)
  • Role for provider selection
  • Schema via COMMAND_SCHEMAS['expand-task']
  • Object name matching response structure
  • Telemetry parameters (commandName, outputType)

Based on learnings
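The parameter shape listed in those bullets looks roughly like the sketch below. The stub and parameter names are illustrative assumptions; the real signature lives in ai-services-unified.js and the schema registry:

```javascript
// Stubbed registry and service so the sketch is self-contained
const COMMAND_SCHEMAS = { 'expand-task': { name: 'ExpandTaskResponseSchema' } };

async function generateObjectService(params) {
  // A real implementation would call the provider and validate with Zod
  return { mainResult: { subtasks: [] }, telemetryData: { commandName: params.commandName } };
}

async function main() {
  const aiServiceResponse = await generateObjectService({
    prompt: 'Expand task 7 into 3 subtasks',       // user prompt
    systemPrompt: 'You are a task-expansion assistant',
    role: 'main',                                  // provider selection by role
    schema: COMMAND_SCHEMAS['expand-task'],        // schema from the central registry
    objectName: 'subtasks',                        // matches the response structure
    commandName: 'expand-task',                    // telemetry
    outputType: 'cli'                              // telemetry
  });
  console.log(aiServiceResponse.telemetryData.commandName); // "expand-task"
}
main();
```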


321-327: LGTM: Robust guard against malformed AI responses.

This validation guard properly addresses the previous review concern about potential undefined access. It:

  • Safely checks for mainResult existence
  • Validates subtasks is an array
  • Provides clear error message for debugging
  • Prevents downstream errors from invalid data

This fix directly resolves the previous critical issue.
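The guard's logic can be sketched like this (function and message wording assumed from the review, not copied from the source):

```javascript
// Reject a missing mainResult or a non-array subtasks field before any
// downstream file writes can happen
function assertValidExpandResult(aiServiceResponse) {
  const subtasks = aiServiceResponse?.mainResult?.subtasks;
  if (!Array.isArray(subtasks)) {
    throw new Error(
      'AI response did not include a valid subtasks array; aborting before writing tasks file'
    );
  }
  return subtasks;
}

// Valid response passes through:
console.log(assertValidExpandResult({ mainResult: { subtasks: [{ id: 1 }] } }).length); // 1

// Missing mainResult throws a clear error:
try {
  assertValidExpandResult({});
} catch (e) {
  console.log(e.message.startsWith('AI response')); // true
}
```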

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f68330e and 3aac7ac.

📒 Files selected for processing (1)
  • scripts/modules/task-manager/expand-task.js (4 hunks)
🧰 Additional context used
📓 Path-based instructions (5)
scripts/modules/task-manager/*.js

📄 CodeRabbit inference engine (.cursor/rules/ai_services.mdc)

scripts/modules/task-manager/*.js: Centralize all LLM calls through generateTextService or generateObjectService.
Do not import or call anything from the old ai-services.js, ai-client-factory.js, or ai-client-utils.js files.
Do not initialize AI clients (Anthropic, Perplexity, etc.) directly within core logic (task-manager/) or MCP direct functions.
Do not fetch AI-specific parameters (model ID, max tokens, temp) using config-manager.js getters for the AI call. Pass the role instead.
Do not implement fallback or retry logic outside ai-services-unified.js.
Do not handle API key resolution outside the service layer (it uses utils.js internally).
Determine the appropriate role (main, research, fallback) in your core logic and pass it to the service.
Pass the session object (received in the context parameter, especially from direct function wrappers) to the service call when in MCP context.
Use generateTextService and implement robust manual JSON parsing (with Zod validation after parsing) when structured output is needed, as generateObjectService has shown unreliability with some providers/schemas.
Be aware of potential reliability issues with generateObjectService across different providers and complex schemas. Prefer generateTextService + manual parsing as a more robust alternative for structured data needs.

Files in scripts/modules/task-manager/ should each handle a specific action related to task management (e.g., add-task.js, expand-task.js), supporting the tagged task lists system and backward compatibility.

Files:

  • scripts/modules/task-manager/expand-task.js
scripts/modules/**

📄 CodeRabbit inference engine (.cursor/rules/dev_workflow.mdc)

When using the MCP server, restart it if core logic in scripts/modules or MCP tool/direct function definitions change.

Files:

  • scripts/modules/task-manager/expand-task.js
scripts/modules/task-manager/*

📄 CodeRabbit inference engine (.cursor/rules/tags.mdc)

scripts/modules/task-manager/*: All core functions in scripts/modules/task-manager/ must accept a context parameter and use it to extract projectRoot and tag
All core functions in scripts/modules/task-manager/ must use readJSON(tasksPath, projectRoot, tag) and writeJSON(tasksPath, data, projectRoot, tag)

Files:

  • scripts/modules/task-manager/expand-task.js
scripts/modules/task-manager/**/*.js

📄 CodeRabbit inference engine (.cursor/rules/telemetry.mdc)

scripts/modules/task-manager/**/*.js: Functions in scripts/modules/task-manager/ that invoke AI services must call the appropriate AI service function (e.g., generateObjectService), passing commandName and outputType in the params object.
Core logic functions in scripts/modules/task-manager/ must return an object that includes aiServiceResponse.telemetryData.
If the core logic function handles CLI output (outputFormat === 'text' or 'cli'), and aiServiceResponse.telemetryData is available, it must call displayAiUsageSummary(aiServiceResponse.telemetryData, 'cli') from scripts/modules/ui.js.

Do not call AI-specific getters (like getMainModelId, getMainMaxTokens) from core logic functions in scripts/modules/task-manager/*; instead, pass the role to the unified AI service.

Files:

  • scripts/modules/task-manager/expand-task.js
**/*.js

📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)

**/*.js: Declare and initialize global variables at the top of modules to avoid hoisting issues.
Use proper function declarations to avoid hoisting issues and initialize variables before they are referenced.
Do not reference variables before their declaration in module scope.
Use dynamic imports (import()) to avoid initialization order issues in modules.

Files:

  • scripts/modules/task-manager/expand-task.js
🧠 Learnings (11)
📓 Common learnings
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1262
File: scripts/modules/task-manager/update-tasks.js:216-233
Timestamp: 2025-10-01T19:53:34.261Z
Learning: For scripts/modules/task-manager/*.js: Use generateObjectService with Zod schemas for structured AI responses rather than generateTextService + manual JSON parsing, as modern AI providers increasingly support the tool use and generateObject paradigm with improved reliability.
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/telemetry.mdc:0-0
Timestamp: 2025-07-18T17:14:54.131Z
Learning: Applies to scripts/modules/task-manager/**/*.js : Functions in scripts/modules/task-manager/ that invoke AI services must call the appropriate AI service function (e.g., generateObjectService), passing commandName and outputType in the params object.
📚 Learning: 2025-07-18T17:14:29.399Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tasks.mdc:0-0
Timestamp: 2025-07-18T17:14:29.399Z
Learning: Applies to scripts/modules/task-manager.js : Use AI to generate detailed subtasks within the current tag context, considering complexity analysis for subtask counts and ensuring proper IDs for newly created subtasks.

Applied to files:

  • scripts/modules/task-manager/expand-task.js
📚 Learning: 2025-10-01T19:53:34.261Z
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1262
File: scripts/modules/task-manager/update-tasks.js:216-233
Timestamp: 2025-10-01T19:53:34.261Z
Learning: For scripts/modules/task-manager/*.js: Use generateObjectService with Zod schemas for structured AI responses rather than generateTextService + manual JSON parsing, as modern AI providers increasingly support the tool use and generateObject paradigm with improved reliability.

Applied to files:

  • scripts/modules/task-manager/expand-task.js
📚 Learning: 2025-07-18T17:14:29.399Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tasks.mdc:0-0
Timestamp: 2025-07-18T17:14:29.399Z
Learning: Applies to scripts/modules/task-manager.js : Extract tasks from PRD documents using AI, create them in the current tag context (defaulting to 'master'), provide clear prompts to guide AI task generation, and validate/clean up AI-generated tasks.

Applied to files:

  • scripts/modules/task-manager/expand-task.js
📚 Learning: 2025-07-18T17:10:02.683Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/dev_workflow.mdc:0-0
Timestamp: 2025-07-18T17:10:02.683Z
Learning: When breaking down complex tasks in Taskmaster, use the `expand_task` command with appropriate flags (`--num`, `--research`, `--force`, `--prompt`) and review generated subtasks for accuracy.

Applied to files:

  • scripts/modules/task-manager/expand-task.js
📚 Learning: 2025-07-18T17:10:12.881Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/dev_workflow.mdc:0-0
Timestamp: 2025-07-18T17:10:12.881Z
Learning: When breaking down complex tasks, use the `expand_task` command with appropriate flags (`--force`, `--research`, `--num`, `--prompt`) and review generated subtasks for accuracy.

Applied to files:

  • scripts/modules/task-manager/expand-task.js
📚 Learning: 2025-07-18T17:09:13.815Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/context_gathering.mdc:0-0
Timestamp: 2025-07-18T17:09:13.815Z
Learning: Commands such as `analyze-complexity`, `expand-task`, `update-task`, and `add-task` should consider adopting the context gathering pattern for improved AI-powered assistance.

Applied to files:

  • scripts/modules/task-manager/expand-task.js
📚 Learning: 2025-07-18T17:14:54.131Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/telemetry.mdc:0-0
Timestamp: 2025-07-18T17:14:54.131Z
Learning: Applies to scripts/modules/task-manager/**/*.js : Functions in scripts/modules/task-manager/ that invoke AI services must call the appropriate AI service function (e.g., generateObjectService), passing commandName and outputType in the params object.

Applied to files:

  • scripts/modules/task-manager/expand-task.js
📚 Learning: 2025-07-18T17:06:57.833Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_services.mdc:0-0
Timestamp: 2025-07-18T17:06:57.833Z
Learning: Applies to scripts/modules/task-manager/*.js : Do not import or call anything from the old `ai-services.js`, `ai-client-factory.js`, or `ai-client-utils.js` files.

Applied to files:

  • scripts/modules/task-manager/expand-task.js
📚 Learning: 2025-07-18T17:09:45.690Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/dependencies.mdc:0-0
Timestamp: 2025-07-18T17:09:45.690Z
Learning: Applies to scripts/modules/dependency-manager.js : Remove references to non-existent tasks during validation

Applied to files:

  • scripts/modules/task-manager/expand-task.js
📚 Learning: 2025-07-18T17:14:54.131Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/telemetry.mdc:0-0
Timestamp: 2025-07-18T17:14:54.131Z
Learning: Applies to scripts/modules/task-manager/**/*.js : Core logic functions in scripts/modules/task-manager/ must return an object that includes aiServiceResponse.telemetryData.

Applied to files:

  • scripts/modules/task-manager/expand-task.js
🧬 Code graph analysis (1)
scripts/modules/task-manager/expand-task.js (2)
scripts/modules/task-manager/scope-adjustment.js (1)
  • generatedSubtasks (380-380)
src/schemas/registry.js (2)
  • COMMAND_SCHEMAS (9-17)
  • COMMAND_SCHEMAS (9-17)

@Crunchyman-ralph Crunchyman-ralph merged commit 738ec51 into next Oct 2, 2025
5 checks passed
@github-actions github-actions bot mentioned this pull request Oct 7, 2025
This was referenced Nov 3, 2025
sfc-gh-dflippo pushed a commit to sfc-gh-dflippo/task-master-ai that referenced this pull request Dec 4, 2025
…es (eyaltoledano#1262)

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: Ben Vargas <ben@example.com>