feat: Add Codex CLI provider with OAuth authentication #1273
Crunchyman-ralph merged 19 commits into eyaltoledano:next from
Conversation
🦋 Changeset detected. Latest commit: eaf5325. The changes in this PR will be included in the next version bump.
Walkthrough

Adds a new Codex CLI provider and related wiring: provider implementation, export, constant, supported models, config-manager getters/validation, CLI flag, tests, docs, and dependency. Introduces a supportsTemperature flag on BaseAIProvider and marks several CLI providers as not supporting temperature. Minor Supabase Zod error-field change.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant U as User
    participant CLI as models command
    participant CFG as Config Manager
    participant TM as Task Model Selector
    participant P as CodexCliProvider
    participant SDK as ai-sdk-provider-codex-cli
    U->>CLI: task-master models --set-main=gpt-5-codex --codex-cli
    CLI->>CFG: getCodexCliSettingsForCommand("models")
    CFG-->>CLI: codexCli settings
    CLI->>TM: setModel(main, "gpt-5-codex", providerHint="codex-cli")
    TM->>TM: validate model against supported-models (codex-cli)
    TM-->>CLI: modelData { provider:"codex-cli", modelId:"gpt-5-codex" }
    CLI->>P: getClient({ commandName:"models" })
    P->>CFG: getCodexCliSettingsForCommand("models")
    CFG-->>P: merged settings
    P->>SDK: createCodexCli(defaultSettings)
    SDK-->>P: client
    P-->>CLI: client ready
    CLI-->>U: Confirmation with provider codex-cli
```
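The getClient step in the diagram above can be sketched as follows. This is a hedged illustration only: `createCodexCli` is stubbed in place of the real ai-sdk-provider-codex-cli export, and the config manager is reduced to an injected dependency.

```javascript
// Sketch of the client-creation flow from the diagram: the provider asks the
// config manager for command-scoped settings, then passes them as defaults to
// the SDK factory. createCodexCli is a local stub, not the real SDK import.
const createCodexCli = (defaultSettings) => ({ defaultSettings });

class CodexCliProvider {
  constructor(configManager) {
    // Dependency injection keeps this testable without the real config layer.
    this.configManager = configManager;
  }

  getClient({ commandName }) {
    const settings = this.configManager.getCodexCliSettingsForCommand(commandName);
    return createCodexCli({ ...settings });
  }
}
```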
```mermaid
sequenceDiagram
    autonumber
    participant APP as Runtime
    participant P as BaseAIProvider (CLI impl)
    participant C as CLI Client
    note right of P: supportsTemperature = false
    APP->>P: generateText({ modelId, temperature: 0.7, ... })
    P->>P: prepare params (omit temperature)
    P->>C: generate({ model: modelId, ... })
    C-->>P: result
    P-->>APP: result
```
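The "prepare params (omit temperature)" step in this diagram could look roughly like the sketch below. Class and method names here are illustrative, not the actual implementation.

```javascript
// Minimal sketch (hypothetical names) of how a supportsTemperature flag on a
// base provider class can strip unsupported params before calling a CLI client.
class BaseAIProvider {
  constructor() {
    // Defaults to true; CLI-backed providers override this to false.
    this.supportsTemperature = true;
  }

  // Drop temperature from the call params when the provider cannot honor it.
  prepareParams(params) {
    if (this.supportsTemperature) return params;
    const { temperature, ...rest } = params;
    return rest;
  }
}

class CodexCliProvider extends BaseAIProvider {
  constructor() {
    super();
    this.supportsTemperature = false;
  }
}
```

With this shape, callers can always pass a temperature and trust the provider to discard it when the underlying CLI has no such knob.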
Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs
Suggested reviewers
Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
Draft... not quite ready yet for review.
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
scripts/modules/config-manager.js (1)
792-814: Remove redundant codex-cli branch? Now that `codex-cli` lives in `providersWithoutApiKeys`, the explicit `if (providerName?.toLowerCase() === 'codex-cli') return true;` looks redundant. Dropping it would keep the guardrail list as the single source of truth.
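The single-source-of-truth approach the reviewer suggests can be sketched like this. The list contents and the key-lookup convention are illustrative assumptions, not the actual config-manager code.

```javascript
// Hedged sketch: keep providers that need no API key in one guardrail list and
// check membership, instead of special-casing 'codex-cli' with an extra branch.
const providersWithoutApiKeys = ['ollama', 'claude-code', 'codex-cli']; // illustrative entries

function isApiKeySet(providerName, env = process.env) {
  const name = providerName?.toLowerCase();
  // Membership in the guardrail list short-circuits the key check.
  if (providersWithoutApiKeys.includes(name)) return true;
  // Hypothetical key lookup for providers that do require keys.
  return Boolean(env[`${name?.toUpperCase()}_API_KEY`]);
}
```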
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
⛔ Files ignored due to path filters (1)
`package-lock.json` is excluded by `!**/package-lock.json`
📒 Files selected for processing (17)
- README.md (1 hunks)
- docs/command-reference.md (1 hunks)
- docs/configuration.md (1 hunks)
- docs/examples/codex-cli-usage.md (1 hunks)
- docs/providers/codex-cli.md (1 hunks)
- package.json (1 hunks)
- scripts/modules/ai-services-unified.js (2 hunks)
- scripts/modules/commands.js (6 hunks)
- scripts/modules/config-manager.js (11 hunks)
- scripts/modules/supported-models.json (1 hunks)
- scripts/modules/task-manager/models.js (2 hunks)
- src/ai-providers/codex-cli.js (1 hunks)
- src/ai-providers/index.js (1 hunks)
- src/constants/providers.js (1 hunks)
- tests/unit/ai-providers/codex-cli.test.js (1 hunks)
- tests/unit/ai-services-unified.test.js (1 hunks)
- tests/unit/config-manager.test.js (4 hunks)
🧰 Additional context used
📓 Path-based instructions (24)
scripts/modules/ai-services-unified.js
📄 CodeRabbit inference engine (.cursor/rules/ai_providers.mdc)
Integrate the new provider module with scripts/modules/ai-services-unified.js by importing it and adding an entry to the PROVIDER_FUNCTIONS map.
scripts/modules/ai-services-unified.js: Centralize all LLM calls through `generateTextService` or `generateObjectService`.
Do not import or call anything from the old `ai-services.js`, `ai-client-factory.js`, or `ai-client-utils.js` files.
Do not fetch AI-specific parameters (model ID, max tokens, temp) using `config-manager.js` getters for the AI call. Pass the `role` instead.
Do not implement fallback or retry logic outside `ai-services-unified.js`.
Do not handle API key resolution outside the service layer (it uses `utils.js` internally).
The telemetryData object returned by ai-services-unified.js must include the fields: timestamp, userId, commandName, modelUsed, providerName, inputTokens, outputTokens, totalTokens, totalCost, and currency.
Files:
scripts/modules/ai-services-unified.js
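A telemetryData object carrying the required fields listed above might look like this. All values are illustrative; the cost arithmetic is a simple sketch, not the real pricing logic.

```javascript
// Illustrative telemetryData object with every field the guideline requires.
const inputTokens = 1200;
const outputTokens = 300;
const telemetryData = {
  timestamp: new Date().toISOString(),
  userId: 'user-123',              // hypothetical user identifier
  commandName: 'models',
  modelUsed: 'gpt-5-codex',
  providerName: 'codex-cli',
  inputTokens,
  outputTokens,
  totalTokens: inputTokens + outputTokens,
  totalCost: 0,                    // OAuth-based CLI usage may not bill per token
  currency: 'USD'
};
```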
scripts/modules/*.js
📄 CodeRabbit inference engine (.cursor/rules/architecture.mdc)
Each module in scripts/modules/ should be focused on a single responsibility, following the modular architecture (e.g., commands.js for CLI command handling, task-manager.js for task data and core logic, dependency-manager.js for dependency management, ui.js for CLI output formatting, ai-services-unified.js for AI service integration, config-manager.js for configuration management, utils.js for utility functions).
scripts/modules/*.js: Export all core functions, helper functions, and utility methods needed by your new function or command from their respective modules. Explicitly review the module's export block to ensure every required dependency is included.
Pass all required parameters to functions you call within your implementation and verify that direct function parameters match their core function counterparts.
Use consistent file naming conventions: 'task_${id.toString().padStart(3, '0')}.txt', use path.join for composing file paths, and use appropriate file extensions (.txt for tasks, .json for data).
Use structured error objects with code and message properties, include clear error messages, and handle both function-specific and file system errors.
Import all silent mode utilities together from 'scripts/modules/utils.js' and always use isSilentMode() to check global silent mode status. Wrap core function calls within direct functions using enableSilentMode() and disableSilentMode() in a try/finally block if the core function might produce console output.
Core functions should check outputFormat === 'text' before displaying UI elements and use internal logging that respects silent mode.
Design functions to accept dependencies as parameters (dependency injection) and avoid hard-coded dependencies that are difficult to mock.
Keep pure logic separate from I/O operations or UI rendering to allow testing the logic without mocking complex dependencies.
When implementing core logic for new features, do so in 'scripts/modules/' before CLI or MCP interfaces, and d...
Files:
scripts/modules/ai-services-unified.js, scripts/modules/commands.js, scripts/modules/config-manager.js
scripts/modules/**
📄 CodeRabbit inference engine (.cursor/rules/dev_workflow.mdc)
When using the MCP server, restart it if core logic in `scripts/modules` or MCP tool/direct function definitions change.
Files:
scripts/modules/ai-services-unified.js, scripts/modules/task-manager/models.js, scripts/modules/supported-models.json, scripts/modules/commands.js, scripts/modules/config-manager.js
scripts/modules/ai-services*.js
📄 CodeRabbit inference engine (.cursor/rules/new_features.mdc)
Ensure AI calls correctly handle and propagate telemetryData as described in 'telemetry.mdc'.
Files:
scripts/modules/ai-services-unified.js
scripts/modules/*
📄 CodeRabbit inference engine (.cursor/rules/tags.mdc)
scripts/modules/*: Every command that reads or writes tasks.json must be tag-aware
All command files must import getCurrentTag from utils.js
Every CLI command that operates on tasks must include the --tag CLI option
All commands must resolve the tag using the pattern: options.tag || getCurrentTag(projectRoot) || 'master'
All commands must find projectRoot with error handling before proceeding
All commands must pass { projectRoot, tag } as context to core functions
MCP direct functions must accept and use a context object containing projectRoot and tag, and pass them to core functions
Do not hard-code tag resolution (e.g., const tag = options.tag || 'master';); always use getCurrentTag
Do not omit the --tag CLI option in commands that operate on tasks
Do not omit the context parameter when calling core functions from commands
Do not call readJSON or writeJSON without passing projectRoot and tag
Files:
scripts/modules/ai-services-unified.js, scripts/modules/supported-models.json, scripts/modules/commands.js, scripts/modules/config-manager.js
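The tag-resolution pattern required above can be sketched as follows, with getCurrentTag stubbed out (the real one lives in scripts/modules/utils.js):

```javascript
// Sketch of the mandated resolution order: explicit --tag option first, then
// the project's current tag, then the 'master' fallback. Do not hard-code
// options.tag || 'master' and skip the getCurrentTag step.
const getCurrentTag = (projectRoot) => null; // stub: pretend no tag is configured

function resolveTag(options, projectRoot) {
  return options.tag || getCurrentTag(projectRoot) || 'master';
}
```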
**/*.js
📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)
**/*.js: Declare and initialize global variables at the top of modules to avoid hoisting issues.
Use proper function declarations to avoid hoisting issues and initialize variables before they are referenced.
Do not reference variables before their declaration in module scope.
Use dynamic imports (import()) to avoid initialization order issues in modules.
Files:
scripts/modules/ai-services-unified.js, tests/unit/config-manager.test.js, src/constants/providers.js, scripts/modules/task-manager/models.js, src/ai-providers/index.js, scripts/modules/commands.js, scripts/modules/config-manager.js, src/ai-providers/codex-cli.js, tests/unit/ai-providers/codex-cli.test.js, tests/unit/ai-services-unified.test.js
tests/{unit,integration,e2e,fixtures}/**/*.js
📄 CodeRabbit inference engine (.cursor/rules/architecture.mdc)
Test files must be organized as follows: unit tests in tests/unit/, integration tests in tests/integration/, end-to-end tests in tests/e2e/, and test fixtures in tests/fixtures/.
Files:
tests/unit/config-manager.test.js, tests/unit/ai-providers/codex-cli.test.js, tests/unit/ai-services-unified.test.js
tests/unit/*.js
📄 CodeRabbit inference engine (.cursor/rules/architecture.mdc)
Each module should have a corresponding unit test file in tests/unit/ that reflects the module structure (one test file per module).
Files:
tests/unit/config-manager.test.js, tests/unit/ai-services-unified.test.js
**/*.{test,spec}.{js,ts,jsx,tsx}
📄 CodeRabbit inference engine (.cursor/rules/git_workflow.mdc)
**/*.{test,spec}.{js,ts,jsx,tsx}: Create a test file and ensure all tests pass when all subtasks are complete; commit tests if added or modified
When all subtasks are complete, run final testing using the appropriate test runner (e.g., npm test, jest, or manual testing)
Files:
tests/unit/config-manager.test.js, tests/unit/ai-providers/codex-cli.test.js, tests/unit/ai-services-unified.test.js
**/*.test.js
📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)
**/*.test.js: Never use asynchronous operations in tests. Make all mocks return synchronous values when possible.
Always mock tests properly based on the way the tested functions are defined and used.
Follow the test file organization: mocks must be set up before importing modules under test, and spies on mocked modules should be set up after imports.
Use fixtures from tests/fixtures/ for consistent sample data across tests.
Always declare mocks before importing the modules being tested in Jest test files.
Use jest.spyOn() after imports to create spies on mock functions and reference these spies in test assertions.
When testing functions with callbacks, get the callback from your mock's call arguments, execute it directly with test inputs, and verify the results.
For ES modules, use jest.mock() before static imports and jest.unstable_mockModule() before dynamic imports to mock dependencies.
Reset mock functions (mockFn.mockReset()) before dynamic imports if they might have been called previously.
When verifying console assertions, assert against the actual arguments passed (single formatted string), not multiple arguments.
Use mock-fs to mock file system operations in tests, and restore the file system after each test.
Mock API calls (e.g., Anthropic/Claude) by mocking the entire module and providing predictable responses.
Set mock environment variables in test setup and restore them after each test.
Maintain test fixtures separate from test logic.
Follow the mock-first-then-import pattern for all Jest mocks.
Do not define mock variables before jest.mock() calls (they won't be accessible due to hoisting).
Use test-specific file paths (e.g., 'test-tasks.json') for all file operations in tests.
Mock readJSON and writeJSON to avoid real file system interactions in tests.
Verify file operations use the correct paths in expect statements.
Use different file paths for each test to avoid test interdependence.
Verify modifications on the in-memory task objects passed to w...
Files:
tests/unit/config-manager.test.js, tests/unit/ai-providers/codex-cli.test.js, tests/unit/ai-services-unified.test.js
tests/unit/**/*.test.js
📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)
tests/unit/**/*.test.js: Unit tests must be located in tests/unit/, test individual functions and utilities in isolation, mock all external dependencies, and keep tests small, focused, and fast.
Do not include actual command execution in unit tests.
Files:
tests/unit/config-manager.test.js, tests/unit/ai-providers/codex-cli.test.js, tests/unit/ai-services-unified.test.js
tests/{unit,integration,e2e}/**/*.test.js
📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)
tests/{unit,integration,e2e}/**/*.test.js: When testing CLI commands built with Commander.js, test the command action handlers directly rather than trying to mock the entire Commander.js chain.
When mocking the Commander.js chain, mock ALL chainable methods (option, argument, action, on, etc.) and return this (or the mock object) from all chainable method mocks.
Explicitly handle all options, including defaults and shorthand flags (e.g., -p for --prompt), and include null/undefined checks in test implementations for parameters that might be optional.
Do not try to use the real action implementation without proper mocking, and do not mock Commander partially—either mock it completely or test the action directly.
Mock the action handlers for CLI commands and verify they're called with correct arguments.
Use sample task fixtures for consistent test data, mock file system operations, and test both success and error paths for task operations.
Mock console output and verify correct formatting in UI function tests. Use flexible assertions like toContain() or toMatch() for formatted output.
Mock chalk functions to return the input text to make testing easier while still verifying correct function calls.
Files:
tests/unit/config-manager.test.js, tests/unit/ai-providers/codex-cli.test.js, tests/unit/ai-services-unified.test.js
**/*.{test,spec}.*
📄 CodeRabbit inference engine (.cursor/rules/test_workflow.mdc)
Test files should follow naming conventions: .test., .spec., or _test. depending on the language
Files:
tests/unit/config-manager.test.js, tests/unit/ai-providers/codex-cli.test.js, tests/unit/ai-services-unified.test.js
tests/{unit,integration,e2e}/**
📄 CodeRabbit inference engine (.cursor/rules/test_workflow.mdc)
Organize test directories by test type (unit, integration, e2e) and mirror source structure where possible
Files:
tests/unit/config-manager.test.js, tests/unit/ai-providers/codex-cli.test.js, tests/unit/ai-services-unified.test.js
docs/**/*
📄 CodeRabbit inference engine (.cursor/rules/new_features.mdc)
Add feature documentation to '/docs' folder, include tagged system usage examples, update command reference documentation, and provide migration notes if relevant.
Files:
docs/providers/codex-cli.md, docs/configuration.md, docs/examples/codex-cli-usage.md, docs/command-reference.md
package.json
📄 CodeRabbit inference engine (.cursor/rules/test_workflow.mdc)
Add and update test scripts in package.json to include test, test:watch, test:coverage, test:unit, test:integration, test:e2e, and test:ci
Files:
package.json
scripts/modules/task-manager/*.js
📄 CodeRabbit inference engine (.cursor/rules/ai_services.mdc)
scripts/modules/task-manager/*.js: Centralize all LLM calls through `generateTextService` or `generateObjectService`.
Do not import or call anything from the old `ai-services.js`, `ai-client-factory.js`, or `ai-client-utils.js` files.
Do not initialize AI clients (Anthropic, Perplexity, etc.) directly within core logic (task-manager/) or MCP direct functions.
Do not fetch AI-specific parameters (model ID, max tokens, temp) using `config-manager.js` getters for the AI call. Pass the `role` instead.
Do not implement fallback or retry logic outside `ai-services-unified.js`.
Do not handle API key resolution outside the service layer (it uses `utils.js` internally).
Determine the appropriate `role` (main, research, fallback) in your core logic and pass it to the service.
Pass the `session` object (received in the `context` parameter, especially from direct function wrappers) to the service call when in MCP context.
Use `generateTextService` and implement robust manual JSON parsing (with Zod validation after parsing) when structured output is needed, as `generateObjectService` has shown unreliability with some providers/schemas.
Be aware of potential reliability issues with `generateObjectService` across different providers and complex schemas. Prefer `generateTextService` + manual parsing as a more robust alternative for structured data needs.
Files in scripts/modules/task-manager/ should each handle a specific action related to task management (e.g., add-task.js, expand-task.js), supporting the tagged task lists system and backward compatibility.
Files:
scripts/modules/task-manager/models.js
scripts/modules/task-manager/*
📄 CodeRabbit inference engine (.cursor/rules/tags.mdc)
scripts/modules/task-manager/*: All core functions in scripts/modules/task-manager/ must accept a context parameter and use it to extract projectRoot and tag
All core functions in scripts/modules/task-manager/ must use readJSON(tasksPath, projectRoot, tag) and writeJSON(tasksPath, data, projectRoot, tag)
Files:
scripts/modules/task-manager/models.js
scripts/modules/task-manager/**/*.js
📄 CodeRabbit inference engine (.cursor/rules/telemetry.mdc)
scripts/modules/task-manager/**/*.js: Functions in scripts/modules/task-manager/ that invoke AI services must call the appropriate AI service function (e.g., generateObjectService), passing commandName and outputType in the params object.
Core logic functions in scripts/modules/task-manager/ must return an object that includes aiServiceResponse.telemetryData.
If the core logic function handles CLI output (outputFormat === 'text' or 'cli'), and aiServiceResponse.telemetryData is available, it must call displayAiUsageSummary(aiServiceResponse.telemetryData, 'cli') from scripts/modules/ui.js.
Do not call AI-specific getters (like `getMainModelId`, `getMainMaxTokens`) from core logic functions in `scripts/modules/task-manager/*`; instead, pass the `role` to the unified AI service.
Files:
scripts/modules/task-manager/models.js
scripts/modules/supported-models.json
📄 CodeRabbit inference engine (.cursor/rules/ai_providers.mdc)
Add a new key for the provider and an array of model objects under it in scripts/modules/supported-models.json, including id, name, allowed_roles, and optionally swe_score, cost_per_1m_tokens, and max_tokens.
Files:
scripts/modules/supported-models.json
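Following the guideline above, a codex-cli entry in supported-models.json might look like the fragment below. The model id comes from the PR's diagrams; the name, allowed_roles, and max_tokens values are illustrative, not the values merged in this PR.

```json
{
  "codex-cli": [
    {
      "id": "gpt-5-codex",
      "name": "GPT-5 Codex (via Codex CLI)",
      "allowed_roles": ["main", "fallback"],
      "max_tokens": 128000
    }
  ]
}
```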
src/ai-providers/*.js
📄 CodeRabbit inference engine (.cursor/rules/ai_providers.mdc)
src/ai-providers/*.js: Create a new provider module file in src/ai-providers/ named &lt;provider-name&gt;.js when adding a new AI provider.
Provider modules must export three functions: generate&lt;ProviderName&gt;Text, stream&lt;ProviderName&gt;Text, and generate&lt;ProviderName&gt;Object.
Provider modules must import the provider's create&lt;ProviderName&gt; function from @ai-sdk/&lt;provider-name&gt;, and import generateText, streamText, generateObject from the core ai package, as well as the log utility from ../../scripts/modules/utils.js.
Implement the generate&lt;ProviderName&gt;Text, stream&lt;ProviderName&gt;Text, and generate&lt;ProviderName&gt;Object functions in provider modules with basic validation and try/catch error handling.
Provider-specific wrappers for Vercel AI SDK functions must be implemented in src/ai-providers/*.js, each file corresponding to a provider.
Files:
src/ai-providers/index.js, src/ai-providers/codex-cli.js
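Shaped by the guideline above, a provider wrapper with basic validation and try/catch might look like this sketch. `coreGenerateText` and `log` are local stubs standing in for the core `ai` package function and the shared logger; the function name follows the generate&lt;ProviderName&gt;Text convention, and the real module would import and export these rather than defining stubs.

```javascript
// Sketch of a provider wrapper: validate inputs, delegate to the core SDK
// call, and log-and-rethrow on failure. Both dependencies are stubbed here.
const log = (level, msg) => console.error(`[${level}] ${msg}`);
const coreGenerateText = async ({ model, prompt }) => ({ text: `echo:${prompt}` });

async function generateCodexCliText(params) {
  if (!params?.modelId) {
    throw new Error('generateCodexCliText requires a modelId');
  }
  try {
    return await coreGenerateText({ model: params.modelId, prompt: params.prompt });
  } catch (err) {
    log('error', `codex-cli generateText failed: ${err.message}`);
    throw err;
  }
}
```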
scripts/modules/commands.js
📄 CodeRabbit inference engine (.cursor/rules/ai_services.mdc)
scripts/modules/commands.js: Centralize all LLM calls through `generateTextService` or `generateObjectService`.
Do not import or call anything from the old `ai-services.js`, `ai-client-factory.js`, or `ai-client-utils.js` files.
Do not fetch AI-specific parameters (model ID, max tokens, temp) using `config-manager.js` getters for the AI call. Pass the `role` instead.
Do not implement fallback or retry logic outside `ai-services-unified.js`.
Do not handle API key resolution outside the service layer (it uses `utils.js` internally).
Determine the appropriate `role` (main, research, fallback) in your core logic and pass it to the service.
Pass the `session` object (received in the `context` parameter, especially from direct function wrappers) to the service call when in MCP context.
Use `generateTextService` and implement robust manual JSON parsing (with Zod validation after parsing) when structured output is needed, as `generateObjectService` has shown unreliability with some providers/schemas.
Be aware of potential reliability issues with `generateObjectService` across different providers and complex schemas. Prefer `generateTextService` + manual parsing as a more robust alternative for structured data needs.
scripts/modules/commands.js: All new user-facing commands should be added to 'scripts/modules/commands.js'.
Use consistent patterns for option naming and help text in CLI commands.
Follow the Commander.js model for subcommand structure in CLI commands.
When using callbacks (like in Commander.js commands), define them separately to allow testing the callback logic independently.
Add help text to the command definition and update 'dev_workflow.mdc' with command reference when adding a new feature.
Follow the established pattern in 'commands.js' for CLI command implementation, using Commander.js for argument parsing, including comprehensive help text and examples, and supporting tagged task context awareness.
Provide clear error messages for common failu...
Files:
scripts/modules/commands.js
scripts/modules/config-manager.js
📄 CodeRabbit inference engine (.cursor/rules/ai_providers.mdc)
scripts/modules/config-manager.js: Update scripts/modules/config-manager.js to add the new provider to MODEL_MAP, ensure it is included in VALID_PROVIDERS, and update API key handling logic.
If adding Ollama or another provider not requiring an API key, add a specific check at the beginning of isApiKeySet and getMcpApiKeyStatus in scripts/modules/config-manager.js to return true immediately for that provider.
scripts/modules/config-manager.js: Import and use specific getters from `scripts/modules/config-manager.js` to access configuration values needed for application logic; pass the `explicitRoot` parameter to getters if calling from MCP direct functions.
Use `isApiKeySet(providerName, session)` from `config-manager.js` to check if a provider's key is available before attempting an AI call.
Handle potential `ConfigurationError` if the `.taskmasterconfig` file is missing or invalid when accessed via `getConfig`.
Files:
scripts/modules/config-manager.js
tests/unit/ai-providers/*.test.js
📄 CodeRabbit inference engine (.cursor/rules/ai_providers.mdc)
Create unit tests for the new provider in tests/unit/ai-providers/&lt;provider-name&gt;.test.js, mocking @ai-sdk/&lt;provider-name&gt; and core ai module functions, and testing all exported functions for correct behavior and error handling.
Files:
tests/unit/ai-providers/codex-cli.test.js
🧠 Learnings (16)
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to scripts/modules/ai-services-unified.js : Integrate the new provider module with scripts/modules/ai-services-unified.js by importing it and adding an entry to the PROVIDER_FUNCTIONS map.
Applied to files:
scripts/modules/ai-services-unified.js
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to scripts/modules/config-manager.js : Update scripts/modules/config-manager.js to add the new provider to MODEL_MAP, ensure it is included in VALID_PROVIDERS, and update API key handling logic.
Applied to files:
src/constants/providers.js
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Update relevant documentation files (such as README.md) to mention the new provider and its configuration.
Applied to files:
docs/configuration.md
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to scripts/modules/supported-models.json : Add a new key for the provider and an array of model objects under it in scripts/modules/supported-models.json, including id, name, allowed_roles, and optionally swe_score, cost_per_1m_tokens, and max_tokens.
Applied to files:
scripts/modules/supported-models.json
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to src/ai-providers/*.js : Create a new provider module file in src/ai-providers/ named <provider-name>.js when adding a new AI provider.
Applied to files:
src/ai-providers/index.js
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to src/ai-providers/*.js : Provider modules must import the provider's create<ProviderName> function from ai-sdk/<provider-name>, and import generateText, streamText, generateObject from the core ai package, as well as the log utility from ../../scripts/modules/utils.js.
Applied to files:
src/ai-providers/index.js, src/ai-providers/codex-cli.js, tests/unit/ai-providers/codex-cli.test.js, tests/unit/ai-services-unified.test.js
📚 Learning: 2025-07-31T22:08:16.039Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/taskmaster.mdc:0-0
Timestamp: 2025-07-31T22:08:16.039Z
Learning: Applies to .taskmaster/config.json : All other Taskmaster settings (model choice, max tokens, temperature, log level, custom endpoints) are managed in .taskmaster/config.json via the task-master models command or models MCP tool.
Applied to files:
scripts/modules/commands.js, docs/command-reference.md
📚 Learning: 2025-07-18T17:10:12.881Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/dev_workflow.mdc:0-0
Timestamp: 2025-07-18T17:10:12.881Z
Learning: For CLI usage, install Taskmaster globally with `npm install -g task-master-ai` or use locally via `npx task-master-ai ...`.
Applied to files:
scripts/modules/commands.js
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to tests/unit/ai-providers/*.test.js : Create unit tests for the new provider in tests/unit/ai-providers/<provider-name>.test.js, mocking ai-sdk/<provider-name> and core ai module functions, and testing all exported functions for correct behavior and error handling.
Applied to files:
tests/unit/ai-providers/codex-cli.test.js, tests/unit/ai-services-unified.test.js
📚 Learning: 2025-07-18T17:16:13.793Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tests.mdc:0-0
Timestamp: 2025-07-18T17:16:13.793Z
Learning: Applies to **/*.test.js : Do not import real AI service clients in tests; create fully mocked versions that return predictable responses.
Applied to files:
tests/unit/ai-providers/codex-cli.test.js, tests/unit/ai-services-unified.test.js
📚 Learning: 2025-07-18T17:16:13.793Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tests.mdc:0-0
Timestamp: 2025-07-18T17:16:13.793Z
Learning: Applies to tests/{unit,integration,e2e}/**/*.test.js : Mock the action handlers for CLI commands and verify they're called with correct arguments.
Applied to files:
tests/unit/ai-providers/codex-cli.test.js
📚 Learning: 2025-07-18T17:16:13.793Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tests.mdc:0-0
Timestamp: 2025-07-18T17:16:13.793Z
Learning: Applies to **/*.test.js : Mock API calls (e.g., Anthropic/Claude) by mocking the entire module and providing predictable responses.
Applied to files:
tests/unit/ai-providers/codex-cli.test.js
📚 Learning: 2025-09-29T13:33:46.952Z
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1246
File: src/ai-providers/claude-code.js:40-42
Timestamp: 2025-09-29T13:33:46.952Z
Learning: Claude Code provider should use CLAUDE_CODE_API_KEY environment variable instead of automatically picking up ANTHROPIC_API_KEY to avoid conflicts, since the ai-sdk-provider-claude-code package follows standard Anthropic patterns and prioritizes API keys over OAuth authentication, which can cause unexpected API charges.
Applied to files:
README.md
📚 Learning: 2025-07-18T17:06:57.833Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_services.mdc:0-0
Timestamp: 2025-07-18T17:06:57.833Z
Learning: Ensure `.taskmasterconfig` exists and has valid provider/model IDs for the roles you intend to use (manage via `task-master models --setup`).
Applied to files:
docs/command-reference.md
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to src/ai-providers/*.js : Implement generate<ProviderName>Text, stream<ProviderName>Text, and generate<ProviderName>Object functions in provider modules with basic validation and try/catch error handling.
Applied to files:
tests/unit/ai-services-unified.test.js
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to src/ai-providers/*.js : Provider modules must export three functions: generate<ProviderName>Text, stream<ProviderName>Text, and generate<ProviderName>Object.
Applied to files:
tests/unit/ai-services-unified.test.js
🧬 Code graph analysis (7)
scripts/modules/ai-services-unified.js (1)
src/ai-providers/codex-cli.js (1)
CodexCliProvider (18-93)
tests/unit/config-manager.test.js (1)
tests/unit/config-manager.test.mjs (1)
DEFAULT_CONFIG (61-90)
scripts/modules/task-manager/models.js (1)
src/constants/providers.js (2)
CUSTOM_PROVIDERS (18-29), CUSTOM_PROVIDERS (18-29)
scripts/modules/commands.js (1)
scripts/modules/task-manager/models.js (4)
options (149-149), options (277-277), options (368-368), options (656-656)
scripts/modules/config-manager.js (2)
src/constants/commands.js (2)
AI_COMMAND_NAMES (7-17), AI_COMMAND_NAMES (7-17)
src/constants/providers.js (2)
CUSTOM_PROVIDERS (18-29), CUSTOM_PROVIDERS (18-29)
src/ai-providers/codex-cli.js (2)
src/ai-providers/base-provider.js (1)
BaseAIProvider (16-362)
scripts/modules/config-manager.js (3)
settings (445-445), settings (455-455), settings (471-471)
tests/unit/ai-providers/codex-cli.test.js (1)
src/ai-providers/codex-cli.js (1)
CodexCliProvider (18-93)
🪛 markdownlint-cli2 (0.18.1)
docs/providers/codex-cli.md
92-92: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
347-347: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
352-352: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
357-357: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
371-371: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
376-376: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
382-382: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
397-397: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
402-402: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
407-407: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
409-409: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
424-424: Bare URL used
(MD034, no-bare-urls)
441-441: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
451-451: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
463-463: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
472-472: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
481-481: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
docs/configuration.md
443-443: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
448-448: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
450-450: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
455-455: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
458-458: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
462-462: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
518-518: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
540-540: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
🔇 Additional comments (13)
README.md (1)
91-93: LGTM! Documentation accurately reflects Codex CLI OAuth support. The updates correctly position Codex CLI alongside Claude Code as an alternative authentication method that doesn't require API keys, maintaining consistency with the existing documentation pattern.
src/ai-providers/index.js (1)
20-20: LGTM! Export follows established pattern. The CodexCliProvider export is consistent with existing provider exports and enables proper integration with the unified AI services layer.
src/constants/providers.js (1)
27-28: LGTM! Provider constant follows established pattern. The CODEX_CLI constant addition is consistent with existing custom provider definitions and enables proper provider identification across the codebase.
scripts/modules/ai-services-unified.js (2)
44-44: LGTM! Import follows established pattern. The CodexCliProvider import is consistent with existing provider imports and properly sources from the central provider index.
74-74: LGTM! Provider registration follows established pattern. The 'codex-cli' provider registration is consistent with existing providers and enables proper integration with the unified service runner.
docs/command-reference.md (1)
386-390: LGTM! Documentation examples are clear and consistent. The Codex CLI model configuration examples follow the established pattern and properly demonstrate both available models (gpt-5 and gpt-5-codex) with appropriate OAuth usage notes.
tests/unit/config-manager.test.js (2)
152-152: LGTM! Test configuration updated to include Codex CLI. The addition of `codexCli: {}` to DEFAULT_CONFIG ensures tests accurately reflect the new configuration structure.
647-647: LGTM! Test expectations properly updated. The `expectedMergedConfig` updates across all test scenarios correctly include the codexCli section, maintaining test accuracy and consistency with the new configuration structure.
Also applies to: 691-691, 801-801
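To illustrate why the empty `codexCli: {}` default matters, here is a minimal sketch of a defaults-plus-user merge. The shapes and the `mergeConfig`/`allowNpx` names are hypothetical, not the project's actual config-manager API; the point is that seeding the defaults guarantees the `codexCli` key always exists after merging.

```javascript
// Hypothetical shapes: an empty codexCli section in the defaults
// guarantees the key is present in every merged config.
const DEFAULT_CONFIG = {
	models: { main: 'gpt-5-codex' },
	codexCli: {}
};

function mergeConfig(defaults, user) {
	return {
		...defaults,
		...user,
		// merge the provider section so user overrides layer on top of defaults
		codexCli: { ...defaults.codexCli, ...(user.codexCli ?? {}) }
	};
}

const merged = mergeConfig(DEFAULT_CONFIG, { codexCli: { allowNpx: true } });
```

With no user overrides, `merged.codexCli` comes back as `{}` rather than `undefined`, which is exactly what the updated test expectations assert.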
package.json (1)
74-74: Verify ai-sdk-provider-codex-cli compatibility with AI SDK v5
`ai-sdk-provider-codex-cli@^0.3.0` is only tagged as `latest` (no `ai-sdk-v5` tag) and doesn't declare `ai` as a peerDependency; confirm compatibility with `ai@^5.0.51` via its docs or integration tests.
scripts/modules/commands.js (1)
3586-3741: Codex CLI flag handled cleanly. Nice job threading `--codex-cli` through the models command: it mirrors the existing provider flags, participates in mutual-exclusion checks, and flows correctly into the `providerHint` mapping. This keeps the command surface predictable.
scripts/modules/task-manager/models.js (1)
542-573: Parity with other CLI providers looks solid. The codex-cli branch mirrors the existing Gemini CLI logic, including the warning path when the model is missing from the curated list. That keeps the configuration UX consistent and predictable.
src/ai-providers/codex-cli.js (1)
36-91: Provider wiring is on point. Appreciate the cached availability check and the focused ENOENT handling; this keeps the diagnostics actionable while letting OAuth-first flows proceed. The settings merge is tight, too.
scripts/modules/config-manager.js (1)
374-448: Codex CLI settings pipeline fits right in. The Zod schema mirrors the CLI surface area nicely, and reusing the command-specific merge pattern keeps behavior aligned with Claude/Grok CLI. Thanks for maintaining consistency across providers.
Post-Review Fixes and Enhancements

Several critical fixes and enhancements were added during code review:

Critical Fixes
- Provider initialization fix (
- Temperature parameter handling (

Feature Enhancements
- Research role support (

Test Improvements
- Integration tests (
- Mock consistency (

Release Management
- Changeset (

All fixes have been validated with formatting checks and unit tests. The provider now works correctly in all scenarios and follows Task Master's patterns for CLI-backed providers.
Actionable comments posted: 1
♻️ Duplicate comments (1)
scripts/modules/supported-models.json (1)
72-89: Token limits need verification (per previous review). The codex-cli section correctly follows the provider pattern with null pricing and support for all three roles. However, as flagged in a previous review comment, the token limits may be inaccurate.
Per previous review findings:
- Current `max_tokens: 128000` may not match actual Codex CLI limits
- Evidence suggests context_window: 200K and max_output: 100K
- Should add `context_window` field to document input limits separately

Please see the existing review comment on this file (lines 72-89) for detailed verification steps and proposed fixes.
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (6)
.changeset/brave-lions-sing.md (1 hunks)
docs/providers/codex-cli.md (1 hunks)
scripts/modules/ai-services-unified.js (3 hunks)
scripts/modules/supported-models.json (3 hunks)
src/ai-providers/codex-cli.js (1 hunks)
tests/unit/ai-services-unified.test.js (4 hunks)
🧰 Additional context used
📓 Path-based instructions (19)
scripts/modules/ai-services-unified.js
📄 CodeRabbit inference engine (.cursor/rules/ai_providers.mdc)
Integrate the new provider module with scripts/modules/ai-services-unified.js by importing it and adding an entry to the PROVIDER_FUNCTIONS map.
scripts/modules/ai-services-unified.js: Centralize all LLM calls through `generateTextService` or `generateObjectService`.
Do not import or call anything from the old `ai-services.js`, `ai-client-factory.js`, or `ai-client-utils.js` files.
Do not fetch AI-specific parameters (model ID, max tokens, temp) using `config-manager.js` getters for the AI call. Pass the `role` instead.
Do not implement fallback or retry logic outsideai-services-unified.js.
Do not handle API key resolution outside the service layer (it uses `utils.js` internally).
The telemetryData object returned by ai-services-unified.js must include the fields: timestamp, userId, commandName, modelUsed, providerName, inputTokens, outputTokens, totalTokens, totalCost, and currency.
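The required telemetry fields above can be sketched as a plain object. The concrete values here are placeholders for illustration only; the rule is just that every listed field is present and the token totals are internally consistent.

```javascript
// Hypothetical telemetryData instance showing the required field set
// (values are placeholders, not real usage data).
const telemetryData = {
	timestamp: new Date().toISOString(),
	userId: 'user-123',
	commandName: 'models',
	modelUsed: 'gpt-5-codex',
	providerName: 'codex-cli',
	inputTokens: 1200,
	outputTokens: 350,
	totalTokens: 1550, // must equal inputTokens + outputTokens
	totalCost: 0, // CLI/OAuth providers have null pricing, so cost is zero
	currency: 'USD'
};
```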
Files:
scripts/modules/ai-services-unified.js
scripts/modules/*.js
📄 CodeRabbit inference engine (.cursor/rules/architecture.mdc)
Each module in scripts/modules/ should be focused on a single responsibility, following the modular architecture (e.g., commands.js for CLI command handling, task-manager.js for task data and core logic, dependency-manager.js for dependency management, ui.js for CLI output formatting, ai-services-unified.js for AI service integration, config-manager.js for configuration management, utils.js for utility functions).
scripts/modules/*.js: Export all core functions, helper functions, and utility methods needed by your new function or command from their respective modules. Explicitly review the module's export block to ensure every required dependency is included.
Pass all required parameters to functions you call within your implementation and verify that direct function parameters match their core function counterparts.
Use consistent file naming conventions: 'task_${id.toString().padStart(3, '0')}.txt', use path.join for composing file paths, and use appropriate file extensions (.txt for tasks, .json for data).
Use structured error objects with code and message properties, include clear error messages, and handle both function-specific and file system errors.
Import all silent mode utilities together from 'scripts/modules/utils.js' and always use isSilentMode() to check global silent mode status. Wrap core function calls within direct functions using enableSilentMode() and disableSilentMode() in a try/finally block if the core function might produce console output.
Core functions should check outputFormat === 'text' before displaying UI elements and use internal logging that respects silent mode.
Design functions to accept dependencies as parameters (dependency injection) and avoid hard-coded dependencies that are difficult to mock.
Keep pure logic separate from I/O operations or UI rendering to allow testing the logic without mocking complex dependencies.
When implementing core logic for new features, do so in 'scripts/modules/' before CLI or MCP interfaces, and d...
Files:
scripts/modules/ai-services-unified.js
scripts/modules/**
📄 CodeRabbit inference engine (.cursor/rules/dev_workflow.mdc)
When using the MCP server, restart it if core logic in `scripts/modules` or MCP tool/direct function definitions change.
Files:
scripts/modules/ai-services-unified.js
scripts/modules/supported-models.json
scripts/modules/ai-services*.js
📄 CodeRabbit inference engine (.cursor/rules/new_features.mdc)
Ensure AI calls correctly handle and propagate telemetryData as described in 'telemetry.mdc'.
Files:
scripts/modules/ai-services-unified.js
scripts/modules/*
📄 CodeRabbit inference engine (.cursor/rules/tags.mdc)
scripts/modules/*: Every command that reads or writes tasks.json must be tag-aware
All command files must import getCurrentTag from utils.js
Every CLI command that operates on tasks must include the --tag CLI option
All commands must resolve the tag using the pattern: options.tag || getCurrentTag(projectRoot) || 'master'
All commands must find projectRoot with error handling before proceeding
All commands must pass { projectRoot, tag } as context to core functions
MCP direct functions must accept and use a context object containing projectRoot and tag, and pass them to core functions
Do not hard-code tag resolution (e.g., const tag = options.tag || 'master';); always use getCurrentTag
Do not omit the --tag CLI option in commands that operate on tasks
Do not omit the context parameter when calling core functions from commands
Do not call readJSON or writeJSON without passing projectRoot and tag
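The mandated tag-resolution pattern from the rules above can be shown in a few lines. The `getCurrentTag` stub here is illustrative only; the real implementation lives in `utils.js` and reads project state.

```javascript
// Illustrative stub: the real getCurrentTag comes from scripts/modules/utils.js.
function getCurrentTag(projectRoot) {
	return null; // stubbed: pretend no tag is active in this project
}

// The required resolution order: explicit --tag flag, then current tag, then 'master'.
function resolveTag(options, projectRoot) {
	return options.tag || getCurrentTag(projectRoot) || 'master';
}
```

An explicit `--tag` always wins, and hard-coding `options.tag || 'master'` would skip the `getCurrentTag` step the rule forbids omitting.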
Files:
scripts/modules/ai-services-unified.js
scripts/modules/supported-models.json
**/*.js
📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)
**/*.js: Declare and initialize global variables at the top of modules to avoid hoisting issues.
Use proper function declarations to avoid hoisting issues and initialize variables before they are referenced.
Do not reference variables before their declaration in module scope.
Use dynamic imports (import()) to avoid initialization order issues in modules.
Files:
scripts/modules/ai-services-unified.js
src/ai-providers/codex-cli.js
tests/unit/ai-services-unified.test.js
.changeset/*.md
📄 CodeRabbit inference engine (.cursor/rules/changeset.mdc)
.changeset/*.md: When running `npm run changeset` or `npx changeset add`, provide a concise summary of the changes for the `CHANGELOG.md` in imperative mood, typically a single line, and not a detailed Git commit message.
The changeset summary should be user-facing, describing what changed in the released version that is relevant to users or consumers of the package.
Do not use your detailed Git commit message body as the changeset summary.
Files:
.changeset/brave-lions-sing.md
.changeset/*
📄 CodeRabbit inference engine (.cursor/rules/new_features.mdc)
Create appropriate changesets for new features, use semantic versioning, include tagged system information in release notes, and document breaking changes if any.
Files:
.changeset/brave-lions-sing.md
src/ai-providers/*.js
📄 CodeRabbit inference engine (.cursor/rules/ai_providers.mdc)
src/ai-providers/*.js: Create a new provider module file in src/ai-providers/ named &lt;provider-name&gt;.js when adding a new AI provider.
Provider modules must export three functions: generate&lt;ProviderName&gt;Text, stream&lt;ProviderName&gt;Text, and generate&lt;ProviderName&gt;Object.
Provider modules must import the provider's create&lt;ProviderName&gt; function from @ai-sdk/&lt;provider-name&gt;, and import generateText, streamText, generateObject from the core ai package, as well as the log utility from ../../scripts/modules/utils.js.
Implement generate&lt;ProviderName&gt;Text, stream&lt;ProviderName&gt;Text, and generate&lt;ProviderName&gt;Object functions in provider modules with basic validation and try/catch error handling.
Provider-specific wrappers for Vercel AI SDK functions must be implemented in src/ai-providers/*.js, each file corresponding to a provider.
Files:
src/ai-providers/codex-cli.js
tests/{unit,integration,e2e,fixtures}/**/*.js
📄 CodeRabbit inference engine (.cursor/rules/architecture.mdc)
Test files must be organized as follows: unit tests in tests/unit/, integration tests in tests/integration/, end-to-end tests in tests/e2e/, and test fixtures in tests/fixtures/.
Files:
tests/unit/ai-services-unified.test.js
tests/unit/*.js
📄 CodeRabbit inference engine (.cursor/rules/architecture.mdc)
Each module should have a corresponding unit test file in tests/unit/ that reflects the module structure (one test file per module).
Files:
tests/unit/ai-services-unified.test.js
**/*.{test,spec}.{js,ts,jsx,tsx}
📄 CodeRabbit inference engine (.cursor/rules/git_workflow.mdc)
**/*.{test,spec}.{js,ts,jsx,tsx}: Create a test file and ensure all tests pass when all subtasks are complete; commit tests if added or modified
When all subtasks are complete, run final testing using the appropriate test runner (e.g., npm test, jest, or manual testing)
Files:
tests/unit/ai-services-unified.test.js
**/*.test.js
📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)
**/*.test.js: Never use asynchronous operations in tests. Make all mocks return synchronous values when possible.
Always mock tests properly based on the way the tested functions are defined and used.
Follow the test file organization: mocks must be set up before importing modules under test, and spies on mocked modules should be set up after imports.
Use fixtures from tests/fixtures/ for consistent sample data across tests.
Always declare mocks before importing the modules being tested in Jest test files.
Use jest.spyOn() after imports to create spies on mock functions and reference these spies in test assertions.
When testing functions with callbacks, get the callback from your mock's call arguments, execute it directly with test inputs, and verify the results.
For ES modules, use jest.mock() before static imports and jest.unstable_mockModule() before dynamic imports to mock dependencies.
Reset mock functions (mockFn.mockReset()) before dynamic imports if they might have been called previously.
When verifying console assertions, assert against the actual arguments passed (single formatted string), not multiple arguments.
Use mock-fs to mock file system operations in tests, and restore the file system after each test.
Mock API calls (e.g., Anthropic/Claude) by mocking the entire module and providing predictable responses.
Set mock environment variables in test setup and restore them after each test.
Maintain test fixtures separate from test logic.
Follow the mock-first-then-import pattern for all Jest mocks.
Do not define mock variables before jest.mock() calls (they won't be accessible due to hoisting).
Use test-specific file paths (e.g., 'test-tasks.json') for all file operations in tests.
Mock readJSON and writeJSON to avoid real file system interactions in tests.
Verify file operations use the correct paths in expect statements.
Use different file paths for each test to avoid test interdependence.
Verify modifications on the in-memory task objects passed to w...
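The callback-extraction pattern named in the rules above ("get the callback from your mock's call arguments, execute it directly with test inputs") can be shown without Jest using a hand-rolled mock. The `createMockFn` and `registerHandler` names here are illustrative, not from the codebase.

```javascript
// Minimal hand-rolled mock that records every call's arguments.
function createMockFn() {
	const calls = [];
	const fn = (...args) => {
		calls.push(args);
	};
	fn.calls = calls;
	return fn;
}

// Code under test: registers a callback on a dependency (hypothetical example).
function registerHandler(dep) {
	dep.on('done', (result) => result * 2);
}

const mockDep = { on: createMockFn() };
registerHandler(mockDep);

// Pull the registered callback out of the mock's recorded arguments
// and execute it directly with a test input.
const [event, callback] = mockDep.on.calls[0];
```

Running the extracted `callback` directly avoids firing real events, which keeps the test synchronous as the rules require.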
Files:
tests/unit/ai-services-unified.test.js
tests/unit/**/*.test.js
📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)
tests/unit/**/*.test.js: Unit tests must be located in tests/unit/, test individual functions and utilities in isolation, mock all external dependencies, and keep tests small, focused, and fast.
Do not include actual command execution in unit tests.
Files:
tests/unit/ai-services-unified.test.js
tests/{unit,integration,e2e}/**/*.test.js
📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)
tests/{unit,integration,e2e}/**/*.test.js: When testing CLI commands built with Commander.js, test the command action handlers directly rather than trying to mock the entire Commander.js chain.
When mocking the Commander.js chain, mock ALL chainable methods (option, argument, action, on, etc.) and return this (or the mock object) from all chainable method mocks.
Explicitly handle all options, including defaults and shorthand flags (e.g., -p for --prompt), and include null/undefined checks in test implementations for parameters that might be optional.
Do not try to use the real action implementation without proper mocking, and do not mock Commander partially—either mock it completely or test the action directly.
Mock the action handlers for CLI commands and verify they're called with correct arguments.
Use sample task fixtures for consistent test data, mock file system operations, and test both success and error paths for task operations.
Mock console output and verify correct formatting in UI function tests. Use flexible assertions like toContain() or toMatch() for formatted output.
Mock chalk functions to return the input text to make testing easier while still verifying correct function calls.
Files:
tests/unit/ai-services-unified.test.js
**/*.{test,spec}.*
📄 CodeRabbit inference engine (.cursor/rules/test_workflow.mdc)
Test files should follow naming conventions: *.test.*, *.spec.*, or *_test.* depending on the language
Files:
tests/unit/ai-services-unified.test.js
tests/{unit,integration,e2e}/**
📄 CodeRabbit inference engine (.cursor/rules/test_workflow.mdc)
Organize test directories by test type (unit, integration, e2e) and mirror source structure where possible
Files:
tests/unit/ai-services-unified.test.js
docs/**/*
📄 CodeRabbit inference engine (.cursor/rules/new_features.mdc)
Add feature documentation to '/docs' folder, include tagged system usage examples, update command reference documentation, and provide migration notes if relevant.
Files:
docs/providers/codex-cli.md
scripts/modules/supported-models.json
📄 CodeRabbit inference engine (.cursor/rules/ai_providers.mdc)
Add a new key for the provider and an array of model objects under it in scripts/modules/supported-models.json, including id, name, allowed_roles, and optionally swe_score, cost_per_1m_tokens, and max_tokens.
Files:
scripts/modules/supported-models.json
🧠 Learnings (7)
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to scripts/modules/ai-services-unified.js : Integrate the new provider module with scripts/modules/ai-services-unified.js by importing it and adding an entry to the PROVIDER_FUNCTIONS map.
Applied to files:
scripts/modules/ai-services-unified.js
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to src/ai-providers/*.js : Provider modules must import the provider's create<ProviderName> function from ai-sdk/<provider-name>, and import generateText, streamText, generateObject from the core ai package, as well as the log utility from ../../scripts/modules/utils.js.
Applied to files:
src/ai-providers/codex-cli.js
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to tests/unit/ai-providers/*.test.js : Create unit tests for the new provider in tests/unit/ai-providers/<provider-name>.test.js, mocking ai-sdk/<provider-name> and core ai module functions, and testing all exported functions for correct behavior and error handling.
Applied to files:
tests/unit/ai-services-unified.test.js
📚 Learning: 2025-07-18T17:16:13.793Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tests.mdc:0-0
Timestamp: 2025-07-18T17:16:13.793Z
Learning: Applies to **/*.test.js : Do not import real AI service clients in tests; create fully mocked versions that return predictable responses.
Applied to files:
tests/unit/ai-services-unified.test.js
📚 Learning: 2025-07-18T17:16:13.793Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tests.mdc:0-0
Timestamp: 2025-07-18T17:16:13.793Z
Learning: Applies to **/*.test.js : Mock API calls (e.g., Anthropic/Claude) by mocking the entire module and providing predictable responses.
Applied to files:
tests/unit/ai-services-unified.test.js
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to scripts/modules/supported-models.json : Add a new key for the provider and an array of model objects under it in scripts/modules/supported-models.json, including id, name, allowed_roles, and optionally swe_score, cost_per_1m_tokens, and max_tokens.
Applied to files:
scripts/modules/supported-models.json
📚 Learning: 2025-07-21T14:14:48.694Z
Learnt from: rtmcrc
PR: eyaltoledano/claude-task-master#933
File: scripts/modules/supported-models.json:238-238
Timestamp: 2025-07-21T14:14:48.694Z
Learning: Model version updates in scripts/modules/supported-models.json may be included in feature PRs if they provide practical improvements like reduced error rates, even if not directly related to the main feature being implemented.
Applied to files:
scripts/modules/supported-models.json
🧬 Code graph analysis (3)
scripts/modules/ai-services-unified.js (2)
src/ai-providers/codex-cli.js (1)
CodexCliProvider (18-103)
scripts/modules/config-manager.js (1)
providerName(704-704)
src/ai-providers/codex-cli.js (2)
src/ai-providers/base-provider.js (1)
BaseAIProvider (16-362)
scripts/modules/config-manager.js (3)
settings (445-445)
settings (455-455)
settings (471-471)
tests/unit/ai-services-unified.test.js (1)
scripts/modules/ai-services-unified.js (1)
result(430-430)
🪛 markdownlint-cli2 (0.18.1)
.changeset/brave-lions-sing.md
5-5: First line in a file should be a top-level heading
(MD041, first-line-heading, first-line-h1)
docs/providers/codex-cli.md
92-92: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
349-349: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
354-354: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
359-359: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
373-373: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
378-378: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
384-384: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
399-399: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
404-404: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
409-409: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
411-411: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
426-426: Bare URL used
(MD034, no-bare-urls)
443-443: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
453-453: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
465-465: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
474-474: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
483-483: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
🔇 Additional comments (14)
src/ai-providers/codex-cli.js (6)
1-17: LGTM! The imports and module-level cache variables are properly structured:
- All required dependencies are correctly imported following the provider pattern
- Module-level cache variables are declared at the top to avoid hoisting issues
- The file header clearly documents OAuth-first authentication with optional API key support
18-26: LGTM! The constructor properly initializes the Codex CLI provider:
- Correctly invokes the base class constructor
- Sets appropriate provider metadata (name, JSON schema support, supported models)
- Restricts models to OAuth subscription offerings (gpt-5 and gpt-5-codex)
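The constructor behavior described above can be sketched as follows. The field names (`supportsStructuredOutputs`, `supportedModels`) and the trivial base class are assumptions for illustration; the real `BaseAIProvider` in `src/ai-providers/base-provider.js` is considerably larger.

```javascript
// Simplified stand-in for the real base class (illustrative only).
class BaseAIProvider {
	constructor() {
		this.name = 'base';
	}
}

// Hedged sketch of the Codex CLI constructor: call super, set provider
// metadata, and restrict models to the OAuth subscription offerings.
class CodexCliProvider extends BaseAIProvider {
	constructor() {
		super();
		this.name = 'Codex CLI';
		this.supportsStructuredOutputs = true; // JSON schema support
		this.supportedModels = ['gpt-5', 'gpt-5-codex'];
	}
}

const provider = new CodexCliProvider();
```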
32-34: LGTM! The method correctly implements OAuth-first authentication by returning `false`, allowing the provider to operate without an API key when using OAuth credentials.
42-44: LGTM! The method correctly implements the required interface by returning `'OPENAI_API_KEY'`. Combined with `isRequiredApiKey()` returning `false`, this enables optional API key injection while satisfying the base class contract.
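The OAuth-first key contract described here is small enough to sketch directly. The class name is hypothetical; the two method names match the review discussion.

```javascript
// Sketch of the optional-API-key contract: a key name is still reported
// for downstream plumbing, but the provider declares the key optional.
class CodexCliKeyContract {
	isRequiredApiKey() {
		return false; // OAuth credentials suffice; no key is mandatory
	}
	getRequiredApiKeyName() {
		// Satisfies the base-class contract; the key is only injected
		// into the CLI environment when the user actually sets it.
		return 'OPENAI_API_KEY';
	}
}
```

This pairing is what lets the unified service layer skip key validation for codex-cli while still knowing which env variable to forward when present.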
49-66: LGTM! The validation implementation provides helpful guidance without blocking execution:
- Correctly skips validation in test mode to avoid side effects
- Uses cached checking to minimize overhead
- Provides clear installation/configuration guidance via warning logs
- Includes appropriate timeout (1000ms) to prevent hanging
- Overrides base class validation to allow OAuth-first authentication
75-102: LGTM! The client factory implementation correctly handles OAuth-first authentication with optional API key support:
- Retrieves command-specific settings from config-manager for fine-grained control
- Only injects API key when explicitly provided (preserving OAuth-first approach)
- Provides helpful error messages for CLI unavailability
- Uses consistent error handling via `this.handleError`
.changeset/brave-lions-sing.md (1)
1-11: LGTM! The changeset correctly documents the feature addition:
- Uses appropriate semantic versioning (minor for new feature)
- Provides concise, user-facing description of changes
- Bullet points clearly highlight key capabilities
Note: The markdownlint MD041 warning about missing top-level heading is a false positive for `.changeset` files, which have their own specific format.
scripts/modules/ai-services-unified.js (2)
44-44: LGTM! The Codex CLI provider is correctly integrated:
- Import added to the provider class list
- Registry entry follows the established pattern for CLI-backed providers
- Provider instance properly initialized
Also applies to: 74-74
666-680: LGTM! The conditional temperature handling correctly prevents SDK warnings:
- Identifies CLI-backed providers (codex-cli, claude-code) that don't support temperature
- Only includes temperature parameter when both provider supports it and value is numeric
- Uses clean conditional spread pattern to avoid polluting callParams with unsupported options
This aligns with the critical fix described in the PR summary.
scripts/modules/supported-models.json (1)
scripts/modules/supported-models.json (1)
10-10: LGTM! The addition of the `"research"` role to both Claude Sonnet 4 and Claude Opus 4 models enables them for research-oriented workflows, aligning with the PR's feature enhancements.
Also applies to: 21-21
tests/unit/ai-services-unified.test.js (4)
161-177: LGTM! The mock provider instances are correctly structured:
- Implement all required provider methods (generateText, streamText, generateObject)
- Correctly return appropriate API key names via `getRequiredApiKeyName`
- Properly indicate optional API keys via `isRequiredApiKey` returning `false`
- Follow the same pattern as other CLI-backed provider mocks
125-125: LGTM! The `providersWithoutApiKeys` list correctly includes `'codex-cli'`, ensuring test behavior matches production where Codex CLI operates without required API keys (OAuth-first).
This addresses the concern raised in the previous review.
234-234: LGTM! The provider mock wiring correctly maps provider classes to their respective mock instances, following the established pattern for other providers in the test suite.
Also applies to: 242-242
826-932: LGTM! The new test cases comprehensively cover Codex CLI and Claude Code integration:
OAuth authentication test (827-862):
- Validates OAuth-only path (no API key required)
- Verifies correct model ID and max tokens
- Confirms null API key is passed
Optional API key test (864-898):
- Validates API key injection when provided
- Confirms API key propagates to provider call
Temperature omission test (901-932):
- Validates CLI providers correctly exclude unsupported temperature parameter
- Confirms other parameters (maxTokens) are still included
All tests follow established patterns with proper mocking and assertions.
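The temperature-omission behavior these tests exercise can be reproduced in a few lines without Jest. This is a plain-JS sketch of the param-building logic under test; the `buildCallParams` name is invented for illustration.

```javascript
// Sketch of the logic the temperature-omission test asserts against:
// CLI-backed providers get maxTokens but never a temperature key.
function buildCallParams(providerName, roleParams) {
	const supportsTemperature = !['codex-cli', 'claude-code'].includes(
		providerName?.toLowerCase()
	);
	return {
		modelId: roleParams.modelId,
		maxTokens: roleParams.maxTokens,
		...(supportsTemperature && typeof roleParams.temperature === 'number'
			? { temperature: roleParams.temperature }
			: {})
	};
}

const cliParams = buildCallParams('codex-cli', {
	modelId: 'gpt-5-codex',
	maxTokens: 128000,
	temperature: 0.7
});
```

A non-CLI provider passed through the same function keeps its numeric temperature, which mirrors the "other parameters are still included" assertion.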
@Crunchyman-ralph - CI / Typecheck failure is outside the purview of this PR (unrelated); I assume you've got work ongoing and will fix.
Actionable comments posted: 1
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (1)
docs/providers/codex-cli.md (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
docs/**/*
📄 CodeRabbit inference engine (.cursor/rules/new_features.mdc)
Add feature documentation to '/docs' folder, include tagged system usage examples, update command reference documentation, and provide migration notes if relevant.
Files:
docs/providers/codex-cli.md
🧠 Learnings (1)
📓 Common learnings
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to src/ai-providers/*.js : Create a new provider module file in src/ai-providers/ named <provider-name>.js when adding a new AI provider.
🪛 markdownlint-cli2 (0.18.1)
docs/providers/codex-cli.md
Lines 92, 349, 354, 359, 373, 378, 384, 399, 404, 409, 411, 443, 453, 465, 474, 483: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
Crunchyman-ralph
left a comment
Nice stuff! Let's make sure we're adding a section in our docs (apps/docs) as well?
Also, a couple of questions that I would appreciate answers to.
```diff
 const lowerProvider = providerName?.toLowerCase();
 const providerSupportsTemperature = ![
 	'codex-cli',
 	'claude-code'
 ].includes(lowerProvider);

 const callParams = {
 	apiKey,
 	modelId,
 	maxTokens: roleParams.maxTokens,
-	temperature: roleParams.temperature,
+	...(providerSupportsTemperature &&
+	typeof roleParams.temperature === 'number'
+		? { temperature: roleParams.temperature }
+		: {}),
```
Let's make sure this type of logic is handled inside the provider. Maybe add it to the baseProvider and then apply it inside the child classes? I'm trying to reduce the logic inside this module as much as I can.
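As a rough sketch of that direction (illustrative names, not the merged implementation, though the PR walkthrough does describe a `supportsTemperature` flag on the base provider): the base class owns the flag and a single stripping step, and each CLI-backed child opts out.

```javascript
// Sketch only: push temperature support into the provider hierarchy so
// ai-services-unified.js no longer hard-codes a ['codex-cli', 'claude-code'] list.
class BaseAIProvider {
	constructor() {
		// Most providers accept a temperature parameter by default.
		this.supportsTemperature = true;
	}

	// Single place where unsupported parameters are stripped before the call.
	prepareCallParams(params) {
		const { temperature, ...rest } = params;
		return this.supportsTemperature && typeof temperature === 'number'
			? { ...rest, temperature }
			: rest;
	}
}

class CodexCliProvider extends BaseAIProvider {
	constructor() {
		super();
		// Codex CLI warns when temperature is sent, so this child opts out.
		this.supportsTemperature = false;
	}
}

const roleParams = { modelId: 'gpt-5-codex', maxTokens: 128000, temperature: 0.7 };
console.log(new CodexCliProvider().prepareCallParams(roleParams));
// → { modelId: 'gpt-5-codex', maxTokens: 128000 } (temperature omitted)
```

With this shape, the unified service layer just calls `provider.prepareCallParams(...)` and never needs provider-name checks for temperature.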
src/ai-providers/codex-cli.js (outdated):

```js
let _codexCliChecked = false;
let _codexCliAvailable = null;
```
Why not add it as a field inside the class? That way we can reference it with this._codexCliChecked.
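A hypothetical rework of that suggestion (field names copied from the diff; the `probe` callback stands in for the real "is the codex binary available?" check):

```javascript
// Move the module-level flags into instance fields so they read as
// this._codexCliChecked / this._codexCliAvailable inside the class.
class CodexCliProvider {
	constructor() {
		this._codexCliChecked = false;
		this._codexCliAvailable = null;
	}

	// `probe` is a stand-in for the real CLI availability check.
	isCodexCliAvailable(probe) {
		if (!this._codexCliChecked) {
			this._codexCliAvailable = probe();
			this._codexCliChecked = true; // cache the result for this instance
		}
		return this._codexCliAvailable;
	}
}

const provider = new CodexCliProvider();
provider.isCodexCliAvailable(() => true); // runs the probe once
provider.isCodexCliAvailable(() => false); // cached; probe is not re-run
```

One trade-off worth noting: module-level variables cache the check across all instances, while instance fields re-run it for every new provider object.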
```js
getRequiredApiKeyName() {
	return 'OPENAI_API_KEY';
}
```
Are we sure we want to make it "OPENAI_API_KEY"? I would rather distinguish it like we did with grok-cli.js, with a separate env variable name for it.
It doesn't matter what is returned; it's just required downstream. If you're thinking "so we don't get into a situation like Claude Code with env variable frustration", the true cause of that issue is Claude Code's precedence, not what we do here. This can be whatever we want in reality; it's a garbage variable, but it's required downstream.
I'm more thinking of separating concerns between the API key required for the OpenAI provider and the API key required for the Codex CLI provider. Since they are separate providers, they deserve separate env variables; that's my reasoning behind this at the moment.
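The separation being argued for would roughly follow the grok-cli.js precedent. Note that `CODEX_CLI_API_KEY` below is a hypothetical name for illustration; the PR as merged kept returning `OPENAI_API_KEY`:

```javascript
// Illustrative only: distinct env variable names per provider, so the OpenAI
// API provider and the Codex CLI provider never compete for the same key.
class OpenAIProvider {
	getRequiredApiKeyName() {
		return 'OPENAI_API_KEY';
	}
}

class CodexCliProvider {
	getRequiredApiKeyName() {
		// Hypothetical dedicated variable, mirroring GROK_CLI_API_KEY in grok-cli.js.
		return 'CODEX_CLI_API_KEY';
	}

	// OAuth-first: the name satisfies the downstream resolver, but the key
	// itself is optional.
	isRequiredApiKey() {
		return false;
	}
}
```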
Integrates ai-sdk-provider-codex-cli (v0.3.0) to enable Task Master usage with OpenAI's Codex CLI via ChatGPT subscription OAuth authentication.

Key Features:
- OAuth-first authentication via `codex login` (no API key required)
- Optional OPENAI_API_KEY support for backward compatibility
- Native JSON schema support (no explicit schema mode needed)
- Support for gpt-5 and gpt-5-codex models (272K input / 128K output)
- Codebase analysis capabilities automatically enabled
- Command-specific settings override support
- Approval and sandbox mode configuration

Code Changes:
- Add CodexCliProvider class extending BaseAIProvider
- Implement OAuth-first authentication with conditional API key injection
- Add codexCli configuration section with Zod validation
- Register provider in CUSTOM_PROVIDERS and model mapping
- Add --codex-cli CLI flag support for model selection
- Update hasCodebaseAnalysis check to include codex-cli
- Add gpt-5 and gpt-5-codex to supported-models.json

Documentation:
- Add comprehensive Codex CLI provider guide (500+ lines)
- Add practical usage examples with 9 scenarios
- Update configuration.md with complete settings reference
- Update command-reference.md with CLI flag examples
- Update README.md with Codex CLI in providers list
- Document OAuth authentication flow and troubleshooting

Testing:
- Add unit tests for CodexCliProvider
- Add config-manager tests for Codex CLI settings validation
- Verify all formatting and code style checks pass

This implementation follows the ClaudeCodeProvider pattern for simplicity and OAuth-first architecture, allowing users to leverage ChatGPT subscription models without managing API keys.
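The "command-specific settings override support" listed above can be sketched as a simple merge. The key names here (approvalMode, sandboxMode, commandSpecific) are assumptions for illustration and may not match the merged config schema exactly:

```javascript
// Sketch of command-specific override resolution: global codexCli settings are
// merged with per-command overrides, with the command-level value winning.
function getCodexCliSettingsForCommand(config, commandName) {
	const { commandSpecific = {}, ...globalSettings } = config.codexCli ?? {};
	return { ...globalSettings, ...(commandSpecific[commandName] ?? {}) };
}

const config = {
	codexCli: {
		approvalMode: 'suggest',
		sandboxMode: 'read-only',
		commandSpecific: {
			'parse-prd': { approvalMode: 'full-auto' }
		}
	}
};

console.log(getCodexCliSettingsForCommand(config, 'parse-prd'));
// → { approvalMode: 'full-auto', sandboxMode: 'read-only' }
console.log(getCodexCliSettingsForCommand(config, 'models'));
// → { approvalMode: 'suggest', sandboxMode: 'read-only' }
```

Commands without an entry in commandSpecific simply fall through to the global settings.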
Adds getRequiredApiKeyName() method to prevent 'must be implemented by provider' error during API key resolution in ai-services-unified.js. Even though Codex CLI uses OAuth-first authentication and doesn't require an API key, the provider resolution flow calls getRequiredApiKeyName() during initialization. The base class implementation throws if not overridden, causing all Codex CLI operations to fail before reaching the CLI. Returns 'OPENAI_API_KEY' to satisfy the resolver while maintaining optional API key support for backward compatibility.
Ensures test mock stays in sync with production configuration where codex-cli is treated as a provider without required API keys. Without this update, the mocked config-manager would incorrectly enforce API key requirements for codex-cli in tests, diverging from real behavior where codex-cli uses OAuth-first authentication with optional API key support.
Refactors CodexCliProvider mock to use a dedicated constant for consistency with other provider mocks (e.g., mockOllamaProvider, mockAnthropicProvider). Improves test maintainability by making the mock reusable and following the established pattern in the test file.
Adds comprehensive tests for the codex-cli provider in the unified AI service layer, verifying both OAuth-first and optional API key authentication paths.

Tests added:
- OAuth authentication: Verifies codex-cli works without an API key (null)
- API key authentication: Verifies optional OPENAI_API_KEY is passed when provided via environment

These tests ensure the provider correctly integrates with ai-services-unified.js and handles both authentication scenarios as designed.
Add research role support for GPT-5 and GPT-5-Codex models due to their built-in web search capabilities.

Changes:
- Added 'research' to allowed_roles for gpt-5 and gpt-5-codex in supported-models.json
- Updated docs/providers/codex-cli.md to document research capabilities
- Models can now be used for research tasks via the --set-research flag

Both GPT-5 models support web fetch tools, making them suitable for research operations that require real-time information retrieval.
Both CLI-backed providers (codex-cli and claude-code) do not support the temperature parameter and emit warnings when it's sent.

Changes:
- Modified ai-services-unified.js to skip temperature for both providers
- Added a Claude Code test verifying temperature is omitted
- Existing Codex CLI tests already verify correct behavior

This eliminates 'temperature is not supported' warnings while preserving full functionality for all other providers.
Wrap platform.openai.com URL in angle brackets to satisfy markdownlint MD034 (no-bare-urls) check.
…tead of "unknown" in telemetry
Force-pushed from 9ac3a4e to 242a538
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
scripts/modules/commands.js (1)
3667-3680: Consider refactoring the nested ternary chains for better readability.

While the current implementation is functionally correct and consistent with existing code, the deeply nested ternary chains are difficult to read and maintain. As more providers are added, this pattern becomes increasingly unwieldy.
Consider using an object lookup pattern for cleaner code:
```diff
+// At the top of the function or in a shared location
+const PROVIDER_HINT_MAP = {
+	openrouter: 'openrouter',
+	ollama: 'ollama',
+	bedrock: 'bedrock',
+	claudeCode: 'claude-code',
+	geminiCli: 'gemini-cli',
+	codexCli: 'codex-cli'
+};
+
+function getProviderHint(options) {
+	for (const [key, value] of Object.entries(PROVIDER_HINT_MAP)) {
+		if (options[key]) return value;
+	}
+	return undefined;
+}
```

Then use it:
```diff
-providerHint: options.openrouter
-	? 'openrouter'
-	: options.ollama
-		? 'ollama'
-		: options.bedrock
-			? 'bedrock'
-			: options.claudeCode
-				? 'claude-code'
-				: options.geminiCli
-					? 'gemini-cli'
-					: options.codexCli
-						? 'codex-cli'
-						: undefined
+providerHint: getProviderHint(options)
```

This would make the code more maintainable and reduce duplication across the three model role assignments.
Also applies to: 3695-3707, 3725-3737
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
⛔ Files ignored due to path filters (1)
package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (22)
- .changeset/brave-lions-sing.md (1 hunks)
- README.md (1 hunks)
- docs/command-reference.md (1 hunks)
- docs/configuration.md (1 hunks)
- docs/examples/codex-cli-usage.md (1 hunks)
- docs/providers/codex-cli.md (1 hunks)
- package.json (1 hunks)
- scripts/modules/ai-services-unified.js (2 hunks)
- scripts/modules/commands.js (6 hunks)
- scripts/modules/config-manager.js (11 hunks)
- scripts/modules/supported-models.json (1 hunks)
- scripts/modules/task-manager/models.js (2 hunks)
- src/ai-providers/base-provider.js (5 hunks)
- src/ai-providers/claude-code.js (1 hunks)
- src/ai-providers/codex-cli.js (1 hunks)
- src/ai-providers/gemini-cli.js (1 hunks)
- src/ai-providers/grok-cli.js (1 hunks)
- src/ai-providers/index.js (1 hunks)
- src/constants/providers.js (1 hunks)
- tests/unit/ai-providers/codex-cli.test.js (1 hunks)
- tests/unit/ai-services-unified.test.js (4 hunks)
- tests/unit/config-manager.test.js (4 hunks)
🧰 Additional context used
📓 Path-based instructions (26)
src/ai-providers/*.js
📄 CodeRabbit inference engine (.cursor/rules/ai_providers.mdc)
src/ai-providers/*.js: Create a new provider module file in src/ai-providers/ named `<provider-name>.js` when adding a new AI provider.
Provider modules must export three functions: generateText, streamText, and generateObject.
Provider modules must import the provider's create<ProviderName> function from `@ai-sdk/<provider-name>`, and import generateText, streamText, generateObject from the core ai package, as well as the log utility from ../../scripts/modules/utils.js.
Implement generateText, streamText, and generateObject functions in provider modules with basic validation and try/catch error handling. Provider-specific wrappers for Vercel AI SDK functions must be implemented in src/ai-providers/*.js, each file corresponding to a provider.
Files:
src/ai-providers/codex-cli.js, src/ai-providers/index.js, src/ai-providers/grok-cli.js, src/ai-providers/claude-code.js, src/ai-providers/gemini-cli.js, src/ai-providers/base-provider.js
**/*.js
📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)
**/*.js: Declare and initialize global variables at the top of modules to avoid hoisting issues.
Use proper function declarations to avoid hoisting issues and initialize variables before they are referenced.
Do not reference variables before their declaration in module scope.
Use dynamic imports (import()) to avoid initialization order issues in modules.
Files:
src/ai-providers/codex-cli.js, tests/unit/ai-services-unified.test.js, src/ai-providers/index.js, scripts/modules/config-manager.js, src/ai-providers/grok-cli.js, src/constants/providers.js, src/ai-providers/claude-code.js, tests/unit/config-manager.test.js, src/ai-providers/gemini-cli.js, scripts/modules/commands.js, scripts/modules/ai-services-unified.js, scripts/modules/task-manager/models.js, src/ai-providers/base-provider.js, tests/unit/ai-providers/codex-cli.test.js
tests/{unit,integration,e2e,fixtures}/**/*.js
📄 CodeRabbit inference engine (.cursor/rules/architecture.mdc)
Test files must be organized as follows: unit tests in tests/unit/, integration tests in tests/integration/, end-to-end tests in tests/e2e/, and test fixtures in tests/fixtures/.
Files:
tests/unit/ai-services-unified.test.js, tests/unit/config-manager.test.js, tests/unit/ai-providers/codex-cli.test.js
tests/unit/*.js
📄 CodeRabbit inference engine (.cursor/rules/architecture.mdc)
Each module should have a corresponding unit test file in tests/unit/ that reflects the module structure (one test file per module).
Files:
tests/unit/ai-services-unified.test.js, tests/unit/config-manager.test.js
**/*.{test,spec}.{js,ts,jsx,tsx}
📄 CodeRabbit inference engine (.cursor/rules/git_workflow.mdc)
**/*.{test,spec}.{js,ts,jsx,tsx}: Create a test file and ensure all tests pass when all subtasks are complete; commit tests if added or modified
When all subtasks are complete, run final testing using the appropriate test runner (e.g., npm test, jest, or manual testing)
Files:
tests/unit/ai-services-unified.test.js, tests/unit/config-manager.test.js, tests/unit/ai-providers/codex-cli.test.js
**/*.test.js
📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)
**/*.test.js: Never use asynchronous operations in tests. Make all mocks return synchronous values when possible.
Always mock tests properly based on the way the tested functions are defined and used.
Follow the test file organization: mocks must be set up before importing modules under test, and spies on mocked modules should be set up after imports.
Use fixtures from tests/fixtures/ for consistent sample data across tests.
Always declare mocks before importing the modules being tested in Jest test files.
Use jest.spyOn() after imports to create spies on mock functions and reference these spies in test assertions.
When testing functions with callbacks, get the callback from your mock's call arguments, execute it directly with test inputs, and verify the results.
For ES modules, use jest.mock() before static imports and jest.unstable_mockModule() before dynamic imports to mock dependencies.
Reset mock functions (mockFn.mockReset()) before dynamic imports if they might have been called previously.
When verifying console assertions, assert against the actual arguments passed (single formatted string), not multiple arguments.
Use mock-fs to mock file system operations in tests, and restore the file system after each test.
Mock API calls (e.g., Anthropic/Claude) by mocking the entire module and providing predictable responses.
Set mock environment variables in test setup and restore them after each test.
Maintain test fixtures separate from test logic.
Follow the mock-first-then-import pattern for all Jest mocks.
Do not define mock variables before jest.mock() calls (they won't be accessible due to hoisting).
Use test-specific file paths (e.g., 'test-tasks.json') for all file operations in tests.
Mock readJSON and writeJSON to avoid real file system interactions in tests.
Verify file operations use the correct paths in expect statements.
Use different file paths for each test to avoid test interdependence.
Verify modifications on the in-memory task objects passed to w...
Files:
tests/unit/ai-services-unified.test.js, tests/unit/config-manager.test.js, tests/unit/ai-providers/codex-cli.test.js
tests/unit/**/*.test.js
📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)
tests/unit/**/*.test.js: Unit tests must be located in tests/unit/, test individual functions and utilities in isolation, mock all external dependencies, and keep tests small, focused, and fast.
Do not include actual command execution in unit tests.
Files:
tests/unit/ai-services-unified.test.js, tests/unit/config-manager.test.js, tests/unit/ai-providers/codex-cli.test.js
tests/{unit,integration,e2e}/**/*.test.js
📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)
tests/{unit,integration,e2e}/**/*.test.js: When testing CLI commands built with Commander.js, test the command action handlers directly rather than trying to mock the entire Commander.js chain.
When mocking the Commander.js chain, mock ALL chainable methods (option, argument, action, on, etc.) and return this (or the mock object) from all chainable method mocks.
Explicitly handle all options, including defaults and shorthand flags (e.g., -p for --prompt), and include null/undefined checks in test implementations for parameters that might be optional.
Do not try to use the real action implementation without proper mocking, and do not mock Commander partially—either mock it completely or test the action directly.
Mock the action handlers for CLI commands and verify they're called with correct arguments.
Use sample task fixtures for consistent test data, mock file system operations, and test both success and error paths for task operations.
Mock console output and verify correct formatting in UI function tests. Use flexible assertions like toContain() or toMatch() for formatted output.
Mock chalk functions to return the input text to make testing easier while still verifying correct function calls.
Files:
tests/unit/ai-services-unified.test.js, tests/unit/config-manager.test.js, tests/unit/ai-providers/codex-cli.test.js
**/*.{test,spec}.*
📄 CodeRabbit inference engine (.cursor/rules/test_workflow.mdc)
Test files should follow naming conventions: .test., .spec., or _test. depending on the language
Files:
tests/unit/ai-services-unified.test.js, tests/unit/config-manager.test.js, tests/unit/ai-providers/codex-cli.test.js
tests/{unit,integration,e2e}/**
📄 CodeRabbit inference engine (.cursor/rules/test_workflow.mdc)
Organize test directories by test type (unit, integration, e2e) and mirror source structure where possible
Files:
tests/unit/ai-services-unified.test.js, tests/unit/config-manager.test.js, tests/unit/ai-providers/codex-cli.test.js
package.json
📄 CodeRabbit inference engine (.cursor/rules/test_workflow.mdc)
Add and update test scripts in package.json to include test, test:watch, test:coverage, test:unit, test:integration, test:e2e, and test:ci
Files:
package.json
scripts/modules/config-manager.js
📄 CodeRabbit inference engine (.cursor/rules/ai_providers.mdc)
scripts/modules/config-manager.js: Update scripts/modules/config-manager.js to add the new provider to MODEL_MAP, ensure it is included in VALID_PROVIDERS, and update API key handling logic.
If adding Ollama or another provider not requiring an API key, add a specific check at the beginning of isApiKeySet and getMcpApiKeyStatus in scripts/modules/config-manager.js to return true immediately for that provider.
scripts/modules/config-manager.js: Import and use specific getters from scripts/modules/config-manager.js to access configuration values needed for application logic; pass the explicitRoot parameter to getters if calling from MCP direct functions.
Use isApiKeySet(providerName, session) from config-manager.js to check if a provider's key is available before attempting an AI call.
Handle potential ConfigurationError if the .taskmasterconfig file is missing or invalid when accessed via getConfig.
Files:
scripts/modules/config-manager.js
scripts/modules/*.js
📄 CodeRabbit inference engine (.cursor/rules/architecture.mdc)
Each module in scripts/modules/ should be focused on a single responsibility, following the modular architecture (e.g., commands.js for CLI command handling, task-manager.js for task data and core logic, dependency-manager.js for dependency management, ui.js for CLI output formatting, ai-services-unified.js for AI service integration, config-manager.js for configuration management, utils.js for utility functions).
scripts/modules/*.js: Export all core functions, helper functions, and utility methods needed by your new function or command from their respective modules. Explicitly review the module's export block to ensure every required dependency is included.
Pass all required parameters to functions you call within your implementation and verify that direct function parameters match their core function counterparts.
Use consistent file naming conventions: 'task_${id.toString().padStart(3, '0')}.txt', use path.join for composing file paths, and use appropriate file extensions (.txt for tasks, .json for data).
Use structured error objects with code and message properties, include clear error messages, and handle both function-specific and file system errors.
Import all silent mode utilities together from 'scripts/modules/utils.js' and always use isSilentMode() to check global silent mode status. Wrap core function calls within direct functions using enableSilentMode() and disableSilentMode() in a try/finally block if the core function might produce console output.
Core functions should check outputFormat === 'text' before displaying UI elements and use internal logging that respects silent mode.
Design functions to accept dependencies as parameters (dependency injection) and avoid hard-coded dependencies that are difficult to mock.
Keep pure logic separate from I/O operations or UI rendering to allow testing the logic without mocking complex dependencies.
When implementing core logic for new features, do so in 'scripts/modules/' before CLI or MCP interfaces, and d...
Files:
scripts/modules/config-manager.js, scripts/modules/commands.js, scripts/modules/ai-services-unified.js
scripts/modules/**
📄 CodeRabbit inference engine (.cursor/rules/dev_workflow.mdc)
When using the MCP server, restart it if core logic in scripts/modules or MCP tool/direct function definitions change.
Files:
scripts/modules/config-manager.js, scripts/modules/supported-models.json, scripts/modules/commands.js, scripts/modules/ai-services-unified.js, scripts/modules/task-manager/models.js
scripts/modules/*
📄 CodeRabbit inference engine (.cursor/rules/tags.mdc)
scripts/modules/*: Every command that reads or writes tasks.json must be tag-aware
All command files must import getCurrentTag from utils.js
Every CLI command that operates on tasks must include the --tag CLI option
All commands must resolve the tag using the pattern: options.tag || getCurrentTag(projectRoot) || 'master'
All commands must find projectRoot with error handling before proceeding
All commands must pass { projectRoot, tag } as context to core functions
MCP direct functions must accept and use a context object containing projectRoot and tag, and pass them to core functions
Do not hard-code tag resolution (e.g., const tag = options.tag || 'master';); always use getCurrentTag
Do not omit the --tag CLI option in commands that operate on tasks
Do not omit the context parameter when calling core functions from commands
Do not call readJSON or writeJSON without passing projectRoot and tag
Files:
scripts/modules/config-manager.js, scripts/modules/supported-models.json, scripts/modules/commands.js, scripts/modules/ai-services-unified.js
docs/**/*
📄 CodeRabbit inference engine (.cursor/rules/new_features.mdc)
Add feature documentation to '/docs' folder, include tagged system usage examples, update command reference documentation, and provide migration notes if relevant.
Files:
docs/examples/codex-cli-usage.md, docs/providers/codex-cli.md, docs/configuration.md, docs/command-reference.md
scripts/modules/supported-models.json
📄 CodeRabbit inference engine (.cursor/rules/ai_providers.mdc)
Add a new key for the provider and an array of model objects under it in scripts/modules/supported-models.json, including id, name, allowed_roles, and optionally swe_score, cost_per_1m_tokens, and max_tokens.
Files:
scripts/modules/supported-models.json
scripts/modules/commands.js
📄 CodeRabbit inference engine (.cursor/rules/ai_services.mdc)
scripts/modules/commands.js: Centralize all LLM calls through generateTextService or generateObjectService.
Do not import or call anything from the old ai-services.js, ai-client-factory.js, or ai-client-utils.js files.
Do not fetch AI-specific parameters (model ID, max tokens, temp) using config-manager.js getters for the AI call. Pass the role instead.
Do not implement fallback or retry logic outside ai-services-unified.js.
Do not handle API key resolution outside the service layer (it uses utils.js internally).
Determine the appropriate role (main, research, fallback) in your core logic and pass it to the service.
Pass the session object (received in the context parameter, especially from direct function wrappers) to the service call when in MCP context.
Use generateTextService and implement robust manual JSON parsing (with Zod validation after parsing) when structured output is needed, as generateObjectService has shown unreliability with some providers/schemas.
Be aware of potential reliability issues with generateObjectService across different providers and complex schemas. Prefer generateTextService + manual parsing as a more robust alternative for structured data needs.
scripts/modules/commands.js: All new user-facing commands should be added to 'scripts/modules/commands.js'.
Use consistent patterns for option naming and help text in CLI commands.
Follow the Commander.js model for subcommand structure in CLI commands.
When using callbacks (like in Commander.js commands), define them separately to allow testing the callback logic independently.
Add help text to the command definition and update 'dev_workflow.mdc' with command reference when adding a new feature.
Follow the established pattern in 'commands.js' for CLI command implementation, using Commander.js for argument parsing, including comprehensive help text and examples, and supporting tagged task context awareness.
Provide clear error messages for common failu...
Files:
scripts/modules/commands.js
scripts/modules/ai-services-unified.js
📄 CodeRabbit inference engine (.cursor/rules/ai_providers.mdc)
Integrate the new provider module with scripts/modules/ai-services-unified.js by importing it and adding an entry to the PROVIDER_FUNCTIONS map.
scripts/modules/ai-services-unified.js: Centralize all LLM calls through generateTextService or generateObjectService.
Do not import or call anything from the old ai-services.js, ai-client-factory.js, or ai-client-utils.js files.
Do not fetch AI-specific parameters (model ID, max tokens, temp) using config-manager.js getters for the AI call. Pass the role instead.
Do not implement fallback or retry logic outside ai-services-unified.js.
Do not handle API key resolution outside the service layer (it uses utils.js internally). The telemetryData object returned by ai-services-unified.js must include the fields: timestamp, userId, commandName, modelUsed, providerName, inputTokens, outputTokens, totalTokens, totalCost, and currency.
Files:
scripts/modules/ai-services-unified.js
scripts/modules/ai-services*.js
📄 CodeRabbit inference engine (.cursor/rules/new_features.mdc)
Ensure AI calls correctly handle and propagate telemetryData as described in 'telemetry.mdc'.
Files:
scripts/modules/ai-services-unified.js
.changeset/*.md
📄 CodeRabbit inference engine (.cursor/rules/changeset.mdc)
.changeset/*.md: When running npm run changeset or npx changeset add, provide a concise summary of the changes for the CHANGELOG.md in imperative mood, typically a single line, and not a detailed Git commit message.
The changeset summary should be user-facing, describing what changed in the released version that is relevant to users or consumers of the package.
Do not use your detailed Git commit message body as the changeset summary.
Files:
.changeset/brave-lions-sing.md
.changeset/*
📄 CodeRabbit inference engine (.cursor/rules/new_features.mdc)
Create appropriate changesets for new features, use semantic versioning, include tagged system information in release notes, and document breaking changes if any.
Files:
.changeset/brave-lions-sing.md
scripts/modules/task-manager/*.js
📄 CodeRabbit inference engine (.cursor/rules/ai_services.mdc)
scripts/modules/task-manager/*.js: Centralize all LLM calls through generateTextService or generateObjectService.
Do not import or call anything from the old ai-services.js, ai-client-factory.js, or ai-client-utils.js files.
Do not initialize AI clients (Anthropic, Perplexity, etc.) directly within core logic (task-manager/) or MCP direct functions.
Do not fetch AI-specific parameters (model ID, max tokens, temp) using config-manager.js getters for the AI call. Pass the role instead.
Do not implement fallback or retry logic outside ai-services-unified.js.
Do not handle API key resolution outside the service layer (it uses utils.js internally).
Determine the appropriate role (main, research, fallback) in your core logic and pass it to the service.
Pass the session object (received in the context parameter, especially from direct function wrappers) to the service call when in MCP context.
Use generateTextService and implement robust manual JSON parsing (with Zod validation after parsing) when structured output is needed, as generateObjectService has shown unreliability with some providers/schemas.
Be aware of potential reliability issues with generateObjectService across different providers and complex schemas. Prefer generateTextService + manual parsing as a more robust alternative for structured data needs. Files in scripts/modules/task-manager/ should each handle a specific action related to task management (e.g., add-task.js, expand-task.js), supporting the tagged task lists system and backward compatibility.
Files:
scripts/modules/task-manager/models.js
scripts/modules/task-manager/*
📄 CodeRabbit inference engine (.cursor/rules/tags.mdc)
scripts/modules/task-manager/*: All core functions in scripts/modules/task-manager/ must accept a context parameter and use it to extract projectRoot and tag
All core functions in scripts/modules/task-manager/ must use readJSON(tasksPath, projectRoot, tag) and writeJSON(tasksPath, data, projectRoot, tag)
Files:
scripts/modules/task-manager/models.js
scripts/modules/task-manager/**/*.js
📄 CodeRabbit inference engine (.cursor/rules/telemetry.mdc)
scripts/modules/task-manager/**/*.js: Functions in scripts/modules/task-manager/ that invoke AI services must call the appropriate AI service function (e.g., generateObjectService), passing commandName and outputType in the params object.
Core logic functions in scripts/modules/task-manager/ must return an object that includes aiServiceResponse.telemetryData.
If the core logic function handles CLI output (outputFormat === 'text' or 'cli'), and aiServiceResponse.telemetryData is available, it must call displayAiUsageSummary(aiServiceResponse.telemetryData, 'cli') from scripts/modules/ui.js. Do not call AI-specific getters (like getMainModelId, getMainMaxTokens) from core logic functions in scripts/modules/task-manager/*; instead, pass the role to the unified AI service.
Files:
scripts/modules/task-manager/models.js
tests/unit/ai-providers/*.test.js
📄 CodeRabbit inference engine (.cursor/rules/ai_providers.mdc)
Create unit tests for the new provider in tests/unit/ai-providers/<provider-name>.test.js, mocking `@ai-sdk/<provider-name>` and core ai module functions, and testing all exported functions for correct behavior and error handling.
Files:
tests/unit/ai-providers/codex-cli.test.js
🧠 Learnings (18)
📓 Common learnings
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to src/ai-providers/*.js : Create a new provider module file in src/ai-providers/ named <provider-name>.js when adding a new AI provider.
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to tests/unit/ai-providers/*.test.js : Create unit tests for the new provider in tests/unit/ai-providers/<provider-name>.test.js, mocking ai-sdk/<provider-name> and core ai module functions, and testing all exported functions for correct behavior and error handling.
📚 Learning: 2025-09-19T16:06:42.182Z
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1200
File: src/ai-providers/custom-sdk/grok-cli/language-model.js:96-100
Timestamp: 2025-09-19T16:06:42.182Z
Learning: The user Crunchyman-ralph prefers to keep environment variable names explicit (like GROK_CLI_API_KEY) rather than supporting multiple aliases, to avoid overlap and ensure clear separation between different CLI implementations.
Applied to files:
src/ai-providers/codex-cli.js
📚 Learning: 2025-09-29T13:33:46.952Z
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1246
File: src/ai-providers/claude-code.js:40-42
Timestamp: 2025-09-29T13:33:46.952Z
Learning: Claude Code provider should use environment variable isolation to control API key access, temporarily managing ANTHROPIC_API_KEY during client creation to prevent the ai-sdk-provider-claude-code package from automatically picking up API keys intended for other providers, while allowing explicit CLAUDE_CODE_API_KEY usage as a fallback to OAuth authentication.
Applied to files:
src/ai-providers/codex-cli.js
📚 Learning: 2025-09-29T13:33:46.952Z
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1246
File: src/ai-providers/claude-code.js:40-42
Timestamp: 2025-09-29T13:33:46.952Z
Learning: Claude Code provider should use environment variable isolation to temporarily manage ANTHROPIC_API_KEY during client creation: if CLAUDE_CODE_API_KEY is set, temporarily set ANTHROPIC_API_KEY to that value; if CLAUDE_CODE_API_KEY is not set but ANTHROPIC_API_KEY exists, temporarily unset ANTHROPIC_API_KEY to force OAuth mode. This prevents the ai-sdk-provider-claude-code package from accidentally using API keys intended for the regular Anthropic provider while still allowing explicit API key usage as a fallback.
Applied to files:
src/ai-providers/codex-cli.js
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to src/ai-providers/*.js : Provider modules must import the provider's create<ProviderName> function from ai-sdk/<provider-name>, and import generateText, streamText, generateObject from the core ai package, as well as the log utility from ../../scripts/modules/utils.js.
Applied to files:
src/ai-providers/codex-cli.js
src/ai-providers/index.js
scripts/modules/ai-services-unified.js
tests/unit/ai-providers/codex-cli.test.js
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to tests/unit/ai-providers/*.test.js : Create unit tests for the new provider in tests/unit/ai-providers/<provider-name>.test.js, mocking ai-sdk/<provider-name> and core ai module functions, and testing all exported functions for correct behavior and error handling.
Applied to files:
tests/unit/ai-services-unified.test.js
tests/unit/ai-providers/codex-cli.test.js
📚 Learning: 2025-07-18T17:16:13.793Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tests.mdc:0-0
Timestamp: 2025-07-18T17:16:13.793Z
Learning: Applies to **/*.test.js : Mock API calls (e.g., Anthropic/Claude) by mocking the entire module and providing predictable responses.
Applied to files:
tests/unit/ai-services-unified.test.js
tests/unit/ai-providers/codex-cli.test.js
📚 Learning: 2025-07-18T17:16:13.793Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tests.mdc:0-0
Timestamp: 2025-07-18T17:16:13.793Z
Learning: Applies to **/*.test.js : Do not import real AI service clients in tests; create fully mocked versions that return predictable responses.
Applied to files:
tests/unit/ai-services-unified.test.js
tests/unit/ai-providers/codex-cli.test.js
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to src/ai-providers/*.js : Create a new provider module file in src/ai-providers/ named <provider-name>.js when adding a new AI provider.
Applied to files:
src/ai-providers/index.js
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to scripts/modules/supported-models.json : Add a new key for the provider and an array of model objects under it in scripts/modules/supported-models.json, including id, name, allowed_roles, and optionally swe_score, cost_per_1m_tokens, and max_tokens.
Applied to files:
scripts/modules/supported-models.json
📚 Learning: 2025-07-21T14:14:48.694Z
Learnt from: rtmcrc
PR: eyaltoledano/claude-task-master#933
File: scripts/modules/supported-models.json:238-238
Timestamp: 2025-07-21T14:14:48.694Z
Learning: Model version updates in scripts/modules/supported-models.json may be included in feature PRs if they provide practical improvements like reduced error rates, even if not directly related to the main feature being implemented.
Applied to files:
scripts/modules/supported-models.json
📚 Learning: 2025-07-18T17:12:57.903Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/new_features.mdc:0-0
Timestamp: 2025-07-18T17:12:57.903Z
Learning: Applies to scripts/modules/commands.js : Use consistent patterns for option naming and help text in CLI commands.
Applied to files:
scripts/modules/commands.js
📚 Learning: 2025-07-31T22:08:16.039Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/taskmaster.mdc:0-0
Timestamp: 2025-07-31T22:08:16.039Z
Learning: Applies to .taskmaster/config.json : All other Taskmaster settings (model choice, max tokens, temperature, log level, custom endpoints) are managed in .taskmaster/config.json via the task-master models command or models MCP tool.
Applied to files:
scripts/modules/commands.js
docs/command-reference.md
📚 Learning: 2025-07-18T17:10:12.881Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/dev_workflow.mdc:0-0
Timestamp: 2025-07-18T17:10:12.881Z
Learning: For CLI usage, install Taskmaster globally with `npm install -g task-master-ai` or use locally via `npx task-master-ai ...`.
Applied to files:
scripts/modules/commands.js
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to scripts/modules/ai-services-unified.js : Integrate the new provider module with scripts/modules/ai-services-unified.js by importing it and adding an entry to the PROVIDER_FUNCTIONS map.
Applied to files:
scripts/modules/ai-services-unified.js
📚 Learning: 2025-07-18T17:06:57.833Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_services.mdc:0-0
Timestamp: 2025-07-18T17:06:57.833Z
Learning: Ensure `.taskmasterconfig` exists and has valid provider/model IDs for the roles you intend to use (manage via `task-master models --setup`).
Applied to files:
docs/command-reference.md
📚 Learning: 2025-07-18T17:16:13.793Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tests.mdc:0-0
Timestamp: 2025-07-18T17:16:13.793Z
Learning: Applies to tests/{unit,integration,e2e}/**/*.test.js : Mock the action handlers for CLI commands and verify they're called with correct arguments.
Applied to files:
tests/unit/ai-providers/codex-cli.test.js
📚 Learning: 2025-09-29T13:33:46.952Z
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1246
File: src/ai-providers/claude-code.js:40-42
Timestamp: 2025-09-29T13:33:46.952Z
Learning: Claude Code provider should use CLAUDE_CODE_API_KEY environment variable instead of automatically picking up ANTHROPIC_API_KEY to avoid conflicts, since the ai-sdk-provider-claude-code package follows standard Anthropic patterns and prioritizes API keys over OAuth authentication, which can cause unexpected API charges.
Applied to files:
README.md
🧬 Code graph analysis (7)
src/ai-providers/codex-cli.js (2)
- src/ai-providers/base-provider.js (1): `BaseAIProvider` (16-377)
- scripts/modules/config-manager.js (3): `settings` (445-445), `settings` (455-455), `settings` (471-471)

tests/unit/ai-services-unified.test.js (2)
- tests/unit/config-manager.test.mjs (1): `mockResolveEnvVariable` (44-44)
- scripts/modules/ai-services-unified.js (1): `result` (430-430)

scripts/modules/config-manager.js (2)
- src/constants/commands.js (2): `AI_COMMAND_NAMES` (7-17), `AI_COMMAND_NAMES` (7-17)
- src/constants/providers.js (2): `CUSTOM_PROVIDERS` (18-29), `CUSTOM_PROVIDERS` (18-29)

tests/unit/config-manager.test.js (1)
- tests/unit/config-manager.test.mjs (1): `DEFAULT_CONFIG` (61-90)

scripts/modules/ai-services-unified.js (1)
- src/ai-providers/codex-cli.js (1): `CodexCliProvider` (15-106)

scripts/modules/task-manager/models.js (2)
- mcp-server/src/core/direct-functions/models.js (1): `providerHint` (41-41)
- src/constants/providers.js (2): `CUSTOM_PROVIDERS` (18-29), `CUSTOM_PROVIDERS` (18-29)

tests/unit/ai-providers/codex-cli.test.js (1)
- src/ai-providers/codex-cli.js (1): `CodexCliProvider` (15-106)
🪛 markdownlint-cli2 (0.18.1)
docs/providers/codex-cli.md
92-92: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
350-350: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
355-355: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
360-360: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
374-374: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
379-379: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
385-385: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
400-400: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
405-405: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
410-410: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
412-412: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
444-444: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
454-454: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
469-469: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
478-478: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
487-487: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
.changeset/brave-lions-sing.md
5-5: First line in a file should be a top-level heading
(MD041, first-line-heading, first-line-h1)
docs/configuration.md
443-443: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
448-448: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
450-450: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
455-455: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
458-458: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
462-462: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
518-518: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
540-540: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
🔇 Additional comments (30)
scripts/modules/config-manager.js (9)
61-61: LGTM: Consistent with other CLI provider defaults. The empty `codexCli` object follows the same pattern as `claudeCode`, which is appropriate since Codex CLI uses OAuth-first authentication and all settings are optional.
142-142: LGTM: Correct configuration merging. The `codexCli` configuration merging follows the established pattern used for other CLI providers like `claudeCode`.
189-191: LGTM: Proper validation flow. The validation check for `codexCli` settings correctly mirrors the pattern used for `claudeCode`, ensuring consistency across CLI providers.
374-423: LGTM: Comprehensive validation schema. The validation schema for Codex CLI settings is well-structured and includes all relevant options from the `ai-sdk-provider-codex-cli` package:

- Security modes (approvalMode, sandboxMode)
- Execution options (fullAuto, dangerouslyBypassApprovalsAndSandbox)
- Configuration (codexPath, cwd, env)
- Command-specific overrides

The implementation correctly mirrors the `validateClaudeCodeSettings` pattern with proper error handling.
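The kind of checks that schema performs can be approximated with a plain-JavaScript sketch. Note that the function name, error messages, and the specific enum values below are illustrative assumptions, not the actual config-manager implementation:

```javascript
// Hypothetical sketch of settings validation; the real config-manager
// uses its own schema. Enum values here are assumptions for illustration.
const APPROVAL_MODES = ['suggest', 'auto-edit', 'full-auto'];
const SANDBOX_MODES = ['read-only', 'workspace-write', 'danger-full-access'];

function validateCodexCliSettingsSketch(settings = {}) {
	const errors = [];
	if (
		settings.approvalMode !== undefined &&
		!APPROVAL_MODES.includes(settings.approvalMode)
	) {
		errors.push(`Invalid approvalMode: ${settings.approvalMode}`);
	}
	if (
		settings.sandboxMode !== undefined &&
		!SANDBOX_MODES.includes(settings.sandboxMode)
	) {
		errors.push(`Invalid sandboxMode: ${settings.sandboxMode}`);
	}
	if (settings.env !== undefined && typeof settings.env !== 'object') {
		errors.push('env must be an object of string key/value pairs');
	}
	return { valid: errors.length === 0, errors };
}
```

Because every field is optional, an empty settings object validates cleanly, which matches the OAuth-first default of an empty `codexCli` block.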
435-448: LGTM: Consistent getter implementations. The `getCodexCliSettings` and `getCodexCliSettingsForCommand` functions follow the established pattern from Claude Code, ensuring consistent behavior across CLI providers.
567-569: LGTM: Correct codebase analysis capability. Adding `CODEX_CLI` to the codebase analysis check is appropriate, as Codex CLI supports codebase analysis features as documented in the PR objectives.
919-920: LGTM: Correct MCP API key status. Returning `true` for `codex-cli` in `getMcpApiKeyStatus` is correct since Codex CLI uses OAuth authentication and doesn't require an API key in the MCP configuration.
1108-1115: LGTM: Correct provider classification. Adding `CODEX_CLI` to the `providersWithoutApiKeys` array is appropriate since it uses OAuth-first authentication.
1126-1128: LGTM: Complete export list. The new Codex CLI configuration functions are properly exported, maintaining consistency with other CLI provider exports.

Also applies to: 1136-1136
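The getter behavior reviewed in this section (a global `codexCli` block plus per-command overrides) can be sketched as follows. `getCodexCliSettingsForCommandSketch` is a simplified stand-in, not the actual config-manager code, and the setting values are hypothetical:

```javascript
// Simplified stand-in for the command-aware settings getter described above.
// Command-specific settings win over the global codexCli block.
function getCodexCliSettingsForCommandSketch(config, commandName) {
	const { commandSpecific = {}, ...globalSettings } = config.codexCli ?? {};
	return { ...globalSettings, ...(commandSpecific[commandName] ?? {}) };
}

// Hypothetical config shape for illustration only.
const config = {
	codexCli: {
		approvalMode: 'full-auto',
		commandSpecific: {
			'parse-prd': { approvalMode: 'suggest' }
		}
	}
};
```

With this shape, `parse-prd` would see the overridden `approvalMode` while every other command falls back to the global value.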
src/ai-providers/claude-code.js (1)
37-38: LGTM: Consistent capability flag. Setting `supportsTemperature` to `false` aligns Claude Code with other CLI providers (Gemini CLI, Grok CLI, Codex CLI) that don't support the temperature parameter. This ensures the base provider doesn't include temperature in requests to Claude Code.

README.md (1)
91-93: LGTM: Clear documentation updates. The README correctly documents that:
- Codex CLI uses OAuth via ChatGPT subscription
- No API key is required when using Claude Code or Codex CLI with OAuth
This accurately reflects the OAuth-first authentication model described in the PR objectives.
.changeset/brave-lions-sing.md (2)
5-5: Ignore markdown lint warning. The markdown lint warning about the first line not being a top-level heading is a false positive. Changeset files use YAML frontmatter at the top (lines 1-4), making line 5 the appropriate location for the first heading.
9-9: Ignore this suggestion: the Codex CLI provider intentionally uses OPENAI_CODEX_API_KEY to isolate its key (then maps it to OPENAI_API_KEY at runtime), so no changes are needed.

Likely an incorrect or invalid review comment.
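The key-isolation behavior described in this comment (OPENAI_CODEX_API_KEY mapped onto OPENAI_API_KEY only while the client is created, then restored) might look roughly like the sketch below. The helper name and structure are assumptions for illustration, not the provider's actual code:

```javascript
// Sketch: temporarily map OPENAI_CODEX_API_KEY onto OPENAI_API_KEY so the
// underlying SDK sees the Codex-specific key, then restore the previous
// environment afterwards (including the "unset" state).
function withCodexKeyIsolation(createClient) {
	const previous = process.env.OPENAI_API_KEY;
	const codexKey = process.env.OPENAI_CODEX_API_KEY;
	if (codexKey) {
		process.env.OPENAI_API_KEY = codexKey;
	}
	try {
		return createClient();
	} finally {
		if (previous === undefined) delete process.env.OPENAI_API_KEY;
		else process.env.OPENAI_API_KEY = previous;
	}
}
```

This mirrors the isolation pattern the Claude Code provider uses for ANTHROPIC_API_KEY: the mapped key never outlives client creation, so it cannot leak into other providers.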
tests/unit/config-manager.test.js (2)
152-152: LGTM: Test data updated correctly. The `DEFAULT_CONFIG` test fixture correctly includes the `codexCli` section, matching the implementation defaults.
646-647: LGTM: Consistent test expectations. The test expectations correctly include `DEFAULT_CONFIG.codexCli` in all merged config scenarios, ensuring the Codex CLI configuration is properly validated alongside other CLI provider settings.

Also applies to: 690-691, 800-801
src/ai-providers/grok-cli.js (1)
16-17: LGTM: Consistent capability flag. Setting `supportsTemperature` to `false` is appropriate for Grok CLI and maintains consistency with other CLI providers that don't support temperature configuration.

src/constants/providers.js (1)
27-28: LGTM: Correct provider constant addition. The `CODEX_CLI` entry follows the established naming convention and pattern. The trailing comma on the previous line is good practice for cleaner diffs in future changes.

package.json (1)
74-74: Verified ai-sdk-provider-codex-cli@^0.3.0 is published

docs/command-reference.md (1)
386-390: LGTM! Clear and consistent Codex CLI examples. The new example commands correctly demonstrate setting Codex CLI models for both main and fallback roles using the `--codex-cli` flag. The format is consistent with existing provider examples and aligns with the provider integration.
20-20: LGTM! Proper provider export. The export of `CodexCliProvider` follows the established pattern and correctly extends the module's public API.
44-44: LGTM! Proper import of CodexCliProvider. The import is correctly placed and follows the established pattern for provider imports.

74-74: LGTM! Proper provider registration. The Codex CLI provider is correctly registered in the PROVIDERS map, consistent with other CLI-backed providers. The implementation appropriately keeps provider-specific logic inside the provider class.
docs/configuration.md (1)
433-580: LGTM! Comprehensive Codex CLI configuration documentation. The documentation is thorough and well-organized, covering:
- Prerequisites and installation steps
- OAuth authentication flow (primary method) and optional API key usage
- Available models with accurate token limits (272K input / 128K output as verified)
- Detailed settings reference with all configuration options
- Command-specific overrides with examples
- Codebase analysis features
- Comprehensive troubleshooting section
The content is accurate, practical, and provides excellent guidance for users.
Note: Static analysis detects MD031 violations (missing blank lines around fenced code blocks) at lines 443, 448, 450, 455, 458, 462, 518, 540. These are minor formatting issues that don't affect the documentation's usability.
docs/examples/codex-cli-usage.md (1)
1-463: LGTM! Excellent comprehensive usage examples. This documentation provides practical, end-to-end examples covering diverse Codex CLI provider scenarios:
- Basic operations (task creation, PRD parsing, task expansion)
- Configuration patterns (custom settings, command-specific overrides)
- Workflow demonstrations (feature planning, daily development)
- Troubleshooting common issues
- CI/CD integration with GitHub Actions
The examples are well-organized, clear, and provide valuable guidance for users implementing the Codex CLI provider.
docs/providers/codex-cli.md (1)
1-510: LGTM! Comprehensive Codex CLI provider documentation. This documentation thoroughly covers the Codex CLI provider integration:
- Clear explanation of why to use Codex CLI (OAuth access to GPT-5 models)
- Step-by-step quickstart guide
- Detailed authentication methods (OAuth-first, optional API key)
- Complete settings reference with all configuration options
- Command-specific overrides
- Codebase analysis capabilities
- Comprehensive troubleshooting section
The token limits (272K input / 128K output = 400K total) have been verified as accurate per previous review discussions.
Note: Static analysis detects MD031 violations (missing blank lines around fenced code blocks) at multiple locations. These are minor formatting issues that don't impact the documentation's quality or usability.
tests/unit/ai-providers/codex-cli.test.js (1)
1-92: LGTM! Well-structured unit tests for CodexCliProvider. The test implementation follows best practices:

- Proper mock setup using `jest.unstable_mockModule` before imports (correct pattern per coding guidelines)
- Comprehensive mocking of external dependencies (ai, ai-sdk-provider-codex-cli, config-manager, base-provider)
- Clear test cases covering:
- Provider identification (name and supported models)
- API key requirement (correctly returns false for OAuth-first provider)
- Client creation with command-specific settings
- Conditional API key injection behavior
The tests are focused, maintainable, and provide good coverage of the provider's core functionality.
Based on learnings
scripts/modules/supported-models.json (1)
72-95: LGTM! Correct Codex CLI model configuration. The codex-cli models are properly configured:
- Models: gpt-5 and gpt-5-codex with consistent settings
- SWE Score: 0.749 (appropriate for GPT-5 capabilities)
- Pricing: Zero cost (correct for OAuth subscription models without API billing)
- Allowed Roles: Includes "research" role (intentionally added per commit aa5b736, as both models support web search tools)
- max_tokens: 128000 (verified as correct max output token limit)
The configuration aligns with the PR objectives and has been validated through previous review discussions.
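Based on the figures quoted in this comment, one codex-cli entry in supported-models.json would carry roughly the following fields. This is a sketch reconstructed from the review's numbers, not the file's exact contents, and the cost-object shape is an assumption:

```javascript
// Sketch of a codex-cli model entry using the values quoted in the review.
const gpt5Codex = {
	id: 'gpt-5-codex',
	swe_score: 0.749,
	// Zero cost: OAuth subscription models are not billed per API token.
	cost_per_1m_tokens: { input: 0, output: 0 },
	// "research" is included because both models support web search tools.
	allowed_roles: ['main', 'fallback', 'research'],
	max_tokens: 128000 // max output tokens
};
```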
scripts/modules/commands.js (3)
3589-3592: LGTM! Option definition follows established patterns. The `--codex-cli` option is correctly defined using kebab-case naming and follows the same structure as other custom provider options (e.g., `--gemini-cli`, `--claude-code`). The help text is clear and consistent.
3625-3631: LGTM! Provider flag validation correctly updated. The new `options.codexCli` flag is properly added to the provider flags array and included in the error message. The validation ensures users cannot specify multiple provider flags simultaneously, which maintains correct behavior.
3608-3608: LGTM! Example usage is clear and helpful. The example demonstrates correct usage of the `--codex-cli` flag with the `--set-main` option, following the same pattern as other provider examples. The comment clearly indicates its purpose.
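The mutual-exclusivity rule for provider flags reduces to counting truthy options. The sketch below is a standalone illustration of the pattern, not the commands.js source; the exact set of flag names in the array (beyond `codexCli`, `claudeCode`, and `geminiCli`, which the review mentions) is an assumption:

```javascript
// Sketch of the "at most one provider flag" validation discussed above.
function assertSingleProviderFlag(options) {
	const providerFlags = [
		options.claudeCode,
		options.geminiCli,
		options.codexCli
	].filter(Boolean);
	if (providerFlags.length > 1) {
		throw new Error(
			'Specify at most one provider flag (e.g. --codex-cli, --claude-code, --gemini-cli).'
		);
	}
}
```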
```js
// Codex CLI supports OAuth via codex login; API key optional
if (providerName?.toLowerCase() === 'codex-cli') {
	return true; // Treat as OK even without key
}
```
There was a problem hiding this comment.
Remove redundant check.
The specific check for codex-cli at lines 811-814 is redundant because CODEX_CLI is already included in the providersWithoutApiKeys array at lines 798-800. The function returns true at line 803 before reaching this code block.
Apply this diff to remove the redundant check:

```diff
- // Codex CLI supports OAuth via codex login; API key optional
- if (providerName?.toLowerCase() === 'codex-cli') {
-   return true; // Treat as OK even without key
- }
-
```

🤖 Prompt for AI Agents
In scripts/modules/config-manager.js around lines 811-814, remove the redundant
providerName === 'codex-cli' branch (the return true block) since CODEX_CLI is
already included in providersWithoutApiKeys and the function returns true
earlier; simply delete those lines and run tests to ensure no other logic
depended on that branch.
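The redundancy is easy to see in a reduced model of the function: once the provider appears in `providersWithoutApiKeys`, the early return fires and the later codex-cli branch can never run. This is a minimal sketch of the control flow, not the real config-manager logic:

```javascript
// Reduced model of the key-status control flow discussed above.
const providersWithoutApiKeys = ['claude-code', 'gemini-cli', 'codex-cli'];

function isApiKeyOkSketch(providerName, hasKey) {
	if (providersWithoutApiKeys.includes(providerName?.toLowerCase())) {
		return true; // Early return: OAuth-first providers never need a key.
	}
	// The redundant branch from the review would sit here; it is unreachable
	// for 'codex-cli' because the array check above has already returned.
	return hasKey;
}
```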
```js
	this.supportsTemperature = false;
}
```
There was a problem hiding this comment.
Remove temperature when provider doesn’t support it.
You set supportsTemperature = false, but generateText, streamText, and the manual generateObject retry still pass temperature unconditionally. That keeps emitting the unsupported-parameter warning we’re trying to kill. Gate the temperature field with the same conditional spread used in BaseAIProvider (only include it when supportsTemperature is true).
```diff
- const result = await generateText({
+ const result = await generateText({
    model: client(params.modelId),
    system: systemPrompt,
    messages: messages,
-   maxOutputTokens: params.maxTokens,
-   temperature: params.temperature
+   maxOutputTokens: params.maxTokens,
+   ...(this.supportsTemperature && params.temperature !== undefined
+     ? { temperature: params.temperature }
+     : {})
  });
```

Apply the same pattern to the stream and manual object-generation calls.
Also applies to: 430-434, 533-538, 588-597
🤖 Prompt for AI Agents
In src/ai-providers/gemini-cli.js around lines 21-22 (and also update the call
sites at 430-434, 533-538, and 588-597), you're setting supportsTemperature =
false but still passing temperature unconditionally to generateText, streamText,
and the manual generateObject retry; remove the unsupported-parameter warnings
by gating the temperature field the same way BaseAIProvider does — only include
temperature in the options/spread when this.supportsTemperature is true (use a
conditional spread or conditional property inclusion) for all three call sites
so temperature is omitted when false.
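The gated spread recommended above can be exercised in isolation. `buildParams` is a toy stand-in for assembling the argument object handed to generateText, not code from the provider:

```javascript
// Demonstrates the conditional spread that omits temperature entirely
// (rather than passing undefined) when the provider does not support it.
function buildParams({ supportsTemperature, temperature, maxTokens }) {
	return {
		maxOutputTokens: maxTokens,
		...(supportsTemperature && temperature !== undefined
			? { temperature }
			: {})
	};
}
```

Omitting the key, instead of setting it to `undefined`, is what silences the unsupported-parameter warning: the SDK never sees a `temperature` property at all.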
There was a problem hiding this comment.
Actionable comments posted: 1
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (1)
packages/tm-core/src/repositories/supabase-task-repository.ts (1 hunks)
🧰 Additional context used
🧠 Learnings (1)
📓 Common learnings
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to src/ai-providers/*.js : Create a new provider module file in src/ai-providers/ named <provider-name>.js when adding a new AI provider.
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to tests/unit/ai-providers/*.test.js : Create unit tests for the new provider in tests/unit/ai-providers/<provider-name>.test.js, mocking ai-sdk/<provider-name> and core ai module functions, and testing all exported functions for correct behavior and error handling.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Test
There was a problem hiding this comment.
Actionable comments posted: 4
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (2)
tests/unit/ai-services-unified.test.js (4 hunks)
tests/unit/providers/base-provider.test.js (1 hunks)
🧰 Additional context used
📓 Path-based instructions (9)
tests/{unit,integration,e2e,fixtures}/**/*.js
📄 CodeRabbit inference engine (.cursor/rules/architecture.mdc)
Test files must be organized as follows: unit tests in tests/unit/, integration tests in tests/integration/, end-to-end tests in tests/e2e/, and test fixtures in tests/fixtures/.
Files:
tests/unit/ai-services-unified.test.js
tests/unit/providers/base-provider.test.js
tests/unit/*.js
📄 CodeRabbit inference engine (.cursor/rules/architecture.mdc)
Each module should have a corresponding unit test file in tests/unit/ that reflects the module structure (one test file per module).
Files:
tests/unit/ai-services-unified.test.js
**/*.{test,spec}.{js,ts,jsx,tsx}
📄 CodeRabbit inference engine (.cursor/rules/git_workflow.mdc)
**/*.{test,spec}.{js,ts,jsx,tsx}: Create a test file and ensure all tests pass when all subtasks are complete; commit tests if added or modified
When all subtasks are complete, run final testing using the appropriate test runner (e.g., npm test, jest, or manual testing)
Files:
tests/unit/ai-services-unified.test.js
tests/unit/providers/base-provider.test.js
**/*.test.js
📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)
**/*.test.js: Never use asynchronous operations in tests. Make all mocks return synchronous values when possible.
Always mock tests properly based on the way the tested functions are defined and used.
Follow the test file organization: mocks must be set up before importing modules under test, and spies on mocked modules should be set up after imports.
Use fixtures from tests/fixtures/ for consistent sample data across tests.
Always declare mocks before importing the modules being tested in Jest test files.
Use jest.spyOn() after imports to create spies on mock functions and reference these spies in test assertions.
When testing functions with callbacks, get the callback from your mock's call arguments, execute it directly with test inputs, and verify the results.
For ES modules, use jest.mock() before static imports and jest.unstable_mockModule() before dynamic imports to mock dependencies.
Reset mock functions (mockFn.mockReset()) before dynamic imports if they might have been called previously.
When verifying console assertions, assert against the actual arguments passed (single formatted string), not multiple arguments.
Use mock-fs to mock file system operations in tests, and restore the file system after each test.
Mock API calls (e.g., Anthropic/Claude) by mocking the entire module and providing predictable responses.
Set mock environment variables in test setup and restore them after each test.
Maintain test fixtures separate from test logic.
Follow the mock-first-then-import pattern for all Jest mocks.
Do not define mock variables before jest.mock() calls (they won't be accessible due to hoisting).
Use test-specific file paths (e.g., 'test-tasks.json') for all file operations in tests.
Mock readJSON and writeJSON to avoid real file system interactions in tests.
Verify file operations use the correct paths in expect statements.
Use different file paths for each test to avoid test interdependence.
Verify modifications on the in-memory task objects passed to w...
Files:
tests/unit/ai-services-unified.test.js
tests/unit/providers/base-provider.test.js
tests/unit/**/*.test.js
📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)
tests/unit/**/*.test.js: Unit tests must be located in tests/unit/, test individual functions and utilities in isolation, mock all external dependencies, and keep tests small, focused, and fast.
Do not include actual command execution in unit tests.
Files:
tests/unit/ai-services-unified.test.js
tests/unit/providers/base-provider.test.js
tests/{unit,integration,e2e}/**/*.test.js
📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)
tests/{unit,integration,e2e}/**/*.test.js: When testing CLI commands built with Commander.js, test the command action handlers directly rather than trying to mock the entire Commander.js chain.
When mocking the Commander.js chain, mock ALL chainable methods (option, argument, action, on, etc.) and return this (or the mock object) from all chainable method mocks.
Explicitly handle all options, including defaults and shorthand flags (e.g., -p for --prompt), and include null/undefined checks in test implementations for parameters that might be optional.
Do not try to use the real action implementation without proper mocking, and do not mock Commander partially—either mock it completely or test the action directly.
Mock the action handlers for CLI commands and verify they're called with correct arguments.
Use sample task fixtures for consistent test data, mock file system operations, and test both success and error paths for task operations.
Mock console output and verify correct formatting in UI function tests. Use flexible assertions like toContain() or toMatch() for formatted output.
Mock chalk functions to return the input text to make testing easier while still verifying correct function calls.
Files:
tests/unit/ai-services-unified.test.js
tests/unit/providers/base-provider.test.js
**/*.js
📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)
**/*.js: Declare and initialize global variables at the top of modules to avoid hoisting issues.
Use proper function declarations to avoid hoisting issues and initialize variables before they are referenced.
Do not reference variables before their declaration in module scope.
Use dynamic imports (import()) to avoid initialization order issues in modules.
Files:
tests/unit/ai-services-unified.test.js
tests/unit/providers/base-provider.test.js
**/*.{test,spec}.*
📄 CodeRabbit inference engine (.cursor/rules/test_workflow.mdc)
Test files should follow naming conventions: .test., .spec., or _test. depending on the language
Files:
tests/unit/ai-services-unified.test.js
tests/unit/providers/base-provider.test.js
tests/{unit,integration,e2e}/**
📄 CodeRabbit inference engine (.cursor/rules/test_workflow.mdc)
Organize test directories by test type (unit, integration, e2e) and mirror source structure where possible
Files:
tests/unit/ai-services-unified.test.js
tests/unit/providers/base-provider.test.js
🧠 Learnings (6)
📓 Common learnings
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to tests/unit/ai-providers/*.test.js : Create unit tests for the new provider in tests/unit/ai-providers/<provider-name>.test.js, mocking ai-sdk/<provider-name> and core ai module functions, and testing all exported functions for correct behavior and error handling.
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to src/ai-providers/*.js : Create a new provider module file in src/ai-providers/ named <provider-name>.js when adding a new AI provider.
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to src/ai-providers/*.js : Provider modules must import the provider's create<ProviderName> function from ai-sdk/<provider-name>, and import generateText, streamText, generateObject from the core ai package, as well as the log utility from ../../scripts/modules/utils.js.
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/architecture.mdc:0-0
Timestamp: 2025-07-18T17:07:39.336Z
Learning: Applies to src/ai-providers/*.js : Provider-specific wrappers for Vercel AI SDK functions must be implemented in src/ai-providers/*.js, each file corresponding to a provider.
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to tests/unit/ai-providers/*.test.js : Create unit tests for the new provider in tests/unit/ai-providers/<provider-name>.test.js, mocking ai-sdk/<provider-name> and core ai module functions, and testing all exported functions for correct behavior and error handling.
Applied to files:
tests/unit/ai-services-unified.test.js
tests/unit/providers/base-provider.test.js
📚 Learning: 2025-07-18T17:16:13.793Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tests.mdc:0-0
Timestamp: 2025-07-18T17:16:13.793Z
Learning: Applies to **/*.test.js : Do not import real AI service clients in tests; create fully mocked versions that return predictable responses.
Applied to files:
tests/unit/ai-services-unified.test.js
tests/unit/providers/base-provider.test.js
📚 Learning: 2025-07-18T17:16:13.793Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tests.mdc:0-0
Timestamp: 2025-07-18T17:16:13.793Z
Learning: Applies to **/*.test.js : Mock API calls (e.g., Anthropic/Claude) by mocking the entire module and providing predictable responses.
Applied to files:
tests/unit/ai-services-unified.test.js
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to src/ai-providers/*.js : Provider modules must import the provider's create<ProviderName> function from ai-sdk/<provider-name>, and import generateText, streamText, generateObject from the core ai package, as well as the log utility from ../../scripts/modules/utils.js.
Applied to files:
tests/unit/ai-services-unified.test.js
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to src/ai-providers/*.js : Implement generate<ProviderName>Text, stream<ProviderName>Text, and generate<ProviderName>Object functions in provider modules with basic validation and try/catch error handling.
Applied to files:
tests/unit/ai-services-unified.test.js
tests/unit/providers/base-provider.test.js
🧬 Code graph analysis (2)
tests/unit/ai-services-unified.test.js (2)
tests/unit/config-manager.test.mjs (1)
mockResolveEnvVariable (44-44)
scripts/modules/ai-services-unified.js (1)
result(430-430)
tests/unit/providers/base-provider.test.js (6)
src/ai-providers/base-provider.js (1)
BaseAIProvider (16-377)
src/ai-providers/claude-code.js (1)
ClaudeCodeProvider (30-129)
src/ai-providers/codex-cli.js (1)
CodexCliProvider (15-106)
src/ai-providers/gemini-cli.js (1)
GeminiCliProvider (14-651)
src/ai-providers/anthropic.js (1)
AnthropicAIProvider (20-61)
src/ai-providers/openai.js (1)
OpenAIProvider (9-47)
🪛 GitHub Actions: CI
tests/unit/providers/base-provider.test.js
[error] 1-1: BIOME format-check failed. Formatting issues detected in the codebase. Run 'npm run format-check' locally or 'biome format .' to fix.
🔇 Additional comments (5)
tests/unit/providers/base-provider.test.js (5)
489-498: Address the NaN temperature validation gap. The test explicitly notes that NaN temperature passes validation when it should fail (see comment "This is actually a bug"). While the test correctly documents current behavior, this validation gap should be addressed.
Should this be tracked as a follow-up issue? The current validation only checks `temperature < 0 || temperature > 1`, which allows NaN to pass since NaN comparisons return false. Consider updating the validation in `src/ai-providers/base-provider.js` to explicitly reject NaN:

```diff
 validateOptionalParams(params) {
 	if (
 		params.temperature !== undefined &&
-		(params.temperature < 0 || params.temperature > 1)
+		(isNaN(params.temperature) || params.temperature < 0 || params.temperature > 1)
 	) {
 		throw new Error('Temperature must be between 0 and 1');
 	}
```
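The behavior behind the gap is easy to confirm in isolation: every ordered comparison involving NaN evaluates to false, so a range-only check can never reject it. A minimal standalone sketch (hypothetical helper, not the actual `base-provider.js` code) shows both the bug and the guarded variant along the lines the review suggests:

```javascript
// Why a bare range check misses NaN: NaN < 0 and NaN > 1 are both false.
const t = Number('not-a-number'); // NaN
console.log(t < 0 || t > 1); // false, so a range-only check lets NaN through

// Guarded variant: explicitly reject NaN before the range check.
function validateTemperature(temperature) {
	if (
		temperature !== undefined &&
		(Number.isNaN(temperature) || temperature < 0 || temperature > 1)
	) {
		throw new Error('Temperature must be between 0 and 1');
	}
}

validateTemperature(0.5); // passes
try {
	validateTemperature(NaN);
} catch (err) {
	console.log(err.message); // Temperature must be between 0 and 1
}
```

Note that `Number.isNaN` is used rather than the global `isNaN`, since the global version coerces non-numeric strings to NaN first and would reject values the looser check accepts.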
1-76: LGTM: Mock setup and test structure. The mock setup correctly uses `jest.unstable_mockModule` before imports (ES module pattern), and the `TestProvider` subclass is an appropriate way to test the abstract `BaseAIProvider` class.
78-237: LGTM: Comprehensive parameter validation coverage. The validation tests thoroughly cover authentication, required parameters, optional parameters, messages structure, and token parameter preparation, including boundary conditions and type coercion.
239-381: LGTM: JSON repair and usage normalization tests. The tests properly validate the JSON repair workflow for malformed responses, error propagation when repair fails, and graceful handling of both legacy (`promptTokens`/`completionTokens`) and current (`inputTokens`/`outputTokens`) token formats.
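A minimal sketch of the dual-format handling those tests exercise, accepting either field naming and deriving a total (field names come from the review text; the actual normalizer in `base-provider.js` may differ):

```javascript
// Normalize usage objects that may use legacy (promptTokens/completionTokens)
// or current (inputTokens/outputTokens) field names.
function normalizeUsage(usage = {}) {
	const inputTokens = usage.inputTokens ?? usage.promptTokens ?? 0;
	const outputTokens = usage.outputTokens ?? usage.completionTokens ?? 0;
	return {
		inputTokens,
		outputTokens,
		totalTokens: usage.totalTokens ?? inputTokens + outputTokens
	};
}

console.log(normalizeUsage({ promptTokens: 10, completionTokens: 5 }));
// { inputTokens: 10, outputTokens: 5, totalTokens: 15 }
console.log(normalizeUsage({ inputTokens: 7, outputTokens: 3, totalTokens: 10 }));
// { inputTokens: 7, outputTokens: 3, totalTokens: 10 }
console.log(normalizeUsage(undefined));
// { inputTokens: 0, outputTokens: 0, totalTokens: 0 }
```

The nullish coalescing chain keeps zero counts intact (unlike `||`), which matters when a provider legitimately reports zero output tokens.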
567-620: LGTM: Temperature filtering logic tests. The tests correctly verify that temperature is conditionally included in API calls based on the `supportsTemperature` flag, covering all three scenarios: supported (included), unsupported (excluded), and undefined (excluded even if supported).
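The three scenarios reduce to one conditional spread. This is a simplified stand-in for how a provider base class might build its call parameters, not the actual `BaseAIProvider` implementation:

```javascript
// Include temperature only when the provider supports it AND a value was supplied.
function buildCallParams(provider, { modelId, temperature, messages }) {
	return {
		model: modelId,
		messages,
		...(provider.supportsTemperature && temperature !== undefined
			? { temperature }
			: {})
	};
}

const messages = [{ role: 'user', content: 'hi' }];
const supported = buildCallParams(
	{ supportsTemperature: true },
	{ modelId: 'gpt-5-codex', temperature: 0.7, messages }
);
const cliProvider = buildCallParams(
	{ supportsTemperature: false },
	{ modelId: 'gpt-5-codex', temperature: 0.7, messages }
);
const noValue = buildCallParams(
	{ supportsTemperature: true },
	{ modelId: 'gpt-5-codex', messages }
);

console.log('temperature' in supported); // true
console.log('temperature' in cliProvider); // false
console.log('temperature' in noValue); // false
```

Omitting the key entirely (rather than passing `temperature: undefined`) is the safer shape for CLI-backed providers, which may reject unknown or unsupported sampling parameters outright.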
Actionable comments posted: 1
♻️ Duplicate comments (1)
tests/unit/ai-providers/base-provider.test.js (1)
606-621: Remove the post-import `jest.unstable_mockModule` call. This nested remock still runs after `BaseAIProvider` has already imported the `ai` module, so Jest throws and these tests never execute. The top-level mock wiring is sufficient; drop the redundant mocks and rely on `jest.clearAllMocks()`.

```diff
-	const mockStreamText = jest.fn();
-	const mockStreamObject = jest.fn();
-
-	beforeEach(() => {
-		jest.unstable_mockModule('ai', () => ({
-			generateText: mockGenerateText,
-			streamText: mockStreamText,
-			generateObject: mockGenerateObject,
-			streamObject: mockStreamObject,
-			zodSchema: jest.fn((schema) => schema),
-			NoObjectGeneratedError: mockNoObjectGeneratedError,
-			JSONParseError: mockJSONParseError
-		}));
-		mockStreamText.mockReset();
-		mockStreamObject.mockReset();
-	});
+	const mockStreamText = jest.fn();
+	const mockStreamObject = jest.fn();
+
+	beforeEach(() => {
+		mockStreamText.mockReset();
+		mockStreamObject.mockReset();
+	});
```

Based on learnings
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (2)
tests/integration/providers/temperature-support.test.js (1 hunks)
tests/unit/ai-providers/base-provider.test.js (1 hunks)
🧰 Additional context used
📓 Path-based instructions (11)
tests/unit/ai-providers/*.test.js
📄 CodeRabbit inference engine (.cursor/rules/ai_providers.mdc)
Create unit tests for the new provider in tests/unit/ai-providers/<provider-name>.test.js, mocking @ai-sdk/<provider-name> and core ai module functions, and testing all exported functions for correct behavior and error handling.
Files:
tests/unit/ai-providers/base-provider.test.js
tests/{unit,integration,e2e,fixtures}/**/*.js
📄 CodeRabbit inference engine (.cursor/rules/architecture.mdc)
Test files must be organized as follows: unit tests in tests/unit/, integration tests in tests/integration/, end-to-end tests in tests/e2e/, and test fixtures in tests/fixtures/.
Files:
tests/unit/ai-providers/base-provider.test.js
tests/integration/providers/temperature-support.test.js
**/*.{test,spec}.{js,ts,jsx,tsx}
📄 CodeRabbit inference engine (.cursor/rules/git_workflow.mdc)
**/*.{test,spec}.{js,ts,jsx,tsx}: Create a test file and ensure all tests pass when all subtasks are complete; commit tests if added or modified
When all subtasks are complete, run final testing using the appropriate test runner (e.g., npm test, jest, or manual testing)
Files:
tests/unit/ai-providers/base-provider.test.js
tests/integration/providers/temperature-support.test.js
**/*.test.js
📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)
**/*.test.js: Never use asynchronous operations in tests. Make all mocks return synchronous values when possible.
Always mock tests properly based on the way the tested functions are defined and used.
Follow the test file organization: mocks must be set up before importing modules under test, and spies on mocked modules should be set up after imports.
Use fixtures from tests/fixtures/ for consistent sample data across tests.
Always declare mocks before importing the modules being tested in Jest test files.
Use jest.spyOn() after imports to create spies on mock functions and reference these spies in test assertions.
When testing functions with callbacks, get the callback from your mock's call arguments, execute it directly with test inputs, and verify the results.
For ES modules, use jest.mock() before static imports and jest.unstable_mockModule() before dynamic imports to mock dependencies.
Reset mock functions (mockFn.mockReset()) before dynamic imports if they might have been called previously.
When verifying console assertions, assert against the actual arguments passed (single formatted string), not multiple arguments.
Use mock-fs to mock file system operations in tests, and restore the file system after each test.
Mock API calls (e.g., Anthropic/Claude) by mocking the entire module and providing predictable responses.
Set mock environment variables in test setup and restore them after each test.
Maintain test fixtures separate from test logic.
Follow the mock-first-then-import pattern for all Jest mocks.
Do not define mock variables before jest.mock() calls (they won't be accessible due to hoisting).
Use test-specific file paths (e.g., 'test-tasks.json') for all file operations in tests.
Mock readJSON and writeJSON to avoid real file system interactions in tests.
Verify file operations use the correct paths in expect statements.
Use different file paths for each test to avoid test interdependence.
Verify modifications on the in-memory task objects passed to w...
Files:
tests/unit/ai-providers/base-provider.test.js
tests/integration/providers/temperature-support.test.js
tests/unit/**/*.test.js
📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)
tests/unit/**/*.test.js: Unit tests must be located in tests/unit/, test individual functions and utilities in isolation, mock all external dependencies, and keep tests small, focused, and fast.
Do not include actual command execution in unit tests.
Files:
tests/unit/ai-providers/base-provider.test.js
tests/{unit,integration,e2e}/**/*.test.js
📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)
tests/{unit,integration,e2e}/**/*.test.js: When testing CLI commands built with Commander.js, test the command action handlers directly rather than trying to mock the entire Commander.js chain.
When mocking the Commander.js chain, mock ALL chainable methods (option, argument, action, on, etc.) and return this (or the mock object) from all chainable method mocks.
Explicitly handle all options, including defaults and shorthand flags (e.g., -p for --prompt), and include null/undefined checks in test implementations for parameters that might be optional.
Do not try to use the real action implementation without proper mocking, and do not mock Commander partially—either mock it completely or test the action directly.
Mock the action handlers for CLI commands and verify they're called with correct arguments.
Use sample task fixtures for consistent test data, mock file system operations, and test both success and error paths for task operations.
Mock console output and verify correct formatting in UI function tests. Use flexible assertions like toContain() or toMatch() for formatted output.
Mock chalk functions to return the input text to make testing easier while still verifying correct function calls.
Files:
tests/unit/ai-providers/base-provider.test.js
tests/integration/providers/temperature-support.test.js
**/*.js
📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)
**/*.js: Declare and initialize global variables at the top of modules to avoid hoisting issues.
Use proper function declarations to avoid hoisting issues and initialize variables before they are referenced.
Do not reference variables before their declaration in module scope.
Use dynamic imports (import()) to avoid initialization order issues in modules.
Files:
tests/unit/ai-providers/base-provider.test.jstests/integration/providers/temperature-support.test.js
**/*.{test,spec}.*
📄 CodeRabbit inference engine (.cursor/rules/test_workflow.mdc)
Test files should follow naming conventions: .test., .spec., or _test. depending on the language
Files:
tests/unit/ai-providers/base-provider.test.js
tests/integration/providers/temperature-support.test.js
tests/{unit,integration,e2e}/**
📄 CodeRabbit inference engine (.cursor/rules/test_workflow.mdc)
Organize test directories by test type (unit, integration, e2e) and mirror source structure where possible
Files:
tests/unit/ai-providers/base-provider.test.js
tests/integration/providers/temperature-support.test.js
tests/integration/**/*.test.js
📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)
Integration tests must be located in tests/integration/, test interactions between modules, and focus on component interfaces rather than implementation details.
Files:
tests/integration/providers/temperature-support.test.js
tests/{integration,e2e}/**/*.test.js
📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)
Properly mock session objects when required by functions, and reset environment variables between tests if modified.
Files:
tests/integration/providers/temperature-support.test.js
🧠 Learnings (3)
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to tests/unit/ai-providers/*.test.js : Create unit tests for the new provider in tests/unit/ai-providers/<provider-name>.test.js, mocking ai-sdk/<provider-name> and core ai module functions, and testing all exported functions for correct behavior and error handling.
Applied to files:
tests/unit/ai-providers/base-provider.test.js
tests/integration/providers/temperature-support.test.js
📚 Learning: 2025-07-18T17:16:13.793Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tests.mdc:0-0
Timestamp: 2025-07-18T17:16:13.793Z
Learning: Applies to **/*.test.js : Do not import real AI service clients in tests; create fully mocked versions that return predictable responses.
Applied to files:
tests/unit/ai-providers/base-provider.test.js
📚 Learning: 2025-07-18T17:07:39.336Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/architecture.mdc:0-0
Timestamp: 2025-07-18T17:07:39.336Z
Learning: Module dependencies should be mocked before importing the test module, following Jest's hoisting behavior.
Applied to files:
tests/unit/ai-providers/base-provider.test.js
🧬 Code graph analysis (2)
tests/unit/ai-providers/base-provider.test.js (1)
src/ai-providers/base-provider.js (1)
BaseAIProvider(16-377)
tests/integration/providers/temperature-support.test.js (8)
src/ai-providers/claude-code.js (1)
ClaudeCodeProvider (30-129)
src/ai-providers/codex-cli.js (1)
CodexCliProvider (15-106)
src/ai-providers/gemini-cli.js (1)
GeminiCliProvider (14-651)
src/ai-providers/openai.js (1)
OpenAIProvider (9-47)
src/ai-providers/ollama.js (1)
OllamaAIProvider (9-54)
src/ai-providers/bedrock.js (1)
BedrockAIProvider (5-46)
src/ai-providers/azure.js (1)
AzureProvider (9-60)
src/ai-providers/google-vertex.js (1)
VertexAIProvider (37-158)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Test
Actionable comments posted: 1
♻️ Duplicate comments (1)
tests/unit/ai-providers/base-provider.test.js (1)
606-612: Remove unused mock declarations. The `mockStreamText` and `mockStreamObject` variables declared at lines 606-607 are never used in any test within this describe block. The tests only use `mockGenerateText`, making these declarations and their resets unnecessary. Apply this diff to remove the unused code:

```diff
 describe('11. Temperature Filtering - CLI vs Standard Providers', () => {
-	const mockStreamText = jest.fn();
-	const mockStreamObject = jest.fn();
-
-	beforeEach(() => {
-		mockStreamText.mockReset();
-		mockStreamObject.mockReset();
-	});
-
 	it('should include temperature in generateText when supported', async () => {
```

Based on past review comments
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (3)
CLAUDE.md (1 hunks)
tests/integration/providers/temperature-support.test.js (1 hunks)
tests/unit/ai-providers/base-provider.test.js (1 hunks)
🧰 Additional context used
📓 Path-based instructions (11)
tests/{unit,integration,e2e,fixtures}/**/*.js
📄 CodeRabbit inference engine (.cursor/rules/architecture.mdc)
Test files must be organized as follows: unit tests in tests/unit/, integration tests in tests/integration/, end-to-end tests in tests/e2e/, and test fixtures in tests/fixtures/.
Files:
tests/integration/providers/temperature-support.test.js
tests/unit/ai-providers/base-provider.test.js
**/*.{test,spec}.{js,ts,jsx,tsx}
📄 CodeRabbit inference engine (.cursor/rules/git_workflow.mdc)
**/*.{test,spec}.{js,ts,jsx,tsx}: Create a test file and ensure all tests pass when all subtasks are complete; commit tests if added or modified
When all subtasks are complete, run final testing using the appropriate test runner (e.g., npm test, jest, or manual testing)
Files:
tests/integration/providers/temperature-support.test.js
tests/unit/ai-providers/base-provider.test.js
**/*.test.js
📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)
**/*.test.js: Never use asynchronous operations in tests. Make all mocks return synchronous values when possible.
Always mock tests properly based on the way the tested functions are defined and used.
Follow the test file organization: mocks must be set up before importing modules under test, and spies on mocked modules should be set up after imports.
Use fixtures from tests/fixtures/ for consistent sample data across tests.
Always declare mocks before importing the modules being tested in Jest test files.
Use jest.spyOn() after imports to create spies on mock functions and reference these spies in test assertions.
When testing functions with callbacks, get the callback from your mock's call arguments, execute it directly with test inputs, and verify the results.
For ES modules, use jest.mock() before static imports and jest.unstable_mockModule() before dynamic imports to mock dependencies.
Reset mock functions (mockFn.mockReset()) before dynamic imports if they might have been called previously.
When verifying console assertions, assert against the actual arguments passed (single formatted string), not multiple arguments.
Use mock-fs to mock file system operations in tests, and restore the file system after each test.
Mock API calls (e.g., Anthropic/Claude) by mocking the entire module and providing predictable responses.
Set mock environment variables in test setup and restore them after each test.
Maintain test fixtures separate from test logic.
Follow the mock-first-then-import pattern for all Jest mocks.
Do not define mock variables before jest.mock() calls (they won't be accessible due to hoisting).
Use test-specific file paths (e.g., 'test-tasks.json') for all file operations in tests.
Mock readJSON and writeJSON to avoid real file system interactions in tests.
Verify file operations use the correct paths in expect statements.
Use different file paths for each test to avoid test interdependence.
Verify modifications on the in-memory task objects passed to w...
Files:
tests/integration/providers/temperature-support.test.js
tests/unit/ai-providers/base-provider.test.js
tests/integration/**/*.test.js
📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)
Integration tests must be located in tests/integration/, test interactions between modules, and focus on component interfaces rather than implementation details.
Files:
tests/integration/providers/temperature-support.test.js
tests/{unit,integration,e2e}/**/*.test.js
📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)
tests/{unit,integration,e2e}/**/*.test.js: When testing CLI commands built with Commander.js, test the command action handlers directly rather than trying to mock the entire Commander.js chain.
When mocking the Commander.js chain, mock ALL chainable methods (option, argument, action, on, etc.) and return this (or the mock object) from all chainable method mocks.
Explicitly handle all options, including defaults and shorthand flags (e.g., -p for --prompt), and include null/undefined checks in test implementations for parameters that might be optional.
Do not try to use the real action implementation without proper mocking, and do not mock Commander partially—either mock it completely or test the action directly.
Mock the action handlers for CLI commands and verify they're called with correct arguments.
Use sample task fixtures for consistent test data, mock file system operations, and test both success and error paths for task operations.
Mock console output and verify correct formatting in UI function tests. Use flexible assertions like toContain() or toMatch() for formatted output.
Mock chalk functions to return the input text to make testing easier while still verifying correct function calls.
Files:
tests/integration/providers/temperature-support.test.js
tests/unit/ai-providers/base-provider.test.js
tests/{integration,e2e}/**/*.test.js
📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)
Properly mock session objects when required by functions, and reset environment variables between tests if modified.
Files:
tests/integration/providers/temperature-support.test.js
**/*.js
📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)
**/*.js: Declare and initialize global variables at the top of modules to avoid hoisting issues.
Use proper function declarations to avoid hoisting issues and initialize variables before they are referenced.
Do not reference variables before their declaration in module scope.
Use dynamic imports (import()) to avoid initialization order issues in modules.
Files:
tests/integration/providers/temperature-support.test.js
tests/unit/ai-providers/base-provider.test.js
**/*.{test,spec}.*
📄 CodeRabbit inference engine (.cursor/rules/test_workflow.mdc)
Test files should follow naming conventions: .test., .spec., or _test. depending on the language
Files:
tests/integration/providers/temperature-support.test.js
tests/unit/ai-providers/base-provider.test.js
tests/{unit,integration,e2e}/**
📄 CodeRabbit inference engine (.cursor/rules/test_workflow.mdc)
Organize test directories by test type (unit, integration, e2e) and mirror source structure where possible
Files:
tests/integration/providers/temperature-support.test.js
tests/unit/ai-providers/base-provider.test.js
tests/unit/ai-providers/*.test.js
📄 CodeRabbit inference engine (.cursor/rules/ai_providers.mdc)
Create unit tests for the new provider in tests/unit/ai-providers/<provider-name>.test.js, mocking @ai-sdk/<provider-name> and core ai module functions, and testing all exported functions for correct behavior and error handling.
Files:
tests/unit/ai-providers/base-provider.test.js
tests/unit/**/*.test.js
📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)
tests/unit/**/*.test.js: Unit tests must be located in tests/unit/, test individual functions and utilities in isolation, mock all external dependencies, and keep tests small, focused, and fast.
Do not include actual command execution in unit tests.
Files:
tests/unit/ai-providers/base-provider.test.js
🧠 Learnings (11)
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to tests/unit/ai-providers/*.test.js : Create unit tests for the new provider in tests/unit/ai-providers/<provider-name>.test.js, mocking ai-sdk/<provider-name> and core ai module functions, and testing all exported functions for correct behavior and error handling.
Applied to files:
tests/integration/providers/temperature-support.test.js
tests/unit/ai-providers/base-provider.test.js
📚 Learning: 2025-07-18T17:16:13.793Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tests.mdc:0-0
Timestamp: 2025-07-18T17:16:13.793Z
Learning: Applies to **/*.test.js : Do not import real AI service clients in tests; create fully mocked versions that return predictable responses.
Applied to files:
tests/integration/providers/temperature-support.test.js
tests/unit/ai-providers/base-provider.test.js
📚 Learning: 2025-07-18T17:16:13.793Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tests.mdc:0-0
Timestamp: 2025-07-18T17:16:13.793Z
Learning: Applies to **/*.test.js : Never use asynchronous operations in tests. Make all mocks return synchronous values when possible.
Applied to files:
CLAUDE.md
📚 Learning: 2025-07-18T17:10:53.657Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/glossary.mdc:0-0
Timestamp: 2025-07-18T17:10:53.657Z
Learning: Guidelines for implementing and maintaining tests for Task Master CLI (tests.mdc).
Applied to files:
CLAUDE.md
📚 Learning: 2025-08-11T12:30:23.843Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-11T12:30:23.843Z
Learning: Import Task Master's development workflow commands and guidelines; treat the contents of ./.taskmaster/CLAUDE.md as if included in the main CLAUDE.md
Applied to files:
CLAUDE.md
📚 Learning: 2025-07-18T17:16:13.793Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tests.mdc:0-0
Timestamp: 2025-07-18T17:16:13.793Z
Learning: Applies to tests/{unit,integration,e2e}/**/*.test.js : Use sample task fixtures for consistent test data, mock file system operations, and test both success and error paths for task operations.
Applied to files:
CLAUDE.md
📚 Learning: 2025-07-18T17:16:13.793Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tests.mdc:0-0
Timestamp: 2025-07-18T17:16:13.793Z
Learning: Applies to **/*.test.js : Follow the test file organization: mocks must be set up before importing modules under test, and spies on mocked modules should be set up after imports.
Applied to files:
CLAUDE.md
📚 Learning: 2025-07-18T17:07:39.336Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/architecture.mdc:0-0
Timestamp: 2025-07-18T17:07:39.336Z
Learning: Applies to tests/{unit,integration,e2e,fixtures}/**/*.js : Test files must be organized as follows: unit tests in tests/unit/, integration tests in tests/integration/, end-to-end tests in tests/e2e/, and test fixtures in tests/fixtures/.
Applied to files:
CLAUDE.md
📚 Learning: 2025-07-18T17:12:57.903Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/new_features.mdc:0-0
Timestamp: 2025-07-18T17:12:57.903Z
Learning: Applies to scripts/modules/**/*.test.js : Test core logic independently with both data formats, mock file system operations, test tag resolution behavior, and verify migration compatibility in unit tests.
Applied to files:
CLAUDE.md
📚 Learning: 2025-07-18T17:12:57.903Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/new_features.mdc:0-0
Timestamp: 2025-07-18T17:12:57.903Z
Learning: Applies to scripts/modules/**/*.test.js : Test CLI and MCP interfaces with real task data, verify end-to-end workflows across tag contexts, and test error scenarios and recovery in integration tests.
Applied to files:
CLAUDE.md
📚 Learning: 2025-07-18T17:07:39.336Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/architecture.mdc:0-0
Timestamp: 2025-07-18T17:07:39.336Z
Learning: Module dependencies should be mocked before importing the test module, following Jest's hoisting behavior.
Applied to files:
tests/unit/ai-providers/base-provider.test.js
🧬 Code graph analysis (2)
tests/integration/providers/temperature-support.test.js (3)
src/ai-providers/claude-code.js (1)
ClaudeCodeProvider (30-129)
src/ai-providers/codex-cli.js (1)
CodexCliProvider (15-106)
src/ai-providers/gemini-cli.js (1)
GeminiCliProvider (14-651)
tests/unit/ai-providers/base-provider.test.js (1)
src/ai-providers/base-provider.js (1)
BaseAIProvider(16-377)
🪛 markdownlint-cli2 (0.18.1)
CLAUDE.md
9-9: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
14-14: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Test
🔇 Additional comments (15)
tests/integration/providers/temperature-support.test.js (2)
13-27: LGTM! Past review concern addressed. All provider imports are now synchronous top-level imports, eliminating the need for async/await in test functions. This aligns with the coding guidelines.
29-62: LGTM! Test structure is correct and thorough. The test suite correctly verifies `supportsTemperature` behavior across all provider types:

- CLI providers (Claude Code, Codex CLI, Gemini CLI, Grok CLI) correctly expect `false`
- Standard API providers correctly expect `true`
- Special case providers (Ollama, cloud providers) correctly expect `true`

All test functions are synchronous as required by the guidelines.
CLAUDE.md (1)
7-28: LGTM! Clear and helpful test guidelines. The new "Test Guidelines" section provides clear, actionable guidance on synchronous testing with good examples. The caveat "unless testing actual asynchronous operations" appropriately allows for legitimate async test cases.
tests/unit/ai-providers/base-provider.test.js (12)
1-55: LGTM! Mock setup follows ES module best practices. The mock setup correctly uses `jest.unstable_mockModule` before the dynamic import, which is the proper pattern for ES modules. Custom error classes and mock implementations are well-structured.
57-81: LGTM! Test setup properly implements test isolation. The `beforeEach` hook correctly creates a fresh `TestProvider` instance and clears all mocks, ensuring test isolation. The concrete `TestProvider` implementation properly extends `BaseAIProvider` with required abstract methods.
83-201: LGTM! Comprehensive parameter validation coverage. These tests thoroughly cover validation edge cases, including boundary conditions, missing/invalid parameters, and proper error messages. Good coverage of both positive and negative paths.
203-233: LGTM! Error handling tests cover important edge cases. The tests properly verify error wrapping with context and graceful handling of errors without a `message` property, ensuring robust error reporting.
235-257: LGTM! Abstract class protection properly enforced. These tests correctly verify that `BaseAIProvider` cannot be instantiated directly and that abstract methods throw appropriate errors when not implemented by subclasses.
259-279: LGTM! Token parameter preparation thoroughly tested. The tests cover important edge cases including string-to-number conversion, decimal flooring, and undefined handling, ensuring robust token parameter processing.
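The conversions described, string coercion and flooring of decimals, can be reproduced with a small helper (hypothetical name; the real method lives in `base-provider.js` and may differ in shape):

```javascript
// Coerce maxTokens to an integer: accept numeric strings, floor decimals,
// and return an empty object when no value was supplied.
function prepareTokenParam(maxTokens) {
	if (maxTokens === undefined) return {};
	return { maxTokens: Math.floor(Number(maxTokens)) };
}

console.log(prepareTokenParam('1000')); // { maxTokens: 1000 }
console.log(prepareTokenParam(1500.9)); // { maxTokens: 1500 }
console.log(prepareTokenParam(undefined)); // {}
```

Returning an object fragment rather than a raw number lets callers spread the result into call params, so an absent value simply contributes no key.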
281-355: LGTM! JSON repair tests cover all critical paths. These tests thoroughly verify the JSON repair functionality, including successful repair, repair failure, and non-JSON error handling. Proper verification that original errors are propagated when repair is not applicable.
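The parse-then-repair-then-rethrow flow those tests cover can be sketched as below. This is a simplified stand-in (only trailing commas are repaired here); the real code may delegate to a dedicated repair library:

```javascript
// Try strict parsing first; on failure, attempt a common repair before
// giving up and rethrowing the ORIGINAL error, as the tests expect.
function parseWithRepair(text) {
	try {
		return JSON.parse(text);
	} catch (originalError) {
		// Common LLM-output defect: trailing commas before } or ]
		const repaired = text.replace(/,\s*([}\]])/g, '$1');
		try {
			return JSON.parse(repaired);
		} catch {
			throw originalError; // propagate the original failure
		}
	}
}

console.log(parseWithRepair('{"a": 1,}')); // { a: 1 }
try {
	parseWithRepair('definitely not json');
} catch (err) {
	console.log(err instanceof SyntaxError); // true: original error surfaced
}
```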
357-429: LGTM! Usage token normalization ensures backward compatibility. These tests properly verify support for both legacy and modern token usage formats, graceful handling of missing data, and correct `totalTokens` calculation. This ensures robust compatibility with different API response formats.
431-488: LGTM! Schema validation tests ensure proper object generation requirements. The tests correctly verify that schemas and object names are required for object generation/streaming, and that the correct mode is used when `needsExplicitJsonSchema` is true.
490-529: LGTM! Integration tests verify correct parameter flow. These tests properly verify that parameters are passed correctly to `getClient` and that the client is invoked with the correct model ID, ensuring proper integration between components.
531-603: LGTM! Edge cases and default behavior thoroughly tested.

These tests provide excellent coverage of boundary conditions (zero/large maxTokens, NaN temperature, concurrent calls) and default behavior for `isRequiredApiKey`. The test at lines 544-554 appropriately documents current behavior regarding NaN temperature validation.
614-667: LGTM! Temperature filtering tests properly verify the new capability.

These tests correctly verify that temperature is conditionally included in API calls based on the `supportsTemperature` flag and that it is excluded when undefined, regardless of support. This aligns with the PR's goal of adding temperature support control to the BaseAIProvider.
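The filtering behavior these tests exercise can be sketched roughly as follows. `supportsTemperature` is the flag introduced in this PR; the helper function and its shape are illustrative assumptions, not the actual `BaseAIProvider` code:

```javascript
// Simplified stand-in for the provider's parameter assembly: temperature is
// forwarded only when the provider supports it AND the caller supplied one.
function prepareCallParams(provider, { modelId, temperature, ...rest }) {
	const params = { model: modelId, ...rest };
	if (provider.supportsTemperature && temperature !== undefined) {
		params.temperature = temperature;
	}
	return params;
}

// A CLI-backed provider (e.g. codex-cli) opts out of temperature entirely.
const cliParams = prepareCallParams(
	{ supportsTemperature: false },
	{ modelId: 'gpt-5-codex', temperature: 0.7 }
);

// An API-backed provider passes temperature through when one is given.
const apiParams = prepareCallParams(
	{ supportsTemperature: true },
	{ modelId: 'gpt-5-codex', temperature: 0.7 }
);
```

With this shape, `cliParams` carries no `temperature` key at all, while `apiParams` does, matching what the tests above assert.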
### Synchronous Tests

- **NEVER use async/await in test functions** unless testing actual asynchronous operations
- Use synchronous top-level imports instead of dynamic `await import()`
- Test bodies should be synchronous whenever possible
- Example:

```javascript
🧹 Nitpick | 🔵 Trivial
Minor formatting improvements.
Static analysis suggests adding blank lines for better readability:
- Add a blank line before the "### Synchronous Tests" heading (line 9)
- Add blank lines before and after the code fence (line 14)
Apply this diff:
## Test Guidelines
+
### Synchronous Tests
- **NEVER use async/await in test functions** unless testing actual asynchronous operations
- Use synchronous top-level imports instead of dynamic `await import()`
- Test bodies should be synchronous whenever possible
+
- Example:
```javascript

🧰 Tools
🪛 markdownlint-cli2 (0.18.1)
9-9: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
14-14: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
🤖 Prompt for AI Agents
In CLAUDE.md around lines 9 to 14, add blank lines for readability: insert one
blank line immediately before the "### Synchronous Tests" heading, and insert a
blank line before the opening code fence and another blank line after the
closing code fence so the heading and fenced code block are separated from
surrounding text as requested.
This PR was automatically generated to update documentation based on recent changes. Original commit: feat: Add Codex CLI provider with OAuth authentication (#1273)

Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
Co-authored-by: Claude <claude-assistant@anthropic.com>
…1273) Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>



Summary
Integrates `ai-sdk-provider-codex-cli` (v0.3.0) to enable Task Master usage with OpenAI's Codex CLI via ChatGPT subscription OAuth authentication. This allows users to leverage GPT-5 and GPT-5-Codex models using their ChatGPT subscription without managing API keys.

Key Features

- OAuth authentication via `codex login` (no API key required)
- Optional `OPENAI_API_KEY` support
- Models `gpt-5` and `gpt-5-codex` (272K max input / 128K max output)

Implementation Details
Code Changes
New Files:

- `src/ai-providers/codex-cli.js` - OAuth-first provider implementation extending BaseAIProvider
- `tests/unit/ai-providers/codex-cli.test.js` - Comprehensive unit tests

Provider Integration:
- `CodexCliProvider` class with OAuth-first authentication pattern
- `getClient()`, `validateAuth()`, and `isRequiredApiKey()` methods

Configuration Management:
- Added `codexCli` section to config with Zod validation schema
- Per-command overrides via a `commandSpecific` object
- `getCodexCliSettings()` and `getCodexCliSettingsForCommand()` getters

CLI Integration:
- Added `--codex-cli` flag to `task-master models` commands
- Provider handling in the `setModel()` function
- Updated `hasCodebaseAnalysis` check to include codex-cli
- Added codex-cli to the `providersWithoutApiKeys` array

Model Definitions:
- Added `gpt-5` and `gpt-5-codex` to `supported-models.json`

Documentation
Comprehensive Guides:

- `docs/providers/codex-cli.md` (505 lines) - Complete provider documentation
- `docs/examples/codex-cli-usage.md` (464 lines) - Practical examples

Updated Documentation:

- `docs/configuration.md` - Added 150+ line Codex CLI provider section
- `docs/command-reference.md` - Added CLI flag examples
- `README.md` - Updated providers list

Testing
- Unit tests for `CodexCliProvider` (authentication, client creation, API key injection)

Architecture Pattern
This implementation follows the ClaudeCodeProvider pattern (simple OAuth-first approach) rather than the GeminiCliProvider pattern, prioritizing:
Breaking Changes
None - this is a purely additive feature.
Migration Notes
None required - new provider only.
Related Issues
Closes #[issue-number-if-applicable]
Checklist
Screenshots/Examples
Example configuration:
```json
{
  "models": {
    "main": {
      "provider": "codex-cli",
      "modelId": "gpt-5-codex",
      "maxTokens": 128000,
      "temperature": 0.2
    }
  },
  "codexCli": {
    "allowNpx": true,
    "approvalMode": "on-failure",
    "sandboxMode": "workspace-write"
  }
}
```

Usage:
Additional Context
This provider enables access to GPT-5 and GPT-5-Codex models through ChatGPT subscription, offering a cost-effective alternative to direct API usage for users with Plus, Pro, Business, Edu, or Enterprise plans.
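As additional context, the OAuth-first provider surface listed under Provider Integration might be sketched as below. The stub base class and method bodies are illustrative assumptions so the sketch is self-contained, not the PR's actual implementation:

```javascript
// Stub standing in for the real BaseAIProvider so this sketch runs on its own.
class BaseAIProvider {
	constructor() {
		this.name = 'base';
		this.supportsTemperature = true;
	}
}

class CodexCliProvider extends BaseAIProvider {
	constructor() {
		super();
		this.name = 'codex-cli';
		// CLI providers in this PR are marked as not supporting temperature.
		this.supportsTemperature = false;
	}

	// OAuth-first: authentication comes from `codex login`, so no API key is
	// required (codex-cli is listed in the providersWithoutApiKeys array).
	isRequiredApiKey() {
		return false;
	}

	// With OAuth handled by the Codex CLI itself, validation can be lenient
	// here; the real method may check more than this stub does.
	validateAuth() {
		return true;
	}

	// The real implementation wraps createCodexCli() from
	// ai-sdk-provider-codex-cli; a plain object stands in for the client here.
	getClient({ commandName } = {}) {
		return { provider: this.name, commandName };
	}
}
```

The OAuth-first pattern (following ClaudeCodeProvider, per the Architecture Pattern note above) keeps the provider itself free of key management.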
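Similarly, the command-scoped settings resolution exposed by `getCodexCliSettings()` and `getCodexCliSettingsForCommand()` could behave roughly like this. The config shape mirrors the example configuration above; the per-command override value and the merge precedence (command-specific keys winning over global ones) are assumptions:

```javascript
// Example config mirroring the `codexCli` section shown earlier, plus a
// hypothetical per-command override under `commandSpecific`.
const config = {
	codexCli: {
		allowNpx: true,
		approvalMode: 'on-failure',
		sandboxMode: 'workspace-write',
		commandSpecific: {
			models: { approvalMode: 'never' } // hypothetical override value
		}
	}
};

// Global codexCli settings, with the per-command map stripped out.
function getCodexCliSettings(cfg) {
	const { commandSpecific, ...globalSettings } = cfg.codexCli ?? {};
	return globalSettings;
}

// Merge global settings with any command-specific overrides (assumed to win).
function getCodexCliSettingsForCommand(cfg, commandName) {
	const overrides = cfg.codexCli?.commandSpecific?.[commandName] ?? {};
	return { ...getCodexCliSettings(cfg), ...overrides };
}
```

This matches the flow in the walkthrough, where the provider asks the config manager for merged settings scoped to the invoking command before creating its client.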
Summary by CodeRabbit
New Features
Changes
Documentation
Tests
Chores