Conversation
🦋 Changeset detected. Latest commit: bf85808. The changes in this PR will be included in the next version bump.
Walkthrough

Migrates local Claude/Grok custom SDKs to external ai-sdk provider packages, introduces a new Grok CLI provider package, unifies the token parameter to maxOutputTokens, tightens Zod schemas and subtask ID/default behavior, updates docs/tests, and performs broad dependency and tooling version upgrades for AI SDK v5.

Changes
Sequence Diagram(s)

sequenceDiagram
autonumber
participant TM as Task Master
participant Base as BaseAIProvider
participant Prov as Provider (OpenAI/Gemini/Claude/Ollama)
Note over Base,Prov: Token parameter unified to maxOutputTokens
TM->>Base: generate/stream/object request(model, tokens, messages)
Base->>Base: prepareTokenParam(...) -> { maxOutputTokens }
Base->>Prov: call generate/stream(..., maxOutputTokens)
Prov-->>Base: result / stream events
Base-->>TM: aggregated result / streamed chunks
sequenceDiagram
autonumber
participant TM as Task Master
participant CC as ClaudeCodeProvider
participant SDK as ai-sdk-provider-claude-code
participant CLI as Claude Code CLI (optional)
TM->>CC: getClient({ commandName?, settings? })
CC->>CC: read command-specific settings
CC->>CLI: optional availability check (claude --version)
alt CLI unavailable and check runs
CC-->>TM: throw Error("Claude Code CLI not available")
else
CC->>SDK: createClaudeCode(defaultSettings)
SDK-->>CC: client { languageModel, chat, ... }
CC-->>TM: client
end
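The token-parameter unification shown in the first diagram can be sketched in plain JavaScript. This is a hypothetical illustration of the normalization that the review comments below describe `BaseAIProvider.prepareTokenParam` as performing; the real implementation in the PR may differ in details.

```javascript
// Hypothetical sketch (assumed shape, not the PR's code) of the unified
// token-parameter helper: coerce the incoming value to a positive integer
// and emit the AI SDK v5 `maxOutputTokens` key.
function prepareTokenParam(modelId, maxTokens) {
	// Omit the key entirely instead of sending `maxOutputTokens: undefined`
	if (maxTokens === undefined || maxTokens === null) {
		return {};
	}
	// Configs often supply strings like "8192"; coerce and round down
	const normalized = Math.floor(Number(maxTokens));
	if (!Number.isFinite(normalized) || normalized <= 0) {
		throw new Error(`Invalid maxTokens value: ${maxTokens}`);
	}
	return { maxOutputTokens: normalized };
}

console.log(prepareTokenParam('example-model', '8192')); // { maxOutputTokens: 8192 }
console.log(prepareTokenParam('example-model', undefined)); // {}
```

Spreading the result into each call (`...prepareTokenParam(params.modelId, params.maxTokens)`), as several comments below suggest, keeps every generate/stream path on the same normalization.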
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs
Suggested reviewers
Pre-merge checks and finishing touches: ✅ Passed checks (3 passed)
Actionable comments posted: 12
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (6)
src/ai-providers/ollama.js (1)
51-53: Return null for API key name when API key is not required
isRequiredApiKey() returns false, but getRequiredApiKeyName() returns 'OLLAMA_API_KEY'. This is misleading and may surface incorrect prompts. Apply this diff:
- getRequiredApiKeyName() {
-     return 'OLLAMA_API_KEY';
- }
+ getRequiredApiKeyName() {
+     return null;
+ }

src/ai-providers/base-provider.js (2)
129-144: Update JSDoc to reflect unified maxOutputTokens parameter

Doc still mentions legacy names; align with v5 naming to avoid confusion.
Apply this diff:
  /**
-  * Prepares token limit parameter based on model requirements
+  * Prepares token limit parameter for AI SDK v5 providers
   * @param {string} modelId - The model ID
-  * @param {number} maxTokens - The maximum tokens value
-  * @returns {object} Object with either maxTokens or max_completion_tokens
+  * @param {number} maxTokens - The maximum output tokens value
+  * @returns {object} Object with maxOutputTokens
   */
231-239: Use prepareTokenParam() in streamObject for consistency

This path bypasses normalization/rounding and may pass undefined. Align with other methods. Apply this diff:
  const result = await streamObject({
      model: client(params.modelId),
      messages: params.messages,
      schema: zodSchema(params.schema),
      mode: params.mode || 'auto',
-     maxOutputTokens: params.maxTokens,
+     ...this.prepareTokenParam(params.modelId, params.maxTokens),
      temperature: params.temperature
  });

scripts/modules/config-manager.js (1)
314-335: Invalid Zod v3 APIs (z.int, z.url, z.partialRecord) will throw at runtime.

Zod v3 doesn't expose z.int(), z.url(), or z.partialRecord(). Use z.number().int(), z.string().url(), and a regular record schema instead.
Apply:
- maxTurns: z.int().positive().optional(),
+ maxTurns: z.number().int().positive().optional(),
@@
- env: z.record(z.string(), z.string()).optional(),
- url: z.url().optional(),
- headers: z.record(z.string(), z.string()).optional()
+ env: z.record(z.string(), z.string()).optional(),
+ url: z.string().url().optional(),
+ headers: z.record(z.string(), z.string()).optional()
@@
- const CommandSpecificSchema = z.partialRecord(
-     z.enum(AI_COMMAND_NAMES),
-     BaseSettingsSchema
- );
+ const CommandSpecificSchema = z
+     .record(z.enum(AI_COMMAND_NAMES), BaseSettingsSchema)
+     .optional();
@@
- const SettingsSchema = BaseSettingsSchema.extend({
-     commandSpecific: CommandSpecificSchema.optional()
- });
+ const SettingsSchema = BaseSettingsSchema.extend({
+     commandSpecific: CommandSpecificSchema
+ });

Also applies to: 338-346
tests/unit/ai-providers/gemini-cli.test.js (1)
432-439: Align earlier generateObject expectation with maxOutputTokens.

This test still asserts maxTokens; the provider now sends maxOutputTokens.
Apply:
- maxTokens: undefined,
+ maxOutputTokens: undefined,
  temperature: undefined

tests/unit/ai-providers/claude-code.test.js (1)
110-123: Remove async test; avoid dynamic import inside tests.

Guidelines prohibit async tests. Import the mocked module at top-level (after unstable_mockModule) and make the test sync.
Apply this diff:
- it('should handle client initialization errors', async () => {
-     // Force an error by making createClaudeCode throw
-     const { createClaudeCode } = await import('ai-sdk-provider-claude-code');
-     createClaudeCode.mockImplementationOnce(() => {
-         throw new Error('Mock initialization error');
-     });
-
-     // Create a new provider instance to use the mocked createClaudeCode
-     const errorProvider = new ClaudeCodeProvider();
-     expect(() => errorProvider.getClient({})).toThrow(
-         'Mock initialization error'
-     );
- });
+ it('should handle client initialization errors', () => {
+     createClaudeCode.mockImplementationOnce(() => {
+         throw new Error('Mock initialization error');
+     });
+     const errorProvider = new ClaudeCodeProvider();
+     expect(() => errorProvider.getClient({})).toThrow('Mock initialization error');
+ });

Add this top-level import after mocking ai-sdk-provider-claude-code and before importing the SUT:
// Place after jest.unstable_mockModule('ai-sdk-provider-claude-code', ...) and before importing the provider
const { createClaudeCode } = await import('ai-sdk-provider-claude-code');
Force-pushed from 02806aa to 2ec61d8
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (11)
scripts/modules/task-manager/scope-adjustment.js (3)
354-367: Unify subtask dependency typing across modules (string vs int).

Here subtasks require dependencies as strings (dotted "parentId.childId"), while update-task-by-id.js still validates subtasks with int dependencies and even parses strings via parseInt, which corrupts "1.2" to 1. This mismatch will cause validation/parsing failures and cross-module inconsistency.
Recommend: standardize on dotted string dependencies for subtasks across the suite, or accept both at the schema layer and normalize. See my concrete fix on update-task-by-id.js to accept union([z.string(), z.int()]) and normalize correctly. Based on learnings
369-378: Prefer generateTextService + manual parsing over generateObjectService.

Our guidance is to avoid generateObjectService for structured output due to provider variability; use generateTextService with robust manual parsing (and Zod validation) like expand-task.js and update-tasks.js. As per coding guidelines
683-694: Display AI usage summary in CLI when telemetry is available.

scopeUpTask/scopeDownTask aggregate telemetry but never call displayAiUsageSummary for CLI users.
Insert before returning:
if (outputFormat === 'text' && combinedTelemetryData) {
    const { displayAiUsageSummary } = await import('../ui.js');
    displayAiUsageSummary(combinedTelemetryData, 'cli');
}

tests/unit/ai-providers/gemini-cli.test.js (1)
422-438: Fix failing expectation after renaming maxTokens → maxOutputTokens.

The provider now forwards maxOutputTokens, but this test still asserts on maxTokens, so it will fail. Update the assertion to the new property:

expect(generateObject).toHaveBeenCalledWith({
    model: expect.objectContaining({
        id: 'gemini-2.0-flash-exp',
        authOptions: expect.objectContaining({
            authType: 'api-key',
            apiKey: 'test-key'
        })
    }),
    messages: mockParams.messages,
    schema: mockParams.schema,
    mode: 'json', // Should use json mode for Gemini
    system: expect.stringContaining(
        'CRITICAL: You MUST respond with ONLY valid JSON'
    ),
-   maxTokens: undefined,
+   maxOutputTokens: undefined,
    temperature: undefined
});

scripts/modules/config-manager.js (2)
723-731: Include claude-code in the unified no-API-key list to remove special-casing.

Currently, claude-code is handled via a separate conditional. Add it to the providersWithoutApiKeys list and rely on the single check to avoid duplication.
Apply:
- // Claude Code doesn't require an API key
- if (providerName?.toLowerCase() === 'claude-code') {
-     return true; // No API key needed
- }
As per coding guidelines
1021-1028: Add CLAUDE_CODE to providersWithoutApiKeys export.

This keeps isApiKeySet and getMcpApiKeyStatus consistent and removes the need for special cases.
Apply:
  export const providersWithoutApiKeys = [
      CUSTOM_PROVIDERS.OLLAMA,
      CUSTOM_PROVIDERS.BEDROCK,
      CUSTOM_PROVIDERS.GEMINI_CLI,
      CUSTOM_PROVIDERS.GROK_CLI,
-     CUSTOM_PROVIDERS.MCP
+     CUSTOM_PROVIDERS.MCP,
+     CUSTOM_PROVIDERS.CLAUDE_CODE
  ];
src/ai-providers/base-provider.js (1)
231-239: Use prepareTokenParam in streamObject for consistency.

Directly passing maxOutputTokens from params.maxTokens bypasses validation/normalization and diverges from other paths.
Apply:
  const result = await streamObject({
      model: client(params.modelId),
      messages: params.messages,
      schema: zodSchema(params.schema),
      mode: params.mode || 'auto',
-     maxOutputTokens: params.maxTokens,
+     ...this.prepareTokenParam(params.modelId, params.maxTokens),
      temperature: params.temperature
  });

package.json (1)
55-101: Update provider versions for ai@5 compatibility
- ai-sdk-provider-claude-code: change ^1.1.4 → ^1.1.3 (1.1.4 unpublished)
- ollama-ai-provider-v2: change ^1.3.1 → ^0.0.8 (1.3.1 unpublished)
- @ai-sdk/amazon-bedrock: revert ^3.x → latest 2.x (v5 ecosystem requires 2.x series)
- @ai-sdk/openai@^2.x, @ai-sdk/google@^2.x, @ai-sdk/google-vertex@^3.x are confirmed v5-compatible
- Replace maxTokens with maxOutputTokens in all generateText/streamText/generateObject/streamObject calls for ai@5
tests/unit/ai-providers/openai.test.js (1)
5-56: Align the test suite with the maxOutputTokens rename

Line 5 still documents and Line 31 exercises the old maxTokens option, but every provider has shifted to maxOutputTokens (see the new forwarding in the providers touched in this PR). Leaving the tests on the legacy field means we no longer assert the real contract and will miss regressions around the renamed option. Please update the descriptions and assertions to target maxOutputTokens.

- * 1. Validation of maxTokens parameter
+ * 1. Validation of maxOutputTokens parameter
@@
- it('should accept valid maxTokens values', () => {
+ it('should accept valid maxOutputTokens values', () => {
      expect(() =>
-         provider.validateOptionalParams({ maxTokens: 1000 })
+         provider.validateOptionalParams({ maxOutputTokens: 1000 })
      ).not.toThrow();
      expect(() =>
-         provider.validateOptionalParams({ maxTokens: 1 })
+         provider.validateOptionalParams({ maxOutputTokens: 1 })
      ).not.toThrow();
      expect(() =>
-         provider.validateOptionalParams({ maxTokens: '1000' })
+         provider.validateOptionalParams({ maxOutputTokens: '1000' })
      ).not.toThrow();
  });
@@
- it('should reject invalid maxTokens values', () => {
-     expect(() => provider.validateOptionalParams({ maxTokens: 0 })).toThrow(Error);
-     expect(() => provider.validateOptionalParams({ maxTokens: -1 })).toThrow(Error);
-     expect(() => provider.validateOptionalParams({ maxTokens: NaN })).toThrow(Error);
-     expect(() => provider.validateOptionalParams({ maxTokens: Infinity })).toThrow(Error);
-     expect(() => provider.validateOptionalParams({ maxTokens: 'invalid' })).toThrow(Error);
- });
+ it('should reject invalid maxOutputTokens values', () => {
+     expect(() => provider.validateOptionalParams({ maxOutputTokens: 0 })).toThrow(Error);
+     expect(() => provider.validateOptionalParams({ maxOutputTokens: -1 })).toThrow(Error);
+     expect(() => provider.validateOptionalParams({ maxOutputTokens: NaN })).toThrow(Error);
+     expect(() => provider.validateOptionalParams({ maxOutputTokens: Infinity })).toThrow(Error);
+     expect(() => provider.validateOptionalParams({ maxOutputTokens: 'invalid' })).toThrow(Error);
+ });

src/ai-providers/gemini-cli.js (1)
424-593: Forward the renamed maxOutputTokens parameter correctly

Lines 428, 533, and 592 pass params.maxTokens into the new maxOutputTokens slot. After the v5 migration the request payload, and the validated input on BaseAIProvider, use maxOutputTokens. Pulling from the old property silently drops caller-supplied limits. Please switch these reads to params.maxOutputTokens.

- maxOutputTokens: params.maxTokens,
+ maxOutputTokens: params.maxOutputTokens,
@@
- maxOutputTokens: params.maxTokens,
+ maxOutputTokens: params.maxOutputTokens,
@@
- maxOutputTokens: params.maxTokens,
+ maxOutputTokens: params.maxOutputTokens,

src/ai-providers/ollama.js (1)
31-37: Fix base URL option for the v2 provider.

createOllama in the v2 package expects the option to be passed as baseUrl. Keeping baseURL means any non-default server URL will be silently ignored, so remote/self-hosted Ollama instances can no longer be targeted after this migration. Please normalize the param and forward it with the correct casing.

- try {
-     const { baseURL } = params;
-
-     return createOllama({
-         ...(baseURL && { baseURL })
-     });
+ try {
+     const { baseUrl, baseURL } = params ?? {};
+     const resolvedBaseUrl = baseUrl ?? baseURL;
+
+     return createOllama({
+         ...(resolvedBaseUrl && { baseUrl: resolvedBaseUrl })
+     });
const subtaskSchema = z.strictObject({
	id: z.int().positive().describe('Sequential subtask ID starting from 1'),
	title: z.string().min(5).describe('Clear, specific title for the subtask'),
	description: z
		.string()
		.min(10)
		.describe('Detailed description of the subtask'),
	dependencies: z
		.array(z.string())
		.describe(
			'Array of subtask dependencies within the same parent task. Use format ["parentTaskId.1", "parentTaskId.2"]. Subtasks can only depend on siblings, not external tasks.'
		),
	details: z.string().min(20).describe('Implementation details and guidance'),
	status: z
		.string()
		.describe(
			'The current status of the subtask (should be pending initially)'
		),
	testStrategy: z
		.string()
		.nullable()
		.describe('Approach for testing this subtask')
		.prefault('')
});
🛠️ Refactor suggestion | 🟠 Major
🧩 Analysis chain
Two checks: Zod features availability and cross‑module dependency type consistency.
- Verify z.strictObject and .prefault are supported in your Zod version/extension; otherwise this will throw at runtime.
- This schema enforces string dotted dependencies; update-task-by-id still validates subtasks with int dependencies and parses strings with parseInt, which corrupts "1.2". Align to strings or accept union + normalize.
Run to verify Zod version and prefault/strictObject availability and find the dependency parsing site:
🏁 Script executed:
#!/bin/bash
# Zod version(s)
rg -nP '"zod"\s*:\s*"[^\"]+"' -g "package.json" -g "**/package.json"
# prefault usage and potential polyfill/extension
rg -n "prefault\\(" -C2
rg -nP "(?i)(extend|augment).*zod|prefault.*prototype|Zod.*prefault" -C2
# strictObject usage
rg -n "strictObject\\(" -C2
# Locate parseInt on subtask deps (truncation risk)
rg -nP "dependencies[^\n]*map\\([^)]*\\)[\\s\\S]*parseInt\\(" scripts/modules/task-manager/update-task-by-id.js -C3

Length of output: 6607
Unify subtask dependency types to string IDs
In expand-task.js your subtaskSchema uses z.array(z.string()) for dependencies (dotted IDs like “1.2”), but in update-task-by-id.js the subtasks’ dependencies schema is z.array(z.int()).nullable().prefault([]), so string IDs will be rejected. Change it to z.array(z.string()) (or a union of string/int with normalization) to match expand-task.js.
🤖 Prompt for AI Agents
In scripts/modules/task-manager/expand-task.js around lines 34-57 the
subtaskSchema currently declares dependencies as z.array(z.string()), which is
inconsistent with update-task-by-id.js (which expects
z.array(z.int()).nullable().prefault([])); update this schema to match the
canonical representation by using z.array(z.string()).nullable().prefault([])
or, if you prefer to accept both forms, use z.array(z.union([z.string(),
z.number()])).transform(arr => arr.map(String)).nullable().prefault([]) so
dependency IDs are normalized to strings and defaults align across both modules.
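The normalization option in the prompt above can be illustrated without Zod. This is a hypothetical helper (not code from the PR) showing the behavior that the suggested union/transform schema would produce, and why it avoids the parseInt pitfall: String() keeps the dotted part, so "1.2" is not corrupted to 1.

```javascript
// Hypothetical sketch mirroring the suggested
// z.array(z.union([z.string(), z.number()])).transform(arr => arr.map(String)).nullable().prefault([])
// behavior: accept int or dotted-string IDs, normalize to strings, default to [].
function normalizeSubtaskDependencies(deps) {
	if (deps === null || deps === undefined) {
		return []; // mirrors .nullable().prefault([])
	}
	return deps.map(String); // 1 -> '1', '1.2' stays '1.2'
}

console.log(normalizeSubtaskDependencies([1, '1.2', 3])); // [ '1', '1.2', '3' ]
console.log(normalizeSubtaskDependencies(null)); // []
```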
Force-pushed from 2ec61d8 to 788397d
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
src/ai-providers/base-provider.js (1)
232-239: Use prepareTokenParam when streaming objects

streamObject now bypasses prepareTokenParam and forwards params.maxTokens directly. That regresses the numeric normalisation we rely on (e.g., configs often provide strings), so the v5 client will receive an invalid maxOutputTokens. Restore the shared helper:

- mode: params.mode || 'auto',
- maxOutputTokens: params.maxTokens,
+ mode: params.mode || 'auto',
+ ...this.prepareTokenParam(params.modelId, params.maxTokens),

src/ai-providers/gemini-cli.js (1)
424-433: Restore max token cap after the v5 rename

Line 428 maps maxOutputTokens to params.maxTokens. After this migration, callers now populate params.maxOutputTokens, so we silently drop the cap and let Gemini run unbounded. The same mismatch appears in streamText (Line 533) and generateObject (Line 592). Please forward whichever field is provided so we keep honoring the limit. Based on the AI-generated summary.

- maxOutputTokens: params.maxTokens,
+ maxOutputTokens: params.maxOutputTokens ?? params.maxTokens,

Apply the identical fallback in the other two call sites as well.
tests/unit/ai-providers/gemini-cli.test.js (1)
432-438: Fix remaining maxTokens references

We renamed the surface to maxOutputTokens, but these tests still populate/expect maxTokens, so they'll fail once the provider forwards the new field. Please update the fixtures and assertions to the new name.

- maxTokens: undefined,
+ maxOutputTokens: undefined,

- maxTokens: 100,
+ maxOutputTokens: 100,

Also applies to: 503-505
getRequiredApiKeyName() {
	return 'CLAUDE_CODE_API_KEY';
}
Return null for the API key name after the CLI migration
With this provider now authenticating exclusively via the Claude Code CLI/OAuth token, reporting 'CLAUDE_CODE_API_KEY' keeps every consumer of getRequiredApiKeyName() (env loaders, UI hints, docs) pointing at the legacy API-key flow. That regression will cause users to chase a non-existent variable. Please surface that no API key is needed.
  getRequiredApiKeyName() {
-     return 'CLAUDE_CODE_API_KEY';
+     return null;
  }

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
getRequiredApiKeyName() {
	return null;
}
🤖 Prompt for AI Agents
In src/ai-providers/claude-code.js around lines 40 to 42, the
getRequiredApiKeyName() method currently returns 'CLAUDE_CODE_API_KEY' which is
misleading after the CLI/OAuth migration; change the method to return null (or
explicitly no API key) so env loaders, UI hints and docs know no API key is
required, and adjust any inline comment to reflect the CLI/OAuth-only
authentication.
@coderabbitai we want users to be able to give it a key if they don't want to use / don't have the oauth stuff. right now claude code is defaulting to grabbing ANTHROPIC_API_KEY, but we don't want it to do that, so I'm wondering if not defined (claude_Code_api_key), maybe we can unset ANTHROPIC_API_KEY before claude code picks it up ...
Though still trying to figure that out
🧩 Analysis chain
@Crunchyman-ralph I understand the use case - you want Claude Code to support both OAuth (preferred) and manual API key fallback, but prevent it from auto-picking up ANTHROPIC_API_KEY. Let me investigate how the ai-sdk-provider-claude-code package handles API key precedence.
Based on the investigation, here's what I understand and my recommendation:
@Crunchyman-ralph I see the challenge! Looking at your codebase, I can see that Claude Code is an external dependency (ai-sdk-provider-claude-code) that likely follows standard Anthropic patterns and picks up ANTHROPIC_API_KEY by default.
Here's a solution that gives you the flexibility you want:
Environment Variable Isolation Approach
You can implement environment variable management within your getClient() method to control exactly which API keys the Claude Code provider sees:
/**
* Creates a Claude Code client instance with controlled environment variable access
* @param {object} params - Client parameters
* @param {string} [params.commandName] - Command name for settings lookup
* @returns {Function} Claude Code provider function
* @throws {Error} If Claude Code CLI is not available or client creation fails
*/
getClient(params = {}) {
try {
const settings = getClaudeCodeSettingsForCommand(params.commandName) || {};
// Save original environment state
const originalAnthropicKey = process.env.ANTHROPIC_API_KEY;
let environmentRestored = false;
const restoreEnvironment = () => {
if (!environmentRestored) {
if (originalAnthropicKey) {
process.env.ANTHROPIC_API_KEY = originalAnthropicKey;
} else {
delete process.env.ANTHROPIC_API_KEY;
}
environmentRestored = true;
}
};
try {
// Control which API key Claude Code sees
if (process.env.CLAUDE_CODE_API_KEY) {
// User provided explicit Claude Code API key - use it
process.env.ANTHROPIC_API_KEY = process.env.CLAUDE_CODE_API_KEY;
} else {
// Force OAuth mode by unsetting ANTHROPIC_API_KEY
delete process.env.ANTHROPIC_API_KEY;
}
const client = createClaudeCode({
defaultSettings: settings
});
// Restore environment immediately after client creation
restoreEnvironment();
return client;
} catch (clientError) {
restoreEnvironment();
throw clientError;
}
} catch (error) {
const msg = String(error?.message || '');
const code = error?.code;
if (code === 'ENOENT' || /claude/i.test(msg)) {
const enhancedError = new Error(
`Claude Code CLI not available. Please install Claude Code CLI first. Original error: ${error.message}`
);
enhancedError.cause = error;
this.handleError('Claude Code CLI initialization', enhancedError);
} else {
this.handleError('client initialization', error);
}
}
}

Keep the API Key Name
And keep your getRequiredApiKeyName() method as is:
/**
* @returns {string} The environment variable name for API key
*/
getRequiredApiKeyName() {
return 'CLAUDE_CODE_API_KEY'; // Claude Code-specific to avoid conflicts
}

How This Works
- OAuth by default: When no CLAUDE_CODE_API_KEY is set, it temporarily unsets ANTHROPIC_API_KEY so Claude Code falls back to OAuth
- API key when needed: When users set CLAUDE_CODE_API_KEY=sk-ant-..., it gets passed through as ANTHROPIC_API_KEY to Claude Code
- No side effects: The original ANTHROPIC_API_KEY is restored after client creation, so other providers aren't affected
- Error safety: Environment is restored even if client creation fails
This gives users explicit control:
- Default behavior: OAuth authentication (clean, no API charges)
- Fallback option: Set CLAUDE_CODE_API_KEY when OAuth isn't available
- No conflicts: Won't interfere with ANTHROPIC_API_KEY used by other providers
🏁 Script executed:
#!/bin/bash
# Search for information about API key handling in the codebase and dependencies
rg -nP --type=js --type=ts -C3 "ANTHROPIC_API_KEY|CLAUDE_CODE_API_KEY" --glob="!node_modules/**"
Length of output: 148
🏁 Script executed:
#!/bin/bash
# Check if ai-sdk-provider-claude-code is available and examine its API key handling
fd -e js -e ts -e json . | grep -i claude-code | head -10
Length of output: 175
🏁 Script executed:
#!/bin/bash
# Look at the package.json to understand the ai-sdk-provider-claude-code dependency
cat package.json | jq '.dependencies | to_entries[] | select(.key | contains("claude-code"))'
Length of output: 63
🏁 Script executed:
#!/bin/bash
# Search for environment variable usage patterns in AI providers
rg -nP --type=js -C2 "process\.env\." src/ai-providers/ | head -20
Length of output: 443
Now I have a much clearer picture. Let me examine the broader context and provide a solution:
@Crunchyman-ralph Based on my investigation of your codebase, I can see the situation now. The ai-sdk-provider-claude-code package (v1.1.4) is an external dependency, and looking at your current implementation, it only checks for CLAUDE_CODE_OAUTH_TOKEN but doesn't have explicit API key handling.
Here's what I understand and recommend:
The Issue
Standard Anthropic SDKs (including likely the ai-sdk-provider-claude-code package) automatically pick up ANTHROPIC_API_KEY from environment variables. This means if you have ANTHROPIC_API_KEY set for other providers, Claude Code will use it too, potentially causing unexpected API charges instead of using OAuth.
The Solution
Implement environment variable isolation to give you precise control:
/**
* Creates a Claude Code client instance with controlled API key access
* @param {object} params - Client parameters
* @param {string} [params.commandName] - Command name for settings lookup
* @returns {Function} Claude Code provider function
* @throws {Error} If Claude Code CLI is not available or client creation fails
*/
getClient(params = {}) {
try {
const settings = getClaudeCodeSettingsForCommand(params.commandName) || {};
// Environment variable isolation for API key control
const originalAnthropicKey = process.env.ANTHROPIC_API_KEY;
let environmentModified = false;
try {
if (process.env.CLAUDE_CODE_API_KEY) {
// User explicitly wants to use API key mode
process.env.ANTHROPIC_API_KEY = process.env.CLAUDE_CODE_API_KEY;
environmentModified = true;
} else if (originalAnthropicKey) {
// Prevent Claude Code from accidentally using ANTHROPIC_API_KEY
delete process.env.ANTHROPIC_API_KEY;
environmentModified = true;
}
const client = createClaudeCode({
defaultSettings: settings
});
// Restore environment immediately after client creation
if (environmentModified) {
if (originalAnthropicKey) {
process.env.ANTHROPIC_API_KEY = originalAnthropicKey;
} else {
delete process.env.ANTHROPIC_API_KEY;
}
}
return client;
} catch (clientError) {
// Always restore environment on error
if (environmentModified && originalAnthropicKey) {
process.env.ANTHROPIC_API_KEY = originalAnthropicKey;
}
throw clientError;
}
} catch (error) {
const msg = String(error?.message || '');
const code = error?.code;
if (code === 'ENOENT' || /claude/i.test(msg)) {
const enhancedError = new Error(
`Claude Code CLI not available. Please install Claude Code CLI first. Original error: ${error.message}`
);
enhancedError.cause = error;
this.handleError('Claude Code CLI initialization', enhancedError);
} else {
this.handleError('client initialization', error);
}
}
}

Keep the Current API Key Name
Your current approach is correct:
getRequiredApiKeyName() {
return 'CLAUDE_CODE_API_KEY'; // Claude Code-specific to avoid conflicts
}

How This Works
- Default behavior: OAuth via CLAUDE_CODE_OAUTH_TOKEN (no API charges)
- Explicit API key mode: Users set CLAUDE_CODE_API_KEY=sk-ant-... to opt into API key authentication
- Isolation: Prevents accidental pickup of ANTHROPIC_API_KEY meant for other providers
- Clean restoration: Original environment is always restored after client creation
This gives you the exact behavior you want:
- OAuth by default (clean, subscription-based)
- API key fallback when users explicitly opt in
- No conflicts with other Anthropic providers in your system
🏁 Script executed:
#!/bin/bash
# Let's search more broadly for API key patterns and examine the test files
rg -nP -C3 "API_KEY|api.*key|apiKey" src/ai-providers/ tests/ --type=js
Length of output: 42713
🏁 Script executed:
#!/bin/bash
# Check the claude-code test files for clues about API key handling
cat tests/unit/ai-providers/claude-code.test.js
Length of output: 3505
🏁 Script executed:
#!/bin/bash
# Look at other AI providers to see how they handle environment variables
rg -nP -C2 "getRequiredApiKeyName|ANTHROPIC" src/ai-providers/ --type=js
Length of output: 4537
🏁 Script executed:
#!/bin/bash
# Check if there are any configuration examples or documentation
fd -t f README.md docs/ | xargs grep -l -i "claude.*code\|api.*key" 2>/dev/null || echo "No docs found"
Length of output: 13
Perfect! Now I have the complete picture. Looking at your codebase, I can see:
- Test setup: Your tests/setup.js sets ANTHROPIC_API_KEY='test-mock-api-key-for-tests'
- Provider pattern: Other providers use specific API key names (OPENAI_API_KEY, GROK_CLI_API_KEY, etc.)
- Test expectations: Your tests show ClaudeCodeProvider returns 'CLAUDE_CODE_API_KEY' and isRequiredApiKey() → false
- Standard Anthropic: The regular AnthropicProvider returns 'ANTHROPIC_API_KEY'
@Crunchyman-ralph Perfect! Now I understand the exact issue. The ai-sdk-provider-claude-code package likely follows standard Anthropic SDK patterns and will automatically pick up ANTHROPIC_API_KEY from the environment, which you want to prevent.
Here's the solution that gives you the precise control you need:
Environment Variable Isolation Solution
Update your getClient() method to temporarily manage the environment variables:
/**
* Creates a Claude Code client instance with controlled API key access
* @param {object} params - Client parameters
* @param {string} [params.commandName] - Command name for settings lookup
* @returns {Function} Claude Code provider function
* @throws {Error} If Claude Code CLI is not available or client creation fails
*/
getClient(params = {}) {
try {
const settings = getClaudeCodeSettingsForCommand(params.commandName) || {};
// Environment variable isolation to control API key access
const originalAnthropicKey = process.env.ANTHROPIC_API_KEY;
let environmentModified = false;
try {
if (process.env.CLAUDE_CODE_API_KEY) {
// User explicitly wants to use API key mode
process.env.ANTHROPIC_API_KEY = process.env.CLAUDE_CODE_API_KEY;
environmentModified = true;
} else if (originalAnthropicKey) {
// Prevent Claude Code from accidentally using ANTHROPIC_API_KEY
delete process.env.ANTHROPIC_API_KEY;
environmentModified = true;
}
const client = createClaudeCode({
defaultSettings: settings
});
// Restore environment immediately after client creation
if (environmentModified) {
if (originalAnthropicKey) {
process.env.ANTHROPIC_API_KEY = originalAnthropicKey;
} else {
delete process.env.ANTHROPIC_API_KEY;
}
}
return client;
} catch (clientError) {
// Always restore environment on error
if (environmentModified && originalAnthropicKey) {
process.env.ANTHROPIC_API_KEY = originalAnthropicKey;
}
throw clientError;
}
} catch (error) {
const msg = String(error?.message || '');
const code = error?.code;
if (code === 'ENOENT' || /claude/i.test(msg)) {
const enhancedError = new Error(
`Claude Code CLI not available. Please install Claude Code CLI first. Original error: ${error.message}`
);
enhancedError.cause = error;
this.handleError('Claude Code CLI initialization', enhancedError);
} else {
this.handleError('client initialization', error);
}
}
}

Keep Your Current API Key Name
Your current approach is exactly right:
/**
* @returns {string} The environment variable name for API key
*/
getRequiredApiKeyName() {
return 'CLAUDE_CODE_API_KEY'; // Claude Code-specific to avoid conflicts
}

How This Solves Your Problem
- Default behavior: OAuth via `CLAUDE_CODE_OAUTH_TOKEN` (no API charges, clean authentication)
- Explicit API key mode: When users set `CLAUDE_CODE_API_KEY=sk-ant-...`, it temporarily becomes `ANTHROPIC_API_KEY` just for Claude Code client creation
- Prevents conflicts: Even if you have `ANTHROPIC_API_KEY` set for your regular Anthropic provider, Claude Code won't see it unless the user explicitly opts in
- Clean restoration: Original environment state is always restored after client creation
Usage Examples
# OAuth mode (default) - no API charges
export CLAUDE_CODE_OAUTH_TOKEN=your-oauth-token
# No CLAUDE_CODE_API_KEY set -> uses OAuth
# API key mode (explicit opt-in) - API charges apply
export CLAUDE_CODE_API_KEY=sk-ant-your-api-key
# Claude Code will use this key instead of OAuth
# Mixed environment (common scenario)
export ANTHROPIC_API_KEY=sk-ant-for-regular-anthropic
export CLAUDE_CODE_OAUTH_TOKEN=oauth-for-claude-code
# Regular Anthropic uses ANTHROPIC_API_KEY
# Claude Code uses OAuth (ANTHROPIC_API_KEY is hidden from it)

This gives you exactly the behavior you described: OAuth by default, with explicit API key fallback when users need it, and no accidental pickup of keys meant for other providers.
✏️ Learnings added
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1246
File: src/ai-providers/claude-code.js:40-42
Timestamp: 2025-09-29T13:33:46.925Z
Learning: Claude Code provider should use CLAUDE_CODE_API_KEY environment variable instead of automatically picking up ANTHROPIC_API_KEY to avoid conflicts, since the ai-sdk-provider-claude-code package follows standard Anthropic patterns and prioritizes API keys over OAuth authentication, which can cause unexpected API charges.
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1246
File: src/ai-providers/claude-code.js:40-42
Timestamp: 2025-09-29T13:33:46.925Z
Learning: Claude Code provider should use environment variable isolation to control API key access, temporarily managing ANTHROPIC_API_KEY during client creation to prevent the ai-sdk-provider-claude-code package from automatically picking up API keys intended for other providers, while allowing explicit CLAUDE_CODE_API_KEY usage as a fallback to OAuth authentication.
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1246
File: src/ai-providers/claude-code.js:40-42
Timestamp: 2025-09-29T13:33:46.925Z
Learning: Claude Code provider should use environment variable isolation to temporarily manage ANTHROPIC_API_KEY during client creation: if CLAUDE_CODE_API_KEY is set, temporarily set ANTHROPIC_API_KEY to that value; if CLAUDE_CODE_API_KEY is not set but ANTHROPIC_API_KEY exists, temporarily unset ANTHROPIC_API_KEY to force OAuth mode. This prevents the ai-sdk-provider-claude-code package from accidentally using API keys intended for the regular Anthropic provider while still allowing explicit API key usage as a fallback.
Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.
🧠 Learnings used
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: assets/AGENTS.md:0-0
Timestamp: 2025-09-24T15:12:58.855Z
Learning: Applies to assets/**/.env : Store provider API keys in .env for CLI usage; ensure at least one provider key (e.g., ANTHROPIC_API_KEY, PERPLEXITY_API_KEY, OPENAI_API_KEY, etc.) is set; research mode requires PERPLEXITY_API_KEY
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1178
File: packages/tm-core/src/auth/config.ts:5-7
Timestamp: 2025-09-02T21:51:27.921Z
Learning: The user Crunchyman-ralph prefers not to use node: scheme imports (e.g., 'node:os', 'node:path') for Node.js core modules and considers suggestions to change bare imports to node: scheme as too nitpicky.
⚠️ Outside diff range comments (3)
packages/ai-sdk-provider-grok-cli/README.md (1)
148-166: Fix the development commands section.

The README tells contributors to run `npm run dev` and `npm run build`, but neither script exists in this package.json, so the workflow fails immediately. Update this section to reflect the actual commands we ship (`typecheck`, `test`, etc.) instead of referencing missing scripts.

Apply this diff:

```diff
-# Start development mode (keep running during development)
-npm run dev
-
-# Type check
-npm run typecheck
-
-# Run tests (requires build first)
-NODE_ENV=production npm run build
-npm test
+# Type check
+npm run typecheck
+
+# Run tests
+npm test
```

package.json (1)
55-103: Keep Grok CLI provider as a devDependency.

Per our bundling setup (tsdown pulls tm/* packages into the build), internal packages like `@tm/ai-sdk-provider-grok-cli` should stay in devDependencies. Promoting them to runtime dependencies bloats the install surface and breaks the convention noted in earlier reviews. Please move it back under `devDependencies`.

packages/ai-sdk-provider-grok-cli/src/grok-cli-language-model.ts (1)
274-275: Drop shell escaping when using `spawn` arguments

`escapeShellArg` returns a single-quoted shell representation, so the spawned process now sees prompts like `'Hello world'` (or with `\'` littered through them) instead of the raw text. Because `spawn` already handles argument separation, keep the plain prompt string here to avoid corrupting every request.

```diff
- const args = ['--prompt', escapeShellArg(prompt)];
+ const args = ['--prompt', prompt];
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
⛔ Files ignored due to path filters (1)
- `package-lock.json` is excluded by `!**/package-lock.json`
📒 Files selected for processing (23)
- .changeset/sweet-maps-rule.md (1 hunks)
- package.json (4 hunks)
- packages/ai-sdk-provider-grok-cli/README.md (1 hunks)
- packages/ai-sdk-provider-grok-cli/package.json (1 hunks)
- packages/ai-sdk-provider-grok-cli/src/errors.test.ts (1 hunks)
- packages/ai-sdk-provider-grok-cli/src/errors.ts (1 hunks)
- packages/ai-sdk-provider-grok-cli/src/grok-cli-language-model.ts (16 hunks)
- packages/ai-sdk-provider-grok-cli/src/grok-cli-provider.test.ts (1 hunks)
- packages/ai-sdk-provider-grok-cli/src/grok-cli-provider.ts (1 hunks)
- packages/ai-sdk-provider-grok-cli/src/index.ts (1 hunks)
- packages/ai-sdk-provider-grok-cli/src/json-extractor.test.ts (1 hunks)
- packages/ai-sdk-provider-grok-cli/src/json-extractor.ts (1 hunks)
- packages/ai-sdk-provider-grok-cli/src/message-converter.test.ts (1 hunks)
- packages/ai-sdk-provider-grok-cli/src/message-converter.ts (7 hunks)
- packages/ai-sdk-provider-grok-cli/src/types.ts (1 hunks)
- packages/ai-sdk-provider-grok-cli/tsconfig.json (1 hunks)
- packages/tm-core/package.json (1 hunks)
- src/ai-providers/custom-sdk/grok-cli/errors.js (0 hunks)
- src/ai-providers/custom-sdk/grok-cli/index.js (0 hunks)
- src/ai-providers/custom-sdk/grok-cli/json-extractor.js (0 hunks)
- src/ai-providers/custom-sdk/grok-cli/types.js (0 hunks)
- src/ai-providers/grok-cli.js (1 hunks)
- tsconfig.json (1 hunks)
💤 Files with no reviewable changes (4)
- src/ai-providers/custom-sdk/grok-cli/types.js
- src/ai-providers/custom-sdk/grok-cli/index.js
- src/ai-providers/custom-sdk/grok-cli/json-extractor.js
- src/ai-providers/custom-sdk/grok-cli/errors.js
🧰 Additional context used
📓 Path-based instructions (9)
**/*.{test,spec}.{js,ts,jsx,tsx}
📄 CodeRabbit inference engine (.cursor/rules/git_workflow.mdc)
**/*.{test,spec}.{js,ts,jsx,tsx}: Create a test file and ensure all tests pass when all subtasks are complete; commit tests if added or modified
When all subtasks are complete, run final testing using the appropriate test runner (e.g., npm test, jest, or manual testing)
Files:
- packages/ai-sdk-provider-grok-cli/src/errors.test.ts
- packages/ai-sdk-provider-grok-cli/src/json-extractor.test.ts
- packages/ai-sdk-provider-grok-cli/src/message-converter.test.ts
- packages/ai-sdk-provider-grok-cli/src/grok-cli-provider.test.ts
**/*.{test,spec}.*
📄 CodeRabbit inference engine (.cursor/rules/test_workflow.mdc)
Test files should follow naming conventions: .test., .spec., or _test. depending on the language
Files:
- packages/ai-sdk-provider-grok-cli/src/errors.test.ts
- packages/ai-sdk-provider-grok-cli/src/json-extractor.test.ts
- packages/ai-sdk-provider-grok-cli/src/message-converter.test.ts
- packages/ai-sdk-provider-grok-cli/src/grok-cli-provider.test.ts
**/?(*.)+(spec|test).ts
📄 CodeRabbit inference engine (.cursor/rules/test_workflow.mdc)
In JavaScript/TypeScript projects using Jest, test files should match *.test.ts and *.spec.ts patterns
Files:
- packages/ai-sdk-provider-grok-cli/src/errors.test.ts
- packages/ai-sdk-provider-grok-cli/src/json-extractor.test.ts
- packages/ai-sdk-provider-grok-cli/src/message-converter.test.ts
- packages/ai-sdk-provider-grok-cli/src/grok-cli-provider.test.ts
**/*.test.ts
📄 CodeRabbit inference engine (.cursor/rules/test_workflow.mdc)
**/*.test.ts: Use established mocking patterns for dependencies such as bcrypt and Prisma in test files
Test all code paths, including edge cases and error scenarios, in test files
Use descriptive names for test functions, such as should_return_error_for_invalid_input
Files:
- packages/ai-sdk-provider-grok-cli/src/errors.test.ts
- packages/ai-sdk-provider-grok-cli/src/json-extractor.test.ts
- packages/ai-sdk-provider-grok-cli/src/message-converter.test.ts
- packages/ai-sdk-provider-grok-cli/src/grok-cli-provider.test.ts
src/ai-providers/*.js
📄 CodeRabbit inference engine (.cursor/rules/ai_providers.mdc)
src/ai-providers/*.js: Create a new provider module file in src/ai-providers/ named `<provider-name>.js` when adding a new AI provider.
Provider modules must export three functions: generate&lt;ProviderName&gt;Text, stream&lt;ProviderName&gt;Text, and generate&lt;ProviderName&gt;Object.
Provider modules must import the provider's create&lt;ProviderName&gt; function from @ai-sdk/&lt;provider-name&gt;, and import generateText, streamText, generateObject from the core ai package, as well as the log utility from ../../scripts/modules/utils.js.
Implement the generateText, streamText, and generateObject wrapper functions in provider modules with basic validation and try/catch error handling.
Provider-specific wrappers for Vercel AI SDK functions must be implemented in src/ai-providers/*.js, each file corresponding to a provider.
Files:
src/ai-providers/grok-cli.js
**/*.js
📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)
**/*.js: Declare and initialize global variables at the top of modules to avoid hoisting issues.
Use proper function declarations to avoid hoisting issues and initialize variables before they are referenced.
Do not reference variables before their declaration in module scope.
Use dynamic imports (import()) to avoid initialization order issues in modules.
Files:
src/ai-providers/grok-cli.js
.changeset/*.md
📄 CodeRabbit inference engine (.cursor/rules/changeset.mdc)
.changeset/*.md: When running `npm run changeset` or `npx changeset add`, provide a concise summary of the changes for the `CHANGELOG.md` in imperative mood, typically a single line, and not a detailed Git commit message.
The changeset summary should be user-facing, describing what changed in the released version that is relevant to users or consumers of the package.
Do not use your detailed Git commit message body as the changeset summary.
Files:
.changeset/sweet-maps-rule.md
.changeset/*
📄 CodeRabbit inference engine (.cursor/rules/new_features.mdc)
Create appropriate changesets for new features, use semantic versioning, include tagged system information in release notes, and document breaking changes if any.
Files:
.changeset/sweet-maps-rule.md
package.json
📄 CodeRabbit inference engine (.cursor/rules/test_workflow.mdc)
Add and update test scripts in package.json to include test, test:watch, test:coverage, test:unit, test:integration, test:e2e, and test:ci
Files:
package.json
🧠 Learnings (23)
📓 Common learnings
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1252
File: package.json:130-132
Timestamp: 2025-09-26T19:03:33.202Z
Learning: In the eyaltoledano/claude-task-master repository, packages are bundled using tsdown during the build process, which means dependencies imported by the source code (including tm internal packages like tm/ai-sdk-provider-grok-cli) are included in the final bundle and don't need to be available as separate runtime dependencies, so they should remain as devDependencies rather than being moved to dependencies.
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1178
File: packages/tm-core/src/auth/config.ts:5-7
Timestamp: 2025-09-02T21:51:27.921Z
Learning: The user Crunchyman-ralph prefers not to use node: scheme imports (e.g., 'node:os', 'node:path') for Node.js core modules and considers suggestions to change bare imports to node: scheme as too nitpicky.
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to tests/unit/ai-providers/*.test.js : Create unit tests for the new provider in tests/unit/ai-providers/<provider-name>.test.js, mocking ai-sdk/<provider-name> and core ai module functions, and testing all exported functions for correct behavior and error handling.
Applied to files:
- packages/ai-sdk-provider-grok-cli/src/errors.test.ts
- src/ai-providers/grok-cli.js
- packages/ai-sdk-provider-grok-cli/src/json-extractor.test.ts
- packages/ai-sdk-provider-grok-cli/src/message-converter.test.ts
- packages/ai-sdk-provider-grok-cli/README.md
- packages/ai-sdk-provider-grok-cli/src/grok-cli-provider.test.ts
- packages/ai-sdk-provider-grok-cli/package.json
- package.json
📚 Learning: 2025-09-26T19:03:33.202Z
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1252
File: package.json:130-132
Timestamp: 2025-09-26T19:03:33.202Z
Learning: In the eyaltoledano/claude-task-master repository, packages are bundled using tsdown during the build process, which means dependencies imported by the source code (including tm internal packages like tm/ai-sdk-provider-grok-cli) are included in the final bundle and don't need to be available as separate runtime dependencies, so they should remain as devDependencies rather than being moved to dependencies.
Applied to files:
- packages/tm-core/package.json
- packages/ai-sdk-provider-grok-cli/tsconfig.json
- src/ai-providers/grok-cli.js
- tsconfig.json
- packages/ai-sdk-provider-grok-cli/package.json
- package.json
📚 Learning: 2025-09-26T19:10:32.889Z
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1252
File: tsconfig.json:22-28
Timestamp: 2025-09-26T19:10:32.889Z
Learning: In the eyaltoledano/claude-task-master repository, all internal tm/ package path mappings in tsconfig.json consistently point to TypeScript source files (e.g., "./packages/*/src/index.ts") rather than built JavaScript. This is intentional architecture because tsdown bundles internal packages directly from source during build time, eliminating the need for separate compilation of internal packages.
Applied to files:
- packages/ai-sdk-provider-grok-cli/tsconfig.json
- tsconfig.json
📚 Learning: 2025-09-26T19:05:47.526Z
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1252
File: packages/ai-sdk-provider-grok-cli/package.json:11-13
Timestamp: 2025-09-26T19:05:47.526Z
Learning: In the eyaltoledano/claude-task-master repository, internal tm/ packages use a specific export pattern where the "exports" field points to TypeScript source files (./src/index.ts) while "main" points to compiled output (./dist/index.js) and "types" points to source files (./src/index.ts). This pattern is used consistently across internal packages like tm/core and tm/ai-sdk-provider-grok-cli because they are consumed directly during build-time bundling with tsdown rather than being published as separate packages.
Applied to files:
- packages/ai-sdk-provider-grok-cli/tsconfig.json
- tsconfig.json
- packages/ai-sdk-provider-grok-cli/package.json
- package.json
📚 Learning: 2025-09-03T12:16:15.866Z
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1178
File: packages/tm-core/package.json:13-64
Timestamp: 2025-09-03T12:16:15.866Z
Learning: For internal packages in the claude-task-master project, Crunchyman-ralph prefers pointing package.json "types" entries to src .ts files rather than dist .d.ts files for better developer experience (DX), as the packages are not being exported as SDKs.
Applied to files:
packages/ai-sdk-provider-grok-cli/tsconfig.json
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to src/ai-providers/*.js : Provider modules must import the provider's create<ProviderName> function from ai-sdk/<provider-name>, and import generateText, streamText, generateObject from the core ai package, as well as the log utility from ../../scripts/modules/utils.js.
Applied to files:
- src/ai-providers/grok-cli.js
- packages/ai-sdk-provider-grok-cli/src/index.ts
- packages/ai-sdk-provider-grok-cli/README.md
- packages/ai-sdk-provider-grok-cli/package.json
- package.json
- packages/ai-sdk-provider-grok-cli/src/grok-cli-provider.ts
📚 Learning: 2025-09-24T15:46:28.029Z
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1114
File: src/ai-providers/gemini-cli.js:12-12
Timestamp: 2025-09-24T15:46:28.029Z
Learning: When AI SDK provider packages are moved from optional dependencies to required dependencies in package.json, static imports should be used instead of dynamic imports with error handling, as the package is guaranteed to be available at runtime.
Applied to files:
- src/ai-providers/grok-cli.js
- package.json
📚 Learning: 2025-07-18T17:07:39.336Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/architecture.mdc:0-0
Timestamp: 2025-07-18T17:07:39.336Z
Learning: Applies to src/ai-providers/*.js : Provider-specific wrappers for Vercel AI SDK functions must be implemented in src/ai-providers/*.js, each file corresponding to a provider.
Applied to files:
- src/ai-providers/grok-cli.js
- packages/ai-sdk-provider-grok-cli/src/index.ts
- packages/ai-sdk-provider-grok-cli/README.md
- packages/ai-sdk-provider-grok-cli/package.json
- packages/ai-sdk-provider-grok-cli/src/grok-cli-provider.ts
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to src/ai-providers/*.js : Create a new provider module file in src/ai-providers/ named <provider-name>.js when adding a new AI provider.
Applied to files:
src/ai-providers/grok-cli.js
📚 Learning: 2025-09-17T19:09:08.882Z
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1211
File: jest.resolver.cjs:8-15
Timestamp: 2025-09-17T19:09:08.882Z
Learning: In the eyaltoledano/claude-task-master project, the team only uses .ts files and does not plan to use .tsx or .mts extensions, so Jest resolver and build tooling should focus on .js → .ts mapping only.
Applied to files:
tsconfig.json
📚 Learning: 2025-09-26T19:07:10.462Z
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1252
File: packages/ai-sdk-provider-grok-cli/package.json:21-35
Timestamp: 2025-09-26T19:07:10.462Z
Learning: In the eyaltoledano/claude-task-master repository, the tsdown build configuration uses `noExternal: [/^tm\//]` which means internal tm/ packages are bundled into the final output while external npm dependencies remain external and are resolved from the root package.json dependencies at runtime. This eliminates the need for peer dependencies in internal packages since the root package.json already provides the required external dependencies.
Applied to files:
- tsconfig.json
- package.json
📚 Learning: 2025-08-03T12:13:33.875Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/test_workflow.mdc:0-0
Timestamp: 2025-08-03T12:13:33.875Z
Learning: Applies to package.json : Add and update test scripts in package.json to include test, test:watch, test:coverage, test:unit, test:integration, test:e2e, and test:ci
Applied to files:
- packages/ai-sdk-provider-grok-cli/src/json-extractor.test.ts
- package.json
📚 Learning: 2025-07-18T17:16:13.793Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tests.mdc:0-0
Timestamp: 2025-07-18T17:16:13.793Z
Learning: Applies to **/*.test.js : Mock readJSON and writeJSON to avoid real file system interactions in tests.
Applied to files:
packages/ai-sdk-provider-grok-cli/src/json-extractor.test.ts
📚 Learning: 2025-07-18T17:16:13.793Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tests.mdc:0-0
Timestamp: 2025-07-18T17:16:13.793Z
Learning: Applies to **/*.test.js : Do not import real AI service clients in tests; create fully mocked versions that return predictable responses.
Applied to files:
packages/ai-sdk-provider-grok-cli/src/json-extractor.test.ts
📚 Learning: 2025-07-18T17:12:57.903Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/new_features.mdc:0-0
Timestamp: 2025-07-18T17:12:57.903Z
Learning: Applies to scripts/modules/**/*.test.js : Test core logic independently with both data formats, mock file system operations, test tag resolution behavior, and verify migration compatibility in unit tests.
Applied to files:
packages/ai-sdk-provider-grok-cli/src/json-extractor.test.ts
📚 Learning: 2025-07-18T17:16:13.793Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tests.mdc:0-0
Timestamp: 2025-07-18T17:16:13.793Z
Learning: Applies to tests/{unit,integration,e2e}/**/*.test.js : Mock console output and verify correct formatting in UI function tests. Use flexible assertions like toContain() or toMatch() for formatted output.
Applied to files:
- packages/ai-sdk-provider-grok-cli/src/json-extractor.test.ts
- packages/ai-sdk-provider-grok-cli/src/message-converter.test.ts
📚 Learning: 2025-07-18T17:16:13.793Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tests.mdc:0-0
Timestamp: 2025-07-18T17:16:13.793Z
Learning: Applies to tests/{unit,integration,e2e}/**/*.test.js : Explicitly handle all options, including defaults and shorthand flags (e.g., -p for --prompt), and include null/undefined checks in test implementations for parameters that might be optional.
Applied to files:
packages/ai-sdk-provider-grok-cli/src/message-converter.test.ts
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Update relevant documentation files (such as README.md) to mention the new provider and its configuration.
Applied to files:
packages/ai-sdk-provider-grok-cli/README.md
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to src/ai-providers/*.js : Provider modules must export three functions: generate<ProviderName>Text, stream<ProviderName>Text, and generate<ProviderName>Object.
Applied to files:
packages/ai-sdk-provider-grok-cli/README.md
📚 Learning: 2025-07-31T20:49:04.638Z
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#997
File: apps/extension/package.publish.json:2-8
Timestamp: 2025-07-31T20:49:04.638Z
Learning: In the eyaltoledano/claude-task-master repository, the VS Code extension uses a 3-file packaging system where package.json (with name "extension") is for development within the monorepo, while package.publish.json (with name "task-master-hamster") contains the clean manifest for VS Code marketplace publishing. The different names are intentional and serve distinct purposes in the build and publishing workflow.
Applied to files:
packages/ai-sdk-provider-grok-cli/package.json
📚 Learning: 2025-07-18T17:10:12.881Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/dev_workflow.mdc:0-0
Timestamp: 2025-07-18T17:10:12.881Z
Learning: For CLI usage, install Taskmaster globally with `npm install -g task-master-ai` or use locally via `npx task-master-ai ...`.
Applied to files:
.changeset/sweet-maps-rule.md
📚 Learning: 2025-08-07T13:00:22.966Z
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1090
File: apps/extension/package.json:241-243
Timestamp: 2025-08-07T13:00:22.966Z
Learning: In monorepos, local packages should use "*" as the version constraint in package.json dependencies, as recommended by npm. This ensures the local version from within the same workspace is always used, rather than attempting to resolve from external registries. This applies to packages like task-master-ai within the eyaltoledano/claude-task-master monorepo.
Applied to files:
package.json
🧬 Code graph analysis (9)
packages/ai-sdk-provider-grok-cli/src/errors.test.ts (2)
- packages/ai-sdk-provider-grok-cli/src/errors.ts (8)
  - createAPICallError (59-83), createAuthenticationError (88-96), createTimeoutError (101-119), createInstallationError (124-135), isAuthenticationError (140-154), isTimeoutError (159-166), isInstallationError (171-175), getErrorMetadata (180-187)
- packages/ai-sdk-provider-grok-cli/src/index.ts (8)
  - createAPICallError (45-45), createAuthenticationError (46-46), createTimeoutError (47-47), createInstallationError (48-48), isAuthenticationError (41-41), isTimeoutError (42-42), isInstallationError (43-43), getErrorMetadata (44-44)

packages/ai-sdk-provider-grok-cli/src/errors.ts (1)
- packages/ai-sdk-provider-grok-cli/src/types.ts (1)
  - GrokCliErrorMetadata (29-42)

packages/ai-sdk-provider-grok-cli/src/json-extractor.ts (2)
- packages/ai-sdk-provider-grok-cli/src/index.ts (1)
  - extractJson (64-64)
- mcp-server/src/custom-sdk/json-extractor.js (3)
  - inString (32-32), escapeNext (33-33), char (37-37)

packages/ai-sdk-provider-grok-cli/src/message-converter.ts (1)
- packages/ai-sdk-provider-grok-cli/src/types.ts (2)
  - GrokCliMessage (47-52), GrokCliResponse (57-71)

packages/ai-sdk-provider-grok-cli/src/json-extractor.test.ts (1)
- packages/ai-sdk-provider-grok-cli/src/json-extractor.ts (1)
  - extractJson (14-132)

packages/ai-sdk-provider-grok-cli/src/message-converter.test.ts (1)
- packages/ai-sdk-provider-grok-cli/src/message-converter.ts (4)
  - convertToGrokCliMessages (23-48), convertFromGrokCliResponse (55-112), createPromptFromMessages (119-139), escapeShellArg (146-153)

packages/ai-sdk-provider-grok-cli/src/grok-cli-provider.test.ts (2)
- packages/ai-sdk-provider-grok-cli/src/grok-cli-language-model.ts (2)
  - provider (65-67), GrokCliLanguageModel (38-513)
- packages/ai-sdk-provider-grok-cli/src/grok-cli-provider.ts (2)
  - createGrokCli (52-102), grokCli (108-108)

packages/ai-sdk-provider-grok-cli/src/grok-cli-provider.ts (3)
- src/ai-providers/grok-cli.js (1)
  - GrokCliProvider (10-79)
- packages/ai-sdk-provider-grok-cli/src/types.ts (2)
  - GrokCliModelId (24-24), GrokCliSettings (8-19)
- packages/ai-sdk-provider-grok-cli/src/grok-cli-language-model.ts (2)
  - GrokCliLanguageModel (38-513), provider (65-67)

packages/ai-sdk-provider-grok-cli/src/grok-cli-language-model.ts (4)
- packages/ai-sdk-provider-grok-cli/src/types.ts (3)
  - GrokCliModelId (24-24), GrokCliSettings (8-19), GrokCliLanguageModelOptions (76-81)
- packages/ai-sdk-provider-grok-cli/src/errors.ts (2)
  - createInstallationError (124-135), createAPICallError (59-83)
- packages/ai-sdk-provider-grok-cli/src/json-extractor.ts (1)
  - extractJson (14-132)
- packages/ai-sdk-provider-grok-cli/src/message-converter.ts (1)
  - createPromptFromMessages (119-139)
🪛 markdownlint-cli2 (0.18.1)
.changeset/sweet-maps-rule.md
5-5: First line in a file should be a top-level heading
(MD041, first-line-heading, first-line-h1)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Test
🔇 Additional comments (11)
tsconfig.json (1)
23-28: LGTM: path aliases for Grok CLI provider.
Consistent with repo convention of mapping to src .ts for internal tm/* packages.
Based on learnings
packages/ai-sdk-provider-grok-cli/tsconfig.json (1)
6-10: Verify whether this package is actually compiled.
If tsdown bundles from source and you don't run tsc here, declaration/outDir/sourceMap are unused; consider dropping emit options or adding `"noEmit": true`. If you do build this package, keep as-is.
Based on learnings
Also applies to: 34-36
packages/tm-core/package.json (3)
37-42: No deprecated `vitest.workspace` usage detected
Your codebase doesn't reference the old workspace API. Proceed with the v3 upgrade and review your Vitest config for any remaining deprecation warnings.
33-35: Verify Zod ^4.1.11 compatibility
Schemas found in packages/tm-core/src/repositories/supabase-task-repository.ts (lines 6–29): `TaskStatusSchema`, `TaskUpdateSchema`. Run schema-validation tests and update any deprecated Zod patterns or changed public types.
37-42: No action required: @biomejs/biome is already declared in the root devDependencies and will be hoisted for all workspaces, so the tm-core scripts will resolve the biome binary correctly.

.changeset/sweet-maps-rule.md (1)
1-5: Add missing changesets for `packages/tm-core` and `mcp-server`
Include a minor changeset for each; if any CLI flag names were renamed (e.g. `--max-tokens` → `--max-output-tokens`), mark as a major bump.

⛔ Skipped due to learnings
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1002
File: .changeset/puny-friends-give.md:2-3
Timestamp: 2025-07-17T21:33:57.585Z
Learning: In the eyaltoledano/claude-task-master repository, the MCP server code in mcp-server/src/ is part of the main "task-master-ai" package, not a separate "mcp-server" package. When creating changesets for MCP server changes, use "task-master-ai" as the package name.

Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/glossary.mdc:0-0
Timestamp: 2025-07-18T17:10:53.657Z
Learning: Guidelines for using Changesets (npm run changeset) to manage versioning and changelogs (changeset.mdc).

Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1232
File: packages/build-config/package.json:14-15
Timestamp: 2025-09-22T19:45:13.323Z
Learning: In the eyaltoledano/claude-task-master repository, Crunchyman-ralph intentionally omits version fields from internal packages (like tm/build-config) to prevent changesets from releasing new versions for these packages. This is the desired behavior for internal tooling packages that should not be published or versioned independently.

Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/changeset.mdc:0-0
Timestamp: 2025-07-18T17:07:53.100Z
Learning: Applies to .changeset/*.md : When running `npm run changeset` or `npx changeset add`, provide a concise summary of the changes for the `CHANGELOG.md` in imperative mood, typically a single line, and not a detailed Git commit message.

Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/new_features.mdc:0-0
Timestamp: 2025-07-18T17:12:57.903Z
Learning: Applies to .changeset/* : Create appropriate changesets for new features, use semantic versioning, include tagged system information in release notes, and document breaking changes if any.

Learnt from: eyaltoledano
PR: eyaltoledano/claude-task-master#1069
File: .changeset/floppy-news-buy.md:7-38
Timestamp: 2025-08-02T14:54:52.216Z
Learning: For major feature additions like new CLI commands, eyaltoledano prefers detailed changesets with comprehensive descriptions, usage examples, and feature explanations rather than minimal single-line summaries.

packages/ai-sdk-provider-grok-cli/src/json-extractor.test.ts (1)
57-61: LGTM on failure fallback

The "return original text when parsing fails" path is correctly asserted.
packages/ai-sdk-provider-grok-cli/src/json-extractor.ts (1)
47-55: Confirm output formatting contract (pretty vs minified vs "as-is")

Current implementation pretty-prints with 2-space indentation. Tests initially assumed exact echo. Decide and document the contract; align tests accordingly (recommended: canonical pretty-printed output).
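The recommended canonical pretty-printed contract can be sketched in a few lines. This is a minimal sketch under that assumption; `extractJson` and the fence-stripping regex are illustrative, not the package's actual implementation:

```javascript
// Sketch of the contract under discussion: parse what we can, re-emit
// canonical pretty-printed JSON, and fall back to the original text
// when parsing fails.
function extractJson(text) {
	// Strip markdown fences the model may wrap around JSON (assumed input shape)
	const stripped = text.replace(/^```(?:json)?\s*/i, '').replace(/```\s*$/, '');
	try {
		const parsed = JSON.parse(stripped);
		// Canonical form: pretty-printed with 2-space indentation
		return JSON.stringify(parsed, null, 2);
	} catch {
		// Contract: return the original text untouched on failure
		return text;
	}
}

console.log(extractJson('{"a":1}')); // pretty-printed, not echoed verbatim
console.log(extractJson('not json')); // falls back to the original text
```

Pinning the contract this way lets the tests assert exact output instead of guessing between echo and re-serialization.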
packages/ai-sdk-provider-grok-cli/src/message-converter.ts (2)
146-153: Shell escaping is POSIX-only; verify runtime shell

This quoting won't be correct on Windows cmd.exe/PowerShell. Confirm the CLI always runs in POSIX shells or add a Windows branch.
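For reference, the POSIX-only quoting at issue is essentially single-quote wrapping. This is a sketch, not the package's code; a Windows branch would need cmd.exe/PowerShell rules instead:

```javascript
// Minimal POSIX-only escaping (assumed shape): wrap in single quotes and
// escape embedded single quotes as '\''. This is NOT correct for Windows
// cmd.exe or PowerShell, which is the review's point: gate it on the
// target shell or avoid shelling out entirely.
function escapeShellArgPosix(arg) {
	return `'${arg.replace(/'/g, `'\\''`)}'`;
}

console.log(escapeShellArgPosix("it's fine")); // 'it'\''s fine'
```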
23-48: LGTM: message normalization covers common AI SDK shapes

Text parts are concatenated and objects fall back to `.text` or JSON-stringified payloads. Trimming is appropriate.
packages/ai-sdk-provider-grok-cli/package.json (1)
21-35: Move provider packages out of `dependencies`.

With tsdown bundling these internal packages, we keep provider modules in `peerDependencies` (so the host supplies them) and in `devDependencies` (for local builds/tests). Listing them under `dependencies` duplicates installations and breaks the established workspace pattern. Shift them back into `devDependencies` and drop the redundant `dependencies` block. Apply this diff:
```diff
-	"dependencies": {
-		"@ai-sdk/provider": "^2.0.0",
-		"@ai-sdk/provider-utils": "^3.0.10",
-		"jsonc-parser": "^3.3.1"
-	},
-	"devDependencies": {
-		"@types/node": "^22.18.6",
-		"typescript": "^5.9.2",
-		"vitest": "^3.2.4"
-	},
+	"devDependencies": {
+		"@ai-sdk/provider": "^2.0.0",
+		"@ai-sdk/provider-utils": "^3.0.10",
+		"jsonc-parser": "^3.3.1",
+		"@types/node": "^22.18.6",
+		"typescript": "^5.9.2",
+		"vitest": "^3.2.4"
+	},
```

Based on learnings
⛔ Skipped due to learnings
- Crunchyman-ralph (PR #1252, package.json:130-132, 2025-09-26): In the eyaltoledano/claude-task-master repository, packages are bundled using tsdown during the build process, which means dependencies imported by the source code (including tm internal packages like tm/ai-sdk-provider-grok-cli) are included in the final bundle and don't need to be available as separate runtime dependencies, so they should remain as devDependencies rather than being moved to dependencies.
- Crunchyman-ralph (PR #1114, src/ai-providers/gemini-cli.js:12, 2025-09-24): When AI SDK provider packages are moved from optional dependencies to required dependencies in package.json, static imports should be used instead of dynamic imports with error handling, as the package is guaranteed to be available at runtime.
- Crunchyman-ralph (PR #1252, packages/ai-sdk-provider-grok-cli/package.json:21-35, 2025-09-26): The tsdown build configuration uses `noExternal: [/^tm\//]`, which means internal tm/ packages are bundled into the final output while external npm dependencies remain external and are resolved from the root package.json dependencies at runtime. This eliminates the need for peer dependencies in internal packages since the root package.json already provides the required external dependencies.
- CR (.cursor/rules/ai_providers.mdc, 2025-07-18): Applies to src/ai-providers/*.js: Provider modules must import the provider's create<ProviderName> function from ai-sdk/<provider-name>, and import generateText, streamText, generateObject from the core ai package, as well as the log utility from ../../scripts/modules/utils.js.
- Crunchyman-ralph (PR #1252, tsconfig.json:22-28, 2025-09-26): All internal tm/ package path mappings in tsconfig.json consistently point to TypeScript source files (e.g., "./packages/*/src/index.ts") rather than built JavaScript. This is intentional architecture because tsdown bundles internal packages directly from source during build time, eliminating the need for separate compilation of internal packages.
- CR (.cursor/rules/ai_providers.mdc, 2025-07-18): Applies to tests/unit/ai-providers/*.test.js: Create unit tests for the new provider in tests/unit/ai-providers/<provider-name>.test.js, mocking ai-sdk/<provider-name> and core ai module functions, and testing all exported functions for correct behavior and error handling.
- CR (.cursor/rules/architecture.mdc, 2025-07-18): Applies to src/ai-providers/*.js: Provider-specific wrappers for Vercel AI SDK functions must be implemented in src/ai-providers/*.js, each file corresponding to a provider.
packages/ai-sdk-provider-grok-cli/src/grok-cli-language-model.ts
```js
it('should convert array content messages', () => {
	const messages = [
		{
			role: 'user',
			content: [
				{ type: 'text', text: 'Hello' },
				{ type: 'text', text: 'World' }
			]
		}
	];

	const result = convertToGrokCliMessages(messages);

	expect(result).toEqual([{ role: 'user', content: 'Hello\nWorld' }]);
});
```
🧹 Nitpick | 🔵 Trivial
Add a case with non-text parts to assert filtering.
Include an array content message that mixes text and non-text (e.g., image) parts to verify non-text is dropped.
As per coding guidelines
🤖 Prompt for AI Agents
In packages/ai-sdk-provider-grok-cli/src/message-converter.test.ts around lines
28-42, add a test case that supplies an array content message containing a mix
of text and non-text parts (e.g., text and image objects) and assert that
convertToGrokCliMessages filters out non-text parts and concatenates only the
text parts with newlines; implement this by creating a messages array with
content like [{ type: 'text', text: 'A' }, { type: 'image', url: 'x' }, { type:
'text', text: 'B' }], calling convertToGrokCliMessages, and expecting the
resulting message content to equal 'A\nB'.
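The requested test reduces to the following behavior. This is a plain sketch; the real `convertToGrokCliMessages` lives in the package, so the body here is an assumption about its contract:

```javascript
// Assumed behavior for array content: keep only text parts, join them
// with newlines, and drop images and other non-text parts.
function convertMessage(message) {
	if (typeof message.content === 'string') {
		return { role: message.role, content: message.content };
	}
	const text = message.content
		.filter((part) => part.type === 'text')
		.map((part) => part.text)
		.join('\n');
	return { role: message.role, content: text };
}

const out = convertMessage({
	role: 'user',
	content: [
		{ type: 'text', text: 'A' },
		{ type: 'image', url: 'x' }, // should be filtered out
		{ type: 'text', text: 'B' }
	]
});
console.log(out); // { role: 'user', content: 'A\nB' }
```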
Co-authored-by: Claude <noreply@anthropic.com> Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
Force-pushed from `c35468b` to `7a3854b`.
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (5)
tests/unit/ai-providers/openai.test.js (1)
5-77: Update tests to exercise the `maxOutputTokens` API

The provider migration standardizes on `maxOutputTokens`, but this suite still validates the deprecated `maxTokens` option. Align the test descriptions and assertions so we cover the new parameter name.

```diff
- * 1. Validation of maxTokens parameter
+ * 1. Validation of maxOutputTokens parameter
@@
-	it('should accept valid maxTokens values', () => {
+	it('should accept valid maxOutputTokens values', () => {
 		expect(() =>
-			provider.validateOptionalParams({ maxTokens: 1000 })
+			provider.validateOptionalParams({ maxOutputTokens: 1000 })
 		).not.toThrow();
 		expect(() =>
-			provider.validateOptionalParams({ maxTokens: 1 })
+			provider.validateOptionalParams({ maxOutputTokens: 1 })
 		).not.toThrow();
 		expect(() =>
-			provider.validateOptionalParams({ maxTokens: '1000' })
+			provider.validateOptionalParams({ maxOutputTokens: '1000' })
 		).not.toThrow();
 	});
-	it('should reject invalid maxTokens values', () => {
-		expect(() => provider.validateOptionalParams({ maxTokens: 0 })).toThrow(
-			Error
-		);
-		expect(() => provider.validateOptionalParams({ maxTokens: -1 })).toThrow(
-			Error
-		);
-		expect(() =>
-			provider.validateOptionalParams({ maxTokens: NaN })
-		).toThrow(Error);
-		expect(() =>
-			provider.validateOptionalParams({ maxTokens: Infinity })
-		).toThrow(Error);
-		expect(() =>
-			provider.validateOptionalParams({ maxTokens: 'invalid' })
-		).toThrow(Error);
-	});
+	it('should reject invalid maxOutputTokens values', () => {
+		expect(() =>
+			provider.validateOptionalParams({ maxOutputTokens: 0 })
+		).toThrow(Error);
+		expect(() =>
+			provider.validateOptionalParams({ maxOutputTokens: -1 })
+		).toThrow(Error);
+		expect(() =>
+			provider.validateOptionalParams({ maxOutputTokens: NaN })
+		).toThrow(Error);
+		expect(() =>
+			provider.validateOptionalParams({ maxOutputTokens: Infinity })
+		).toThrow(Error);
+		expect(() =>
+			provider.validateOptionalParams({ maxOutputTokens: 'invalid' })
+		).toThrow(Error);
+	});
```

package.json (1)
55-103: Fix the nonexistent `ollama-ai-provider-v2` version

`^1.3.1` isn't published on npm, so installs will fail. The latest release as of September 29, 2025 is `0.0.9` (with an optional `1.0.0-alpha.2` tag). Please bump to an actually published range before merging. (npmjs.com)

```diff
-		"ollama-ai-provider-v2": "^1.3.1",
+		"ollama-ai-provider-v2": "^0.0.9",
```

tests/unit/ai-providers/claude-code.test.js (1)
110-124: Keep unit tests synchronous per project rules.

This spec is marked `async` and awaits a dynamic import, violating the repo's "no async tests" guideline. Import the mocked module once at file scope (after the `jest.unstable_mockModule` call) and make the test synchronous, for example:

```diff
-	it('should handle client initialization errors', async () => {
-		// Force an error by making createClaudeCode throw
-		const { createClaudeCode } = await import('ai-sdk-provider-claude-code');
+	const { createClaudeCode } = await import('ai-sdk-provider-claude-code');
+
+	it('should handle client initialization errors', () => {
 		createClaudeCode.mockImplementationOnce(() => {
 			throw new Error('Mock initialization error');
 		});
```

This keeps the test synchronous while still exercising the failure path.
src/ai-providers/base-provider.js (2)
129-144: prepareTokenParam: unused arg + stale JSDoc; keep signature but mark unused and update docs
- modelId is no longer used.
- JSDoc still mentions “either maxTokens or max_completion_tokens”, which is outdated.
Apply:
```diff
- * @param {string} modelId - The model ID
- * @param {number} maxTokens - The maximum tokens value
- * @returns {object} Object with either maxTokens or max_completion_tokens
+ * @param {string} _modelId - (unused) model ID
+ * @param {number} maxTokens - Maximum output tokens
+ * @returns {object} { maxOutputTokens }
@@
-prepareTokenParam(modelId, maxTokens) {
+prepareTokenParam(_modelId, maxTokens) {
```
231-239: Normalize maxOutputTokens via prepareTokenParam in all providers
Raw uses of `maxOutputTokens: params.maxTokens` in base-provider.js (line 237) and gemini-cli.js (lines 428, 532, 591) need to be replaced with `...this.prepareTokenParam(params.modelId, params.maxTokens)` to ensure proper coercion. No legacy `max_completion_tokens` remain.
```js
const updatedTaskSchema = z.object({
	id: z.int(),
	title: z.string(), // Title should be preserved, but check it exists
	description: z.string(),
	status: z.string(),
	dependencies: z.array(z.union([z.int(), z.string()])),
	priority: z.string().nullable().prefault('medium'),
	details: z.string().nullable().prefault(''),
	testStrategy: z.string().nullable().prefault(''),
	subtasks: z
		.array(
			z.object({
				id: z
					.int()
					.positive()
					.describe('Sequential subtask ID starting from 1'),
				title: z.string(),
				description: z.string(),
				status: z.string(),
				dependencies: z.array(z.int()).nullable().prefault([]),
				details: z.string().nullable().prefault(''),
				testStrategy: z.string().nullable().prefault('')
			})
		)
		.nullable()
		.prefault([])
}); // Allows parsing even if AI adds extra fields, but validation focuses on schema
```
Keep the updated-task schema strict
With the move back to plain z.object, any extra keys from the AI payload survive validation and get written out, which contradicts the task schema contract. Please switch this back to a strict object so we drop unknown fields before persisting.
```diff
-const updatedTaskSchema = z.object({
+const updatedTaskSchema = z.strictObject({
 	id: z.int(),
 	title: z.string(), // Title should be preserved, but check it exists
 	description: z.string(),
 	status: z.string(),
 	dependencies: z.array(z.union([z.int(), z.string()])),
 	priority: z.string().nullable().prefault('medium'),
 	details: z.string().nullable().prefault(''),
 	testStrategy: z.string().nullable().prefault(''),
 	subtasks: z
```

Based on learnings
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```js
const updatedTaskSchema = z.strictObject({
	id: z.int(),
	title: z.string(), // Title should be preserved, but check it exists
	description: z.string(),
	status: z.string(),
	dependencies: z.array(z.union([z.int(), z.string()])),
	priority: z.string().nullable().prefault('medium'),
	details: z.string().nullable().prefault(''),
	testStrategy: z.string().nullable().prefault(''),
	subtasks: z
		.array(
			z.object({
				id: z
					.int()
					.positive()
					.describe('Sequential subtask ID starting from 1'),
				title: z.string(),
				description: z.string(),
				status: z.string(),
				dependencies: z.array(z.int()).nullable().prefault([]),
				details: z.string().nullable().prefault(''),
				testStrategy: z.string().nullable().prefault('')
			})
		)
		.nullable()
		.prefault([])
}); // Allows parsing even if AI adds extra fields, but validation focuses on schema
```
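The invariant the strict-schema suggestion enforces, no unknown AI-added keys surviving to disk, can be illustrated with plain objects. This is a sketch only; the actual enforcement in the suggestion is Zod's `strictObject`, which rejects unknown keys at parse time:

```javascript
// Illustration of the invariant: only keys declared by the task contract
// survive; anything extra the model invents never reaches persistence.
const TASK_KEYS = [
	'id', 'title', 'description', 'status', 'dependencies',
	'priority', 'details', 'testStrategy', 'subtasks'
];

function pickKnownKeys(task) {
	return Object.fromEntries(
		Object.entries(task).filter(([key]) => TASK_KEYS.includes(key))
	);
}

const cleaned = pickKnownKeys({ id: 1, title: 'T', hallucinated: 'x' });
console.log(cleaned); // { id: 1, title: 'T' }
```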
🤖 Prompt for AI Agents
In scripts/modules/task-manager/update-task-by-id.js around lines 36 to 62, the
z.object schemas are not strict so unknown fields from the AI payload pass
validation and get persisted; update the top-level updatedTaskSchema to be
strict (e.g. call .strict() on the task object) and also make the nested subtask
object schema strict so extra keys on subtasks are dropped before persisting;
keep all existing field definitions and defaults unchanged, only add .strict()
to the relevant z.object() calls.
```js
// Mock CLI failure scenario
jest.unstable_mockModule('ai-sdk-provider-claude-code', () => ({
	createClaudeCode: jest.fn(() => {
		throw new Error('Claude Code CLI not found');
	})
}));

// Import the provider after mocking
```
🛠️ Refactor suggestion | 🟠 Major
Mock config-manager to avoid unintended I/O during getClient()
getClient reads settings via config-manager before failing on the mocked CLI. To keep tests hermetic, mock the settings module too.
Apply:
```diff
 // Mock CLI failure scenario
 jest.unstable_mockModule('ai-sdk-provider-claude-code', () => ({
 	createClaudeCode: jest.fn(() => {
 		throw new Error('Claude Code CLI not found');
 	})
 }));
+
+// Mock config-manager lookup to avoid fs/env coupling
+jest.unstable_mockModule('../../scripts/modules/config-manager.js', () => ({
+	getClaudeCodeSettingsForCommand: jest.fn(() => ({}))
+}));
```

Based on learnings.
📝 Committable suggestion
```js
// Mock CLI failure scenario
jest.unstable_mockModule('ai-sdk-provider-claude-code', () => ({
	createClaudeCode: jest.fn(() => {
		throw new Error('Claude Code CLI not found');
	})
}));

// Mock config-manager lookup to avoid fs/env coupling
jest.unstable_mockModule('../../scripts/modules/config-manager.js', () => ({
	getClaudeCodeSettingsForCommand: jest.fn(() => ({}))
}));

// Import the provider after mocking
```
Actionable comments posted: 13
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (6)
scripts/modules/task-manager/scope-adjustment.js (2)
369-378: Prefer generateTextService + manual JSON parse for structured output

Per guidelines, generateObjectService has shown reliability issues across providers/schemas. Switch to generateTextService and reuse the robust parsing pattern you have in update-task-by-id/update-tasks with Zod validation against subtaskSchema.
Based on learnings
622-724: Surface regen telemetry and show CLI usage summary
- regenerateSubtasksForComplexity doesn’t return telemetry; scopeUpTask/scopeDownTask only aggregate from adjustTaskComplexity. Add telemetryData to the regen result and aggregate it alongside adjust telemetry. If outputFormat === 'text' and telemetry exists, call displayAiUsageSummary.
Apply:
```diff
@@
-import {
-	generateObjectService,
-	generateTextService
-} from '../ai-services-unified.js';
+import {
+	generateObjectService,
+	generateTextService
+} from '../ai-services-unified.js';
+import { displayAiUsageSummary } from '../ui.js';
@@
-	const aiResult = await generateObjectService({
+	const aiResult = await generateObjectService({
 		role: context.research ? 'research' : 'main',
 		session: context.session,
@@
-	const generatedSubtasks = aiResult.mainResult.subtasks || [];
+	const generatedSubtasks = aiResult.mainResult.subtasks || [];
@@
-	return {
+	return {
 		updatedTask: task,
 		regenerated: true,
 		preserved: preservedSubtasks.length,
-		generated: normalizedGeneratedSubtasks.length
+		generated: normalizedGeneratedSubtasks.length,
+		telemetryData: aiResult.telemetryData
 	};
@@
-	let combinedTelemetryData = null;
+	let combinedTelemetryData = null;
@@
-	// Combine telemetry data
-	if (adjustResult.telemetryData) {
+	// Combine telemetry data (adjust + regen)
+	const accumulate = (td) => {
+		if (!td) return;
+		if (!combinedTelemetryData) combinedTelemetryData = { ...td };
+		else {
+			combinedTelemetryData.inputTokens += td.inputTokens || 0;
+			combinedTelemetryData.outputTokens += td.outputTokens || 0;
+			combinedTelemetryData.totalTokens += td.totalTokens || 0;
+			combinedTelemetryData.totalCost += td.totalCost || 0;
+		}
+	};
+	if (adjustResult.telemetryData) accumulate(adjustResult.telemetryData);
+	if (subtaskResult.telemetryData) accumulate(subtaskResult.telemetryData);
@@
 	// Write updated data
 	writeJSON(tasksPath, data, projectRoot, tag);
 	if (outputFormat === 'text') {
 		log('info', `Successfully scoped up ${updatedTasks.length} task(s)`);
+		if (combinedTelemetryData) {
+			displayAiUsageSummary(combinedTelemetryData, 'cli');
+		}
 	}
@@
 	if (outputFormat === 'text') {
 		log('info', `Successfully scoped down ${updatedTasks.length} task(s)`);
+		if (combinedTelemetryData) {
+			displayAiUsageSummary(combinedTelemetryData, 'cli');
+		}
 	}
```
Also applies to: 768-874
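The accumulate pattern suggested for combining telemetry is small enough to sanity-check standalone. Field names follow the telemetry objects referenced in the review; the helper itself is illustrative:

```javascript
// Sum token and cost counters across multiple AI calls, tolerating nulls
// (e.g., when a provider returns no telemetry for one of the calls).
function accumulateTelemetry(total, td) {
	if (!td) return total;
	if (!total) return { ...td };
	return {
		...total,
		inputTokens: total.inputTokens + (td.inputTokens || 0),
		outputTokens: total.outputTokens + (td.outputTokens || 0),
		totalTokens: total.totalTokens + (td.totalTokens || 0),
		totalCost: total.totalCost + (td.totalCost || 0)
	};
}

let combined = null;
combined = accumulateTelemetry(combined, {
	inputTokens: 10, outputTokens: 5, totalTokens: 15, totalCost: 0.01
});
combined = accumulateTelemetry(combined, {
	inputTokens: 20, outputTokens: 10, totalTokens: 30, totalCost: 0.02
});
console.log(combined.totalTokens); // 45
```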
scripts/modules/config-manager.js (2)
773-855: Align MCP key-status with `isApiKeySet`; normalize provider and early-return for CLI/non-API providers.
- `getMcpApiKeyStatus` doesn't early-return for `grok-cli`, `gemini-cli`, or `mcp`, so they'll show "missing key" despite not requiring one.
- Provider name isn't normalized before the `switch`, risking mismatches.
- `placeholderValue` is assigned but never used; placeholder detection is too narrow.

Proposed fix:
As per coding guidelines
```diff
 function getMcpApiKeyStatus(providerName, projectRoot = null) {
 @@
-	try {
+	try {
+		const p = providerName?.toLowerCase?.();
+		if (!p) return false;
+		// Providers not requiring API keys (parity with isApiKeySet + claude-code)
+		if (
+			p === 'claude-code' ||
+			['ollama', 'grok-cli', 'gemini-cli', 'mcp', 'bedrock'].includes(p)
+		) {
+			return true;
+		}
 @@
-		switch (providerName) {
+		switch (p) {
 @@
-			case 'ollama':
-				return true; // No key needed
-			case 'claude-code':
-				return true; // No key needed
+			// (no-op: handled by early-return)
 @@
-		return !!apiKeyToCheck && !/KEY_HERE$/.test(apiKeyToCheck);
+		if (!apiKeyToCheck || apiKeyToCheck.trim() === '') return false;
+		if (placeholderValue && apiKeyToCheck === placeholderValue) return false;
+		return !/(^YOUR_.*_API_KEY_HERE$|KEY_HERE$)/.test(apiKeyToCheck);
```
1-20: Reminder: Restart MCP server after merging.

This module underpins MCP behavior; per project guidance, restart the MCP server so schema/logic changes (e.g., key checks, settings parsing) take effect.
packages/ai-sdk-provider-grok-cli/src/grok-cli-language-model.ts (2)
124-151: Harden timeout/settle logic to avoid double-settle.

If the timeout fires and exit/error later resolves, the promise can attempt multiple settlements. Guard with a settled flag and clear timers once.
```diff
-	return new Promise((resolve, reject) => {
+	return new Promise((resolve, reject) => {
+		let settled = false;
+		const safeResolve = (v: any) => { if (!settled) { settled = true; if (timeoutId) clearTimeout(timeoutId); resolve(v); } };
+		const safeReject = (e: any) => { if (!settled) { settled = true; if (timeoutId) clearTimeout(timeoutId); reject(e); } };
@@
-		if (timeout > 0) {
-			timeoutId = setTimeout(() => {
-				child.kill('SIGTERM');
-				reject(
+		if (timeout > 0) {
+			timeoutId = setTimeout(() => {
+				child.kill('SIGTERM');
+				safeReject(
 					createTimeoutError({
 						message: `Grok CLI command timed out after ${timeout}ms`,
 						timeoutMs: timeout,
 						promptExcerpt: args.join(' ').substring(0, 200)
 					})
 				);
 			}, timeout);
 		}
@@
-		child.on('error', (error) => {
-			if (timeoutId) clearTimeout(timeoutId);
+		child.on('error', (error) => {
 			if ((error as any).code === 'ENOENT') {
-				reject(createInstallationError({}));
+				safeReject(createInstallationError({}));
 			} else {
-				reject(
+				safeReject(
 					createAPICallError({
 						message: `Failed to execute Grok CLI: ${error.message}`,
 						code: (error as any).code,
 						stderr: error.message,
 						isRetryable: false
 					})
 				);
 			}
 		});
@@
-		child.on('exit', (exitCode) => {
-			if (timeoutId) clearTimeout(timeoutId);
-			resolve({
+		child.on('exit', (exitCode) => {
+			safeResolve({
 				stdout: stdout.trim(),
 				stderr: stderr.trim(),
 				exitCode: exitCode || 0
 			});
 		});
```

Also applies to: 162-187
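The settle-once guard generalizes beyond this file; here is a standalone sketch (names are illustrative, not taken from the package):

```javascript
// Whichever of timeout / error / exit fires first wins; later settlements
// are no-ops, and the pending timer is cleared exactly once.
function settleOnce(resolve, reject, timeoutId) {
	let settled = false;
	const guard = (fn) => (value) => {
		if (settled) return;
		settled = true;
		if (timeoutId) clearTimeout(timeoutId);
		fn(value);
	};
	return { safeResolve: guard(resolve), safeReject: guard(reject) };
}

// Synchronous demonstration: the second settlement is ignored.
const calls = [];
const { safeResolve, safeReject } = settleOnce(
	(v) => calls.push(['resolve', v]),
	(e) => calls.push(['reject', e.message]),
	null
);
safeResolve('exit-ok');
safeReject(new Error('late timeout')); // ignored: already settled
console.log(calls); // [ [ 'resolve', 'exit-ok' ] ]
```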
279-284: Don’t escape shell args when calling spawn; pass the prompt raw
With spawn defaulting to `shell: false`, escaping adds literal quotes/backslashes and can corrupt the prompt. Update the import and args:

```diff
-import {
-	convertFromGrokCliResponse,
-	createPromptFromMessages,
-	escapeShellArg
-} from './message-converter.js';
+import { convertFromGrokCliResponse, createPromptFromMessages } from './message-converter.js';
@@
-	const args = ['--prompt', escapeShellArg(prompt)];
+	const args = ['--prompt', prompt];
```
```md
### Authentication Priority

Task Master will attempt authentication in this order:

1. **Environment Variable** (optional): `CLAUDE_CODE_OAUTH_TOKEN`
   - Useful for CI/CD environments or when you want to override the default token
   - Not required if you've set up the CLI token

2. **Claude Code CLI Token** (recommended): Token managed by `claude setup-token`
   - Automatically used when available
   - Most convenient for local development

3. **Fallback**: Error if neither is available
```
🛠️ Refactor suggestion | 🟠 Major
Correct auth precedence and call out ANTHROPIC_API_KEY caveat.
Provider may prefer API keys over OAuth; document explicit order and warn about accidental pickup of ANTHROPIC_API_KEY.
```diff
-### Authentication Priority
-
-Task Master will attempt authentication in this order:
-
-1. **Environment Variable** (optional): `CLAUDE_CODE_OAUTH_TOKEN`
-   - Useful for CI/CD environments or when you want to override the default token
-   - Not required if you've set up the CLI token
-
-2. **Claude Code CLI Token** (recommended): Token managed by `claude setup-token`
-   - Automatically used when available
-   - Most convenient for local development
-
-3. **Fallback**: Error if neither is available
+### Authentication Priority
+
+Task Master/AI SDK will attempt authentication in this order:
+
+1. **API key (explicit)**: `CLAUDE_CODE_API_KEY`
+   - Use only if you intentionally want key-based auth (bills API usage).
+   - Overrides token-based paths.
+2. **OAuth token (explicit)**: `CLAUDE_CODE_OAUTH_TOKEN`
+   - Recommended for CI/CD when you can't rely on local CLI storage.
+3. **Claude Code CLI token (recommended for local)**: Managed by `claude setup-token`
+   - Automatically discovered by the provider.
+4. **Fallback**: Error if none are available.
+
+Note: If `ANTHROPIC_API_KEY` is present in the environment, some provider stacks may pick it up as an API key. Prefer `CLAUDE_CODE_API_KEY` for Claude Code, or unset `ANTHROPIC_API_KEY` for processes using Claude Code to avoid unexpected API-key billing.
```
+Note: If `ANTHROPIC_API_KEY` is present in the environment, some provider stacks may pick it up as an API key. Prefer `CLAUDE_CODE_API_KEY` for Claude Code, or unset `ANTHROPIC_API_KEY` for processes using Claude Code to avoid unexpected API-key billing.Based on learnings.
📝 Committable suggestion
```md
### Authentication Priority

Task Master/AI SDK will attempt authentication in this order:

1. **API key (explicit)**: `CLAUDE_CODE_API_KEY`
   - Use only if you intentionally want key-based auth (bills API usage).
   - Overrides token-based paths.
2. **OAuth token (explicit)**: `CLAUDE_CODE_OAUTH_TOKEN`
   - Recommended for CI/CD when you can't rely on local CLI storage.
3. **Claude Code CLI token (recommended for local)**: Managed by `claude setup-token`
   - Automatically discovered by the provider.
4. **Fallback**: Error if none are available.

Note: If `ANTHROPIC_API_KEY` is present in the environment, some provider stacks may pick it up as an API key. Prefer `CLAUDE_CODE_API_KEY` for Claude Code, or unset `ANTHROPIC_API_KEY` for processes using Claude Code to avoid unexpected API-key billing.
```
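The documented precedence can be mirrored in a tiny resolver. This is a hypothetical helper for illustration; the provider's real lookup also consults CLI-stored tokens on disk:

```javascript
// Hypothetical resolver mirroring the documented order: explicit API key,
// then explicit OAuth token, then the CLI-managed token, else an error.
function resolveClaudeCodeAuth(env, cliToken) {
	if (env.CLAUDE_CODE_API_KEY) {
		return { kind: 'api-key', value: env.CLAUDE_CODE_API_KEY };
	}
	if (env.CLAUDE_CODE_OAUTH_TOKEN) {
		return { kind: 'oauth', value: env.CLAUDE_CODE_OAUTH_TOKEN };
	}
	if (cliToken) {
		return { kind: 'cli-token', value: cliToken };
	}
	throw new Error('No Claude Code credentials available');
}

console.log(resolveClaudeCodeAuth({ CLAUDE_CODE_OAUTH_TOKEN: 'tok' }, null).kind); // "oauth"
```

Making the order explicit in one place is also what keeps an ambient `ANTHROPIC_API_KEY` from silently winning.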
🤖 Prompt for AI Agents
In docs/claude-code-integration.md around lines 35–47, update the Authentication
Priority section to explicitly state the correct precedence and warn about
ANTHROPIC_API_KEY: list the exact order (1. Environment variable
CLAUDE_CODE_OAUTH_TOKEN, 2. Claude Code CLI token from claude setup-token, 3.
Fallback to ANTHROPIC_API_KEY or error), and add a short caution that some
providers will prefer an API key (ANTHROPIC_API_KEY) over OAuth tokens so users
should unset or avoid exporting ANTHROPIC_API_KEY when intending to use the
OAuth token, or document how to override/force the desired credential.
````md
### Environment Variables (Optional)

While not required, you can optionally set:

```bash
export CLAUDE_CODE_OAUTH_TOKEN="your_oauth_token_here"
```

This is only needed in specific scenarios like:
````
🛠️ Refactor suggestion | 🟠 Major
Document CLAUDE_CODE_API_KEY and env isolation (unset ANTHROPIC_API_KEY).
Add API-key option (opt‑in) and show how to isolate env to avoid accidental Anthropic key pickup.
````diff
-### Environment Variables (Optional)
-
-While not required, you can optionally set:
-
-```bash
-export CLAUDE_CODE_OAUTH_TOKEN="your_oauth_token_here"
-```
+### Environment Variables (Optional)
+
+While not required, you can optionally set:
+
+```bash
+# Prefer OAuth in CI/CD
+export CLAUDE_CODE_OAUTH_TOKEN="your_oauth_token_here"
+```
+
+Or, if you intentionally want to use API keys:
+
+```bash
+export CLAUDE_CODE_API_KEY="your_api_key_here"
+```
+
+To avoid accidental API-key usage via Anthropic defaults:
+
+```bash
+# Recommended when using Claude Code to force token-based auth
+unset ANTHROPIC_API_KEY
+```
````

Based on learnings.
🤖 Prompt for AI Agents
In docs/claude-code-integration.md around lines 71 to 79, the environment
variables section only shows CLAUDE_CODE_OAUTH_TOKEN and omits the opt‑in API
key option and guidance to avoid accidentally picking up an Anthropic key;
update the section to present both authentication options (prefer OAuth in CI/CD
with CLAUDE_CODE_OAUTH_TOKEN, or opt‑in API key via CLAUDE_CODE_API_KEY) and add
a short note and command showing how to unset ANTHROPIC_API_KEY to isolate the
environment and prevent accidental key usage.
```js
const result = await generateText({
	model: client('sonnet'),
	messages: [
		{ role: 'user', content: 'Hello Claude!' }
	]
});
```
🛠️ Refactor suggestion | 🟠 Major
Reflect AI SDK v5 param rename (maxTokens → maxOutputTokens).
Show the v5 option in the example to prevent copy/paste issues post‑migration.
```diff
 const result = await generateText({
 	model: client('sonnet'),
+	maxOutputTokens: 256,
 	messages: [
 		{ role: 'user', content: 'Hello Claude!' }
 	]
 });
```

📝 Committable suggestion
```js
const result = await generateText({
	model: client('sonnet'),
	maxOutputTokens: 256,
	messages: [
		{ role: 'user', content: 'Hello Claude!' }
	]
});
```
🤖 Prompt for AI Agents
In docs/claude-code-integration.md around lines 207 to 213, the example uses the
old AI SDK v4 param name and omits the v5 option; update the
generateText call to include the v5 parameter name maxOutputTokens (instead of
maxTokens) and show it with a representative numeric value so copy/paste users
will use the correct v5 option.
```md
## Security Notes

- OAuth tokens are managed securely by Claude Code CLI
- No API keys need to be stored in your project files
- Tokens are automatically refreshed by the Claude Code CLI
- Environment variables should only be used in secure environments
```
🧹 Nitpick | 🔵 Trivial
Reword security notes; API keys are optional but possible.
Avoid implying API keys are never used; clarify recommended token path and isolation advice.
```diff
-## Security Notes
-
-- OAuth tokens are managed securely by Claude Code CLI
-- No API keys need to be stored in your project files
-- Tokens are automatically refreshed by the Claude Code CLI
-- Environment variables should only be used in secure environments
+## Security Notes
+
+- Prefer OAuth tokens managed by the Claude Code CLI; they are stored/rotated by the CLI.
+- You generally don’t need API keys. If you set `CLAUDE_CODE_API_KEY`, it will be used and may incur API charges.
+- Unset `ANTHROPIC_API_KEY` in processes that should use Claude Code tokens to avoid accidental key pickup.
+- Use environment variables only in secure environments and rotate regularly.
```

Based on learnings.
📝 Committable suggestion
```markdown
## Security Notes

- Prefer OAuth tokens managed by the Claude Code CLI; they are stored/rotated by the CLI.
- You generally don’t need API keys. If you set `CLAUDE_CODE_API_KEY`, it will be used and may incur API charges.
- Unset `ANTHROPIC_API_KEY` in processes that should use Claude Code tokens to avoid accidental key pickup.
- Use environment variables only in secure environments and rotate regularly.
```
🤖 Prompt for AI Agents
In docs/claude-code-integration.md around lines 217 to 223, the Security Notes
section currently states API keys are not needed which is misleading; update the
wording to state that using the Claude Code CLI for OAuth tokens is the
recommended and secure option but API keys can also be used if necessary, and
add a short sentence advising storing any API keys or tokens in isolated, secure
environments (e.g., secrets manager or environment variables restricted to
CI/servers), plus recommend automatic token refresh and minimal scopes. Ensure
the revised bullets clearly separate the recommended path (CLI OAuth) from the
optional API key path and include isolation and least-privilege guidance.
```js
// Ensure new subtasks have unique sequential IDs after the preserved ones
const maxPreservedId = preservedSubtasks.reduce(
	(max, st) => Math.max(max, st.id || 0),
	0
);
let nextId = maxPreservedId + 1;
const idMapping = new Map();
const normalizedGeneratedSubtasks = processedGeneratedSubtasks
	.map((st) => {
		const originalId = st.id;
		const newId = nextId++;
		idMapping.set(originalId, newId);
		return {
			...st,
			id: newId
		};
	})
	.map((st) => ({
		...st,
		dependencies: (st.dependencies || []).map((dep) => {
			if (typeof dep !== 'string' || !dep.startsWith(`${task.id}.`)) {
				return dep;
			}
			const [, siblingIdPart] = dep.split('.');
			const originalSiblingId = Number.parseInt(siblingIdPart, 10);
			const remappedSiblingId = idMapping.get(originalSiblingId);
			return remappedSiblingId ? `${task.id}.${remappedSiblingId}` : dep;
		})
	}));
```
🛠️ Refactor suggestion | 🟠 Major
Remap dependencies in preserved subtasks too
You renumber generated subtasks and remap their sibling deps, but preserved subtasks that depend on regenerated siblings still point to old IDs. That silently corrupts the graph when a preserved subtask had a dep like ${task.id}.2 and “2” was regenerated/renumbered. Remap preserved deps using the same idMapping before concatenation.
Apply:
```diff
 let nextId = maxPreservedId + 1;
 const idMapping = new Map();
 const normalizedGeneratedSubtasks = processedGeneratedSubtasks
   .map((st) => {
     const originalId = st.id;
     const newId = nextId++;
     idMapping.set(originalId, newId);
     return {
       ...st,
       id: newId
     };
   })
   .map((st) => ({
     ...st,
     dependencies: (st.dependencies || []).map((dep) => {
       if (typeof dep !== 'string' || !dep.startsWith(`${task.id}.`)) {
         return dep;
       }
       const [, siblingIdPart] = dep.split('.');
       const originalSiblingId = Number.parseInt(siblingIdPart, 10);
       const remappedSiblingId = idMapping.get(originalSiblingId);
       return remappedSiblingId ? `${task.id}.${remappedSiblingId}` : dep;
     })
   }));
+
+// Also remap preserved subtasks' dependencies that point to regenerated siblings
+const remapSiblingDep = (dep) => {
+  if (typeof dep !== 'string' || !dep.startsWith(`${task.id}.`)) return dep;
+  const [, siblingIdPart] = dep.split('.');
+  const originalSiblingId = Number.parseInt(siblingIdPart, 10);
+  const remappedSiblingId = idMapping.get(originalSiblingId);
+  return remappedSiblingId ? `${task.id}.${remappedSiblingId}` : dep;
+};
+const remappedPreservedSubtasks = preservedSubtasks.map((st) => ({
+  ...st,
+  dependencies: (st.dependencies || []).map(remapSiblingDep)
+}));
```

And update the assignment below to use `remappedPreservedSubtasks`.
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In scripts/modules/task-manager/scope-adjustment.js around lines 389 to 417,
preserved subtasks are not having their sibling dependency IDs remapped after
you renumber generated subtasks, causing preserved subtasks to still reference
old IDs; before concatenating preservedSubtasks with
normalizedGeneratedSubtasks, iterate preservedSubtasks and replace any
dependency strings that match `${task.id}.<num>` by looking up the numeric part
in idMapping and substituting the remapped id (leave non-matching deps
untouched), produce remappedPreservedSubtasks, and then change the later
assignment/concatenation to use remappedPreservedSubtasks instead of the
original preservedSubtasks.
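The remap described above can be sketched as a standalone function. The task id (7) and the map contents are hypothetical illustration values, not data from the PR:

```javascript
// Old generated subtask ids -> new sequential ids after renumbering.
const taskId = 7;
const idMapping = new Map([
	[1, 4],
	[2, 5]
]);

function remapSiblingDep(dep) {
	// Leave non-sibling deps (plain numbers, other tasks' ids) untouched.
	if (typeof dep !== 'string' || !dep.startsWith(`${taskId}.`)) return dep;
	const [, siblingIdPart] = dep.split('.');
	const remapped = idMapping.get(Number.parseInt(siblingIdPart, 10));
	return remapped ? `${taskId}.${remapped}` : dep;
}

// A preserved subtask that depended on regenerated sibling "7.2".
const preserved = [{ id: 3, dependencies: [`${taskId}.2`, 12] }];
const remappedPreserved = preserved.map((st) => ({
	...st,
	dependencies: (st.dependencies || []).map(remapSiblingDep)
}));
```

Applying the same mapping to both generated and preserved subtasks keeps every `${task.id}.<n>` reference pointing at a live sibling.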
Force-pushed from 8f79ce2 to 3c77c0a (compare)
Force-pushed from 3c77c0a to 716a491 (compare)
Actionable comments posted: 18
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (11)
packages/ai-sdk-provider-grok-cli/src/message-converter.ts (3)
65-69: Handle CRLF newlines in JSONL parsing. The current `split('\n')` leaves a trailing `'\r'` on each line; `JSON.parse` then fails. Normalize line endings:

```diff
- const lines = responseText
-   .trim()
-   .split('\n')
-   .filter((line) => line.trim());
+ const lines = responseText
+   .trim()
+   .split(/\r?\n/)
+   .map((line) => line.trim())
+   .filter(Boolean);
```
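For reference, the CRLF-tolerant split suggested above behaves like this on a mixed-line-ending payload (the sample JSONL is illustrative, not real Grok CLI output):

```javascript
// JSONL with CRLF, LF, and a trailing blank line mixed together.
const responseText = '{"a":1}\r\n{"b":2}\n\r\n';

const lines = responseText
	.trim()
	.split(/\r?\n/)       // tolerate both \r\n and \n
	.map((line) => line.trim())
	.filter(Boolean);     // drop empty lines

const parsed = lines.map((line) => JSON.parse(line));
```

With the original `split('\n')`, the first line would still carry `'\r'` and `JSON.parse` would throw.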
90-97: Derive totalTokens when total_tokens is missing. Prefer summing prompt+completion when the total is absent instead of returning 0:

```diff
- usage: assistantMessage.usage
-   ? {
-       promptTokens: assistantMessage.usage.prompt_tokens || 0,
-       completionTokens: assistantMessage.usage.completion_tokens || 0,
-       totalTokens: assistantMessage.usage.total_tokens || 0
-     }
+ usage: assistantMessage.usage
+   ? (() => {
+       const {
+         prompt_tokens = 0,
+         completion_tokens = 0,
+         total_tokens
+       } = assistantMessage.usage;
+       return {
+         promptTokens: prompt_tokens,
+         completionTokens: completion_tokens,
+         totalTokens: total_tokens ?? (prompt_tokens + completion_tokens)
+       };
+     })()
    : undefined
```
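The fallback can be isolated into a small mapper. The raw usage shape (`prompt_tokens`/`completion_tokens`/`total_tokens`) is assumed from the review comment, not verified against the Grok CLI:

```javascript
// Map a CLI-style usage object to AI SDK style, deriving the total if absent.
function mapUsage(raw) {
	if (!raw) return undefined;
	const { prompt_tokens = 0, completion_tokens = 0, total_tokens } = raw;
	return {
		promptTokens: prompt_tokens,
		completionTokens: completion_tokens,
		// Only fall back to the sum when total_tokens is null/undefined,
		// so an explicit total from the CLI always wins.
		totalTokens: total_tokens ?? prompt_tokens + completion_tokens
	};
}
```

Using `??` rather than `||` matters here: a reported total of `0` would be preserved instead of being replaced by the sum.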
30-41: Nit: avoid serializing large object content by default. Optional: cap JSON.stringify length to prevent massive prompts if non-text parts leak in. Keep current behavior if size is acceptable.

```js
// Example guard (pseudo)
const raw = message.content.text ?? JSON.stringify(message.content);
content = raw.length > 10_000 ? raw.slice(0, 10_000) + '…' : raw;
```

scripts/modules/config-manager.js (1)
312-335: Replace invalid Zod calls and add parse test

- Change `z.int().positive()` to `z.number().int().positive()` and `z.url().optional()` to `z.string().url().optional()` in `BaseSettingsSchema`.
- Add a unit test for `validateClaudeCodeSettings` that parses a representative settings object to lock down the schema and catch future Zod API regressions.

packages/build-config/package.json (1)
18-21: Add standard test scripts to comply with guidelines. This package.json lacks the required test scripts. Add at least stubs for test, test:watch, test:coverage, test:unit, test:integration, test:e2e, and test:ci. As per coding guidelines

```diff
 "scripts": {
-  "build": "tsc",
-  "typecheck": "tsc --noEmit"
+  "build": "tsc",
+  "typecheck": "tsc --noEmit",
+  "test": "vitest run",
+  "test:watch": "vitest",
+  "test:coverage": "vitest run --coverage",
+  "test:unit": "vitest run -t unit",
+  "test:integration": "vitest run -t integration",
+  "test:e2e": "vitest run --dir tests/e2e",
+  "test:ci": "vitest run --coverage --reporter=dot"
 },
```

apps/extension/package.json (1)
227-241: Add test scripts for the extension to meet repo standards. Consider adding basic VS Code extension test scripts to align with the required set. As per coding guidelines

```diff
 "scripts": {
   "vscode:prepublish": "npm run build",
   "build": "npm run build:js && npm run build:css",
@@
-  "check-types": "tsc --noEmit"
+  "check-types": "tsc --noEmit",
+  "test": "node ./node_modules/@vscode/test-electron/out/runTest.js",
+  "test:ci": "node ./node_modules/@vscode/test-electron/out/runTest.js --coverage",
+  "test:watch": "node ./node_modules/@vscode/test-electron/out/runTest.js --watch",
+  "test:coverage": "node ./node_modules/@vscode/test-electron/out/runTest.js --coverage",
+  "test:unit": "node ./node_modules/@vscode/test-electron/out/runTest.js -g unit",
+  "test:integration": "node ./node_modules/@vscode/test-electron/out/runTest.js -g integration",
+  "test:e2e": "node ./node_modules/@vscode/test-electron/out/runTest.js -g e2e"
 },
```

packages/ai-sdk-provider-grok-cli/src/grok-cli-language-model.ts (2)
135-149: Process termination handling: treat signal kills as failure and add SIGKILL fallback. Currently `exitCode || 0` marks signal-terminated processes as success. Also, on timeout you only send SIGTERM. Add a SIGKILL fallback and propagate signal context:

```diff
- let timeoutId: NodeJS.Timeout | undefined;
+ let timeoutId: NodeJS.Timeout | undefined;
+ let forceKillId: NodeJS.Timeout | undefined;
@@
  if (timeout > 0) {
    timeoutId = setTimeout(() => {
      child.kill('SIGTERM');
+     // escalate if process doesn't exit promptly
+     forceKillId = setTimeout(() => {
+       try { child.kill('SIGKILL'); } catch {}
+     }, 2000);
      reject(
        createTimeoutError({
          message: `Grok CLI command timed out after ${timeout}ms`,
          timeoutMs: timeout,
          promptExcerpt: args.join(' ').substring(0, 200)
        })
      );
    }, timeout);
  }
@@
-child.on('exit', (exitCode) => {
+child.on('exit', (exitCode, signal) => {
   if (timeoutId) clearTimeout(timeoutId);
+  if (forceKillId) clearTimeout(forceKillId);
-
-  resolve({
-    stdout: stdout.trim(),
-    stderr: stderr.trim(),
-    exitCode: exitCode || 0
-  });
+  resolve({
+    stdout: stdout.trim(),
+    stderr: (stderr.trim() + (signal ? ` (terminated by ${signal})` : '')).trim(),
+    exitCode: typeof exitCode === 'number' ? exitCode : 1
+  });
});
```

Also applies to: 178-186
200-226: Add warning for unsupported `maxOutputTokens` (AI SDK v5). V5 standardizes on `maxOutputTokens`. Since the CLI doesn’t support it, include it in unsupported settings to avoid silent ignores:

```diff
  if (options.temperature !== undefined) unsupportedParams.push('temperature');
+ if (options.maxOutputTokens !== undefined) unsupportedParams.push('maxOutputTokens');
@@
  setting: param as
    | 'temperature'
+   | 'maxOutputTokens'
    | 'topP'
    | 'topK'
    | 'presencePenalty'
    | 'frequencyPenalty'
    | 'stopSequences'
    | 'seed',
```

scripts/modules/task-manager/update-task-by-id.js (1)
36-42: Minor: Use `z.int()` for brevity/consistency. No behavior change; aligns with other files using `z.int()`.

```diff
- id: z.number().int(),
+ id: z.int(),
```

scripts/modules/task-manager/scope-adjustment.js (2)
369-378: Prefer `generateTextService` + manual parse for reliability. Given provider variance with
`generateObjectService`, switch this call to `generateTextService` and reuse the robust manual parsing + Zod validation pattern used elsewhere.
- Call
generateTextService({ role, session, systemPrompt, prompt, commandName, outputType }).- Extract JSON from
`mainResult` (brace/codeblock strategy), then Zod-validate with `subtaskSchema`.
287-287: Optional: allow zero new subtasks when target ≤ preserved
`Math.max(1, …)` forces at least one new subtask even in heavy scope-down. Consider:

```diff
-const newSubtasksNeeded = Math.max(1, targetSubtaskCount - preservedCount);
+const newSubtasksNeeded = Math.max(0, targetSubtaskCount - preservedCount);
```
```json
"@ai-sdk/amazon-bedrock": "^3.0.23",
"@ai-sdk/anthropic": "^2.0.18",
"@ai-sdk/azure": "^2.0.34",
"@ai-sdk/google": "^2.0.16",
"@ai-sdk/google-vertex": "^3.0.29",
"@ai-sdk/groq": "^2.0.21",
"@ai-sdk/mistral": "^2.0.16",
"@ai-sdk/openai": "^2.0.34",
"@ai-sdk/perplexity": "^2.0.10",
"@ai-sdk/provider": "^2.0.0",
"@ai-sdk/provider-utils": "^3.0.10",
"@ai-sdk/xai": "^2.0.22",
"@aws-sdk/credential-providers": "^3.895.0",
```
🧩 Analysis chain
Verify AI SDK v5/provider version alignment and token param migration.
Ensure single versions across the workspace and no legacy maxTokens usage remains.
Also applies to: 69-75, 72-75, 99-103
🏁 Script executed:

```bash
#!/usr/bin/env bash
set -euo pipefail
echo "== Versions of ai and @ai-sdk/* across workspace =="
fd -t f -g "package.json" | xargs -I{} jq -r '
  .name + " -> ai: " + ((.dependencies.ai // .devDependencies.ai // "—")) +
  ", @ai-sdk/provider: " + ((.dependencies["\u0040ai-sdk/provider"] // .devDependencies["\u0040ai-sdk/provider"] // "—")) +
  ", @ai-sdk/provider-utils: " + ((.dependencies["\u0040ai-sdk/provider-utils"] // .devDependencies["\u0040ai-sdk/provider-utils"] // "—"))
' {}
echo
echo "== Search for legacy token param =="
rg -nP --glob '!**/dist/**' '\bmaxTokens\b|["'\'']maxTokens["'\'']' || echo "No legacy maxTokens found"
```

Length of output: 783
🏁 Script executed:

```bash
#!/usr/bin/env bash
set -euo pipefail
echo "== Listing @ai-sdk/* versions across workspace =="
fd -t f -g "package.json" -x jq -r '.name as $pkg | ([.dependencies, .devDependencies] | add | to_entries[] | select(.key|startswith("@ai-sdk/")) | "\($pkg) -> \(.key): \(.value)")' {}
echo
echo "== Search for legacy maxTokens usage =="
rg -n --glob '!dist/**' '\bmaxTokens\b' -t js -t ts -t jsx -t tsx || echo "No legacy maxTokens found"
```

Length of output: 795
Align all @ai-sdk packages to v5
task-master-ai and @tm/ai-sdk-provider-grok-cli still use @ai-sdk/provider@^2.x and @ai-sdk/provider-utils@^3.x; bump both to ^5.0.0 to match ai@5.0.51. No legacy maxTokens references remain.
🤖 Prompt for AI Agents
In package.json around lines 55 to 67, the @ai-sdk/provider and
@ai-sdk/provider-utils versions are out of sync with ai@v5; update both
dependency versions to "^5.0.0" (or the exact v5 range used across the repo),
run package manager install to update lockfile, and search the repo for any
remaining legacy maxTokens references to remove or rename them to the v5
equivalent; ensure package.json and lockfile are committed together.
```json
"scripts": {
	"test": "vitest run",
	"test:watch": "vitest",
	"test:ui": "vitest --ui",
	"typecheck": "tsc --noEmit"
},
```
🛠️ Refactor suggestion | 🟠 Major
Add the full set of test scripts for consistency.
Round out the scripts to include coverage/unit/integration/e2e/ci. As per coding guidelines
```diff
 "scripts": {
-  "test": "vitest run",
-  "test:watch": "vitest",
-  "test:ui": "vitest --ui",
-  "typecheck": "tsc --noEmit"
+  "test": "vitest run",
+  "test:watch": "vitest",
+  "test:coverage": "vitest run --coverage",
+  "test:unit": "vitest run -t unit",
+  "test:integration": "vitest run -t integration",
+  "test:e2e": "vitest run --dir tests/e2e",
+  "test:ci": "vitest run --coverage --reporter=dot",
+  "test:ui": "vitest --ui",
+  "typecheck": "tsc --noEmit"
 },
```

📝 Committable suggestion
```json
"scripts": {
	"test": "vitest run",
	"test:watch": "vitest",
	"test:coverage": "vitest run --coverage",
	"test:unit": "vitest run -t unit",
	"test:integration": "vitest run -t integration",
	"test:e2e": "vitest run --dir tests/e2e",
	"test:ci": "vitest run --coverage --reporter=dot",
	"test:ui": "vitest --ui",
	"typecheck": "tsc --noEmit"
},
```
🤖 Prompt for AI Agents
In packages/ai-sdk-provider-grok-cli/package.json around lines 11 to 16, the
"scripts" section is missing the full standardized test script set
(coverage/unit/integration/e2e/ci); add scripts such as "test:unit" (vitest
run), "test:coverage" (vitest --coverage), "test:integration" (run integration
tests e.g. NODE_ENV=test vitest --config vitest.integration.config.ts),
"test:e2e" (run end-to-end tests e.g. vitest --config vitest.e2e.config.ts), and
"ci" (run typecheck then test:coverage), while keeping existing
test/test:watch/test:ui/typecheck entries and ensuring any new scripts follow
the repo naming and config conventions.
```json
"devDependencies": {
	"@types/node": "^22.18.6",
	"typescript": "^5.9.2",
	"vitest": "^3.2.4"
},
```
🧹 Nitpick | 🔵 Trivial
🧩 Analysis chain
Unify Vitest version across the monorepo to avoid dual installs.
This package uses Vitest ^3.x while apps/cli uses ^2.x. Prefer a single major.
🏁 Script executed:

```bash
#!/usr/bin/env bash
# List Vitest versions used across all package.json files
fd -t f -g "package.json" | xargs -I{} jq -r '.name + ": " + ((.devDependencies.vitest // .dependencies.vitest // "—"))' {}
```

Length of output: 128
Unify Vitest major version across the monorepo
@tm/cli uses ^2.1.8 while @tm/ai-sdk-provider-grok-cli and @tm/core use ^3.2.4; update all packages to the same major (e.g. ^3.x) to avoid dual installs.
🤖 Prompt for AI Agents
In packages/ai-sdk-provider-grok-cli/package.json around lines 22 to 26, the
devDependency "vitest" is pinned to "^3.2.4" creating a mismatch with other
packages that still use "^2.1.8" (or vice versa); update the vitest
devDependency across the monorepo to a single major (choose ^3.x), i.e., change
the vitest entry in this file (and any other package.json files using ^2.x) to
"^3.x", then run a full install to update the lockfile and run the test suite to
ensure compatibility.
```json
"files": ["dist/**/*", "README.md"],
"publishConfig": {
	"access": "public"
}
```
Remove publishConfig or flip private: publishing flags conflict.
Package is marked private but sets publish access public. Drop publishConfig (or set private: false if you intend to publish).
```diff
 "files": ["dist/**/*", "README.md"],
-"publishConfig": {
-  "access": "public"
-}
```

📝 Committable suggestion
```json
"files": ["dist/**/*", "README.md"],
```
🤖 Prompt for AI Agents
In packages/ai-sdk-provider-grok-cli/package.json around lines 31 to 34, the
package sets a publishConfig with "access": "public" while the package is marked
private, which conflicts; either remove the publishConfig entry entirely or
explicitly set "private": false in package.json if you intend to publish,
ensuring the publishConfig only exists when publishing is allowed.
```ts
import { spawn } from 'child_process';
import { promises as fs } from 'fs';
import { homedir } from 'os';
import { join } from 'path';
import type {
```
🛠️ Refactor suggestion | 🟠 Major
Import ReadableStream explicitly to avoid requiring DOM libs.
Relying on the global ReadableStream can force dom libs in TS configs. Import from Node’s web streams polyfill instead (using bare core import to match your preference). Based on learnings.
```diff
 import { spawn } from 'child_process';
 import { promises as fs } from 'fs';
 import { homedir } from 'os';
 import { join } from 'path';
+import { ReadableStream } from 'stream/web';
```

Also applies to: 400-419
🤖 Prompt for AI Agents
In packages/ai-sdk-provider-grok-cli/src/grok-cli-language-model.ts around lines
5-9 (and also apply to lines 400-419), the code relies on the global
ReadableStream which forces adding DOM libs in TypeScript; explicitly import
ReadableStream from Node's web streams polyfill instead. Update the top imports
to include `import { ReadableStream } from 'node:stream/web'` (or
`node:webstreams` equivalent) and replace any global usage with the imported
identifier so TypeScript no longer requires DOM types; apply the same change to
the usages/imports referenced at lines 400-419.
```diff
 /**
  * Escape shell arguments for safe CLI execution
- * @param {string} arg - Argument to escape
- * @returns {string} Shell-escaped argument
+ * @param arg - Argument to escape
+ * @returns Shell-escaped argument
  */
-export function escapeShellArg(arg) {
+export function escapeShellArg(arg: string | unknown): string {
 	if (typeof arg !== 'string') {
 		arg = String(arg);
 	}

 	// Replace single quotes with '\''
-	return "'" + arg.replace(/'/g, "'\\''") + "'";
+	return "'" + (arg as string).replace(/'/g, "'\\''") + "'";
 }
```
🧹 Nitpick | 🔵 Trivial
escapeShellArg is POSIX‑only; document behavior.
Suggest adding a JSDoc note that Windows cmd.exe/PowerShell need different quoting.
🤖 Prompt for AI Agents
In packages/ai-sdk-provider-grok-cli/src/message-converter.ts around lines 141
to 153, the escapeShellArg function currently documents and implements
POSIX-style single-quote escaping only; add a JSDoc note above the function
explicitly stating that this escaping is POSIX-compliant and that Windows
cmd.exe and PowerShell require different quoting/escaping rules (with a brief
pointer to use platform-specific helpers or libraries on Windows). Keep the
implementation unchanged but update the JSDoc to mention the limitation and
recommend platform-specific alternatives.
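The POSIX quoting strategy under discussion can be seen in isolation. This is a standalone re-implementation for illustration, mirroring the reviewed function; on Windows cmd.exe or PowerShell different quoting rules apply:

```javascript
// POSIX-only: wrap in single quotes; each embedded quote becomes '\''
// (close quote, escaped literal quote, reopen quote).
function escapeShellArg(arg) {
	if (typeof arg !== 'string') {
		arg = String(arg);
	}
	return "'" + arg.replace(/'/g, "'\\''") + "'";
}
```

Inside POSIX single quotes every character is literal, so only the quote itself needs the close/escape/reopen dance.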
```json
"declaration": true,
"declarationMap": true,
"sourceMap": true,
"outDir": "./dist",
"baseUrl": ".",
"rootDir": "./src",
"strict": true,
```
🛠️ Refactor suggestion | 🟠 Major
🧩 Analysis chain
Avoid duplicate JS emit: choose noEmit or emitDeclarationOnly to fit tsdown flow.
Right now tsc would emit JS into dist, overlapping with tsdown’s output. Pick one of:
- Option A (preferred if types are resolved from src as per repo practice): no TS emit at all.
- Option B (if you want .d.ts for editors/consumers): emit types only to a separate folder.
Based on learnings
Option A — rely on src types (no emit):

```diff
- "declaration": true,
- "declarationMap": true,
- "sourceMap": true,
- "outDir": "./dist",
+ "noEmit": true,
```

Option B — emit only declarations (no JS), keep bundler JS separate:
```diff
- "declaration": true,
- "declarationMap": true,
- "sourceMap": true,
- "outDir": "./dist",
+ "declaration": true,
+ "declarationMap": true,
+ "emitDeclarationOnly": true,
+ "outDir": "./types",
```

Please confirm which pattern the root build expects for internal packages so we can lock this in.
Also applies to: 24-25, 31-33
🏁 Script executed:
```bash
#!/bin/bash
# Display root tsconfig.json
if [ -f tsconfig.json ]; then
  echo "--- root tsconfig.json ---"
  sed -n '1,200p' tsconfig.json
else
  echo "tsconfig.json not found"
fi
# Find tsdown config files
echo ""
echo "--- tsdown config files ---"
if command -v fd >/dev/null 2>&1; then
  fd tsdown.config -t f
elif command -v find >/dev/null 2>&1; then
  find . -maxdepth 2 -type f -name "tsdown.config*"
else
  echo "No fd or find available"
fi
# Search for noExternal in tsdown config or build scripts
echo ""
echo "--- searching for noExternal ---"
rg -n "noExternal" -C2 || echo "noExternal not found"
```

Length of output: 1476
🏁 Script executed:
```bash
#!/bin/bash
# Show top of per-package tsconfig
echo "--- packages/ai-sdk-provider-grok-cli/tsconfig.json lines 1-30 ---"
sed -n '1,30p' packages/ai-sdk-provider-grok-cli/tsconfig.json || echo "file not found"
# Check for 'extends' in the file
echo ""
echo "--- search for extends ---"
grep -n "extends" packages/ai-sdk-provider-grok-cli/tsconfig.json || echo "no extends"
```

Length of output: 880
Avoid duplicate JS emit in packages/ai-sdk-provider-grok-cli/tsconfig.json
Remove the declaration, declarationMap, sourceMap and outDir options and set "noEmit": true (Option A) or, if you need only type declarations, replace with "emitDeclarationOnly": true plus "outDir": "types" (Option B).
🤖 Prompt for AI Agents
In packages/ai-sdk-provider-grok-cli/tsconfig.json around lines 6 to 12, the
config currently sets "declaration", "declarationMap", "sourceMap" and "outDir"
which can cause duplicate JS emit; remove those four options and either set
"noEmit": true (Option A) or, if you only want type artifacts, replace them with
"emitDeclarationOnly": true and set "outDir": "types" (Option B); keep "rootDir"
and "strict" as-is and ensure the chosen option is the only emit-related setting
in the file.
```json
"strict": true,
"noImplicitAny": true,
"strictNullChecks": true,
"strictFunctionTypes": true,
"strictBindCallApply": true,
"strictPropertyInitialization": true,
"noImplicitThis": true,
"alwaysStrict": true,
"noUnusedLocals": true,
"noUnusedParameters": true,
"noImplicitReturns": true,
"noFallthroughCasesInSwitch": true,
```
🧹 Nitpick | 🔵 Trivial
Tighten TypeScript safety for ESM + SDK code.
Add these strict flags; they catch real bugs and align with modern ESM patterns.
```diff
 "strict": true,
+"verbatimModuleSyntax": true,
+"importsNotUsedAsValues": "error",
+"exactOptionalPropertyTypes": true,
+"noUncheckedIndexedAccess": true,
+"useUnknownInCatchVariables": true,
```
+ "useUnknownInCatchVariables": true,📝 Committable suggestion
```json
"strict": true,
"verbatimModuleSyntax": true,
"importsNotUsedAsValues": "error",
"exactOptionalPropertyTypes": true,
"noUncheckedIndexedAccess": true,
"useUnknownInCatchVariables": true,
"noImplicitAny": true,
"strictNullChecks": true,
"strictFunctionTypes": true,
"strictBindCallApply": true,
"strictPropertyInitialization": true,
"noImplicitThis": true,
"alwaysStrict": true,
"noUnusedLocals": true,
"noUnusedParameters": true,
"noImplicitReturns": true,
"noFallthroughCasesInSwitch": true,
```
🤖 Prompt for AI Agents
In packages/ai-sdk-provider-grok-cli/tsconfig.json around lines 12 to 23, the
compilerOptions need the additional strict flags listed in the review; add
"strict": true, "noImplicitAny": true, "strictNullChecks": true,
"strictFunctionTypes": true, "strictBindCallApply": true,
"strictPropertyInitialization": true, "noImplicitThis": true, "alwaysStrict":
true, "noUnusedLocals": true, "noUnusedParameters": true, "noImplicitReturns":
true and "noFallthroughCasesInSwitch" to the compilerOptions block, then run the
TypeScript build/tsc to surface type errors and fix any resulting type issues
(add explicit types, handle nullable values, initialize properties, and remove
unused locals/params) so the project compiles under the tightened strict
settings.
```diff
 return {
 	updatedTask: task,
 	regenerated: true,
 	preserved: preservedSubtasks.length,
-	generated: processedGeneratedSubtasks.length
+	generated: normalizedGeneratedSubtasks.length
 };
```
🛠️ Refactor suggestion | 🟠 Major
Return telemetry from subtask regeneration; surface usage in callers
Expose aiResult.telemetryData so scopeUpTask/scopeDownTask can aggregate and display usage as per guidelines.
```diff
 return {
   updatedTask: task,
   regenerated: true,
   preserved: preservedSubtasks.length,
-  generated: normalizedGeneratedSubtasks.length
+  generated: normalizedGeneratedSubtasks.length,
+  telemetryData: aiResult.telemetryData
 };
@@
 return {
   updatedTask: task,
   regenerated: false,
   preserved: preservedSubtasks.length,
   generated: 0,
-  error: error.message
+  error: error.message,
+  telemetryData: null
 };
```

I can follow up with a small patch in scopeUpTask/scopeDownTask to aggregate this telemetry and call displayAiUsageSummary when outputFormat === 'text'. Based on learnings
Also applies to: 434-441
🤖 Prompt for AI Agents
In scripts/modules/task-manager/scope-adjustment.js around lines 422-427 (and
also apply the same change at lines 434-441), the returned object from subtask
regeneration does not include aiResult.telemetryData so callers
(scopeUpTask/scopeDownTask) cannot aggregate or display AI usage; modify the
return payload to include a telemetry (or telemetryData) field populated from
aiResult.telemetryData (ensure null/undefined-safe access) so callers can
collect and pass this through to displayAiUsageSummary when outputFormat ===
'text'.
```js
priority: z.string().nullable().prefault('medium'),
details: z.string().nullable().prefault(''),
testStrategy: z.string().nullable().prefault(''),
subtasks: z
```
🧩 Analysis chain
Blocker: .prefault() is not a Zod API; this will crash at load-time
Replace .prefault() with Zod-supported patterns and broaden subtask deps to accept dotted strings. Also treat null as missing via .nullish() so defaults apply.
```diff
- priority: z.string().nullable().prefault('medium'),
- details: z.string().nullable().prefault(''),
- testStrategy: z.string().nullable().prefault(''),
+ priority: z.string().nullish().default('medium'),
+ details: z.string().nullish().default(''),
+ testStrategy: z.string().nullish().default(''),
@@
- dependencies: z.array(z.number().int()).nullable().prefault([]),
- details: z.string().nullable().prefault(''),
- testStrategy: z.string().nullable().prefault('')
+ dependencies: z.array(z.union([z.int(), z.string()])).nullish().default([]),
+ details: z.string().nullish().default(''),
+ testStrategy: z.string().nullish().default('')
@@
- .nullable()
- .prefault([])
+ .nullish()
+ .default([])
```

Run to confirm no custom prefault extensions exist:
Also applies to: 57-60, 63-65
🏁 Script executed:
```bash
#!/bin/bash
rg -nP '\.prefault\s*\(' -C2
rg -nP 'prefault' -g '!**/node_modules/**' -C2
```

Length of output: 2091
Replace .prefault() with Zod’s .nullish().default() and broaden dependency types
.prefault is not part of the official Zod API and will crash at load-time; use .nullish().default(...) instead so defaults apply, and change subtasks’ dependencies to z.union([z.number().int(), z.string()]) to accept dotted strings:
- priority: z.string().nullable().prefault('medium'),
- details: z.string().nullable().prefault(''),
- testStrategy: z.string().nullable().prefault(''),
+ priority: z.string().nullish().default('medium'),
+ details: z.string().nullish().default(''),
+ testStrategy: z.string().nullish().default(''),
@@
- dependencies: z.array(z.number().int()).nullable().prefault([]),
- details: z.string().nullable().prefault(''),
- testStrategy: z.string().nullable().prefault('')
+ dependencies: z.array(z.union([z.number().int(), z.string()])).nullish().default([]),
+ details: z.string().nullish().default(''),
+ testStrategy: z.string().nullish().default('')
@@
- .nullable()
- .prefault([])
+ .nullish()
+ .default([])
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
- priority: z.string().nullable().prefault('medium'),
- details: z.string().nullable().prefault(''),
- testStrategy: z.string().nullable().prefault(''),
- subtasks: z
+ // … other fields …
+ priority: z.string().nullish().default('medium'),
+ details: z.string().nullish().default(''),
+ testStrategy: z.string().nullish().default(''),
+ subtasks: z.array(
+   z.object({
+     id: z.number().int(),
+     title: z.string(),
+     dependencies: z.array(z.union([z.number().int(), z.string()])).nullish().default([]),
+     details: z.string().nullish().default(''),
+     testStrategy: z.string().nullish().default('')
+   })
+ ).nullish().default([])
+ // … following fields …
🤖 Prompt for AI Agents
In scripts/modules/task-manager/update-task-by-id.js around lines 43 to 46, the
schema uses the non-existent .prefault() method and has too-narrow types for
subtasks; replace each .nullable().prefault(...) with Zod’s supported pattern
.nullish().default(...) (preserving the same default values like 'medium' and
'') and update the subtasks dependency type to accept both numeric IDs and
dotted-string IDs by using z.union([z.number().int(), z.string()]) where
appropriate so the schema loads without runtime errors and accepts dotted-string
dependencies.
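The semantic difference the fix relies on can be sketched without Zod. This plain-JS helper only mimics the assumed library behavior (defaults fire on `undefined`; whether `null` also triggers the default is the `nullable` vs. `nullish` distinction):

```javascript
// Simulation of default handling, assuming Zod-like semantics:
// .default() applies to undefined; a nullable field lets null pass
// through, while treating null as "missing" (nullish) applies the default.
function applyDefault(value, fallback, { treatNullAsMissing }) {
	if (value === undefined) return fallback;
	if (value === null) return treatNullAsMissing ? fallback : null;
	return value;
}

// nullable + default: null survives untouched
console.log(applyDefault(null, 'medium', { treatNullAsMissing: false })); // null
// nullish + default: null is replaced by the default
console.log(applyDefault(null, 'medium', { treatNullAsMissing: true })); // medium
// undefined always takes the default either way
console.log(applyDefault(undefined, 'medium', { treatNullAsMissing: false })); // medium
```

This is why `.nullable().default(...)` alone would still emit `null` priorities when the model returns explicit nulls.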
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
apps/extension/package.json (1)
10-11: Mismatch: engines.vscode (1.93) vs @types/vscode (1.101). You’re compiling against APIs newer than your declared minimum VS Code version; this can cause runtime failures on 1.93. Align them.
Apply one of:
- Prefer lowering types to match engine:
- "@types/vscode": "^1.101.0", + "@types/vscode": "^1.93.0",
- Or, if you truly depend on newer APIs, raise the engine:
- "vscode": "^1.93.0" + "vscode": "^1.101.0"Also applies to: 262-262
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
⛔ Files ignored due to path filters (1)
package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (6)
.github/workflows/extension-ci.yml (1 hunks)
.github/workflows/extension-release.yml (1 hunks)
apps/extension/package.json (2 hunks)
apps/extension/package.publish.json (1 hunks)
apps/extension/tsconfig.json (1 hunks)
tests/integration/cli/complex-cross-tag-scenarios.test.js (1 hunks)
🧰 Additional context used
📓 Path-based instructions (9)
tests/{unit,integration,e2e,fixtures}/**/*.js
📄 CodeRabbit inference engine (.cursor/rules/architecture.mdc)
Test files must be organized as follows: unit tests in tests/unit/, integration tests in tests/integration/, end-to-end tests in tests/e2e/, and test fixtures in tests/fixtures/.
Files:
tests/integration/cli/complex-cross-tag-scenarios.test.js
**/*.{test,spec}.{js,ts,jsx,tsx}
📄 CodeRabbit inference engine (.cursor/rules/git_workflow.mdc)
**/*.{test,spec}.{js,ts,jsx,tsx}: Create a test file and ensure all tests pass when all subtasks are complete; commit tests if added or modified
When all subtasks are complete, run final testing using the appropriate test runner (e.g., npm test, jest, or manual testing)
Files:
tests/integration/cli/complex-cross-tag-scenarios.test.js
**/*.test.js
📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)
**/*.test.js: Never use asynchronous operations in tests. Make all mocks return synchronous values when possible.
Always mock tests properly based on the way the tested functions are defined and used.
Follow the test file organization: mocks must be set up before importing modules under test, and spies on mocked modules should be set up after imports.
Use fixtures from tests/fixtures/ for consistent sample data across tests.
Always declare mocks before importing the modules being tested in Jest test files.
Use jest.spyOn() after imports to create spies on mock functions and reference these spies in test assertions.
When testing functions with callbacks, get the callback from your mock's call arguments, execute it directly with test inputs, and verify the results.
For ES modules, use jest.mock() before static imports and jest.unstable_mockModule() before dynamic imports to mock dependencies.
Reset mock functions (mockFn.mockReset()) before dynamic imports if they might have been called previously.
When verifying console assertions, assert against the actual arguments passed (single formatted string), not multiple arguments.
Use mock-fs to mock file system operations in tests, and restore the file system after each test.
Mock API calls (e.g., Anthropic/Claude) by mocking the entire module and providing predictable responses.
Set mock environment variables in test setup and restore them after each test.
Maintain test fixtures separate from test logic.
Follow the mock-first-then-import pattern for all Jest mocks.
Do not define mock variables before jest.mock() calls (they won't be accessible due to hoisting).
Use test-specific file paths (e.g., 'test-tasks.json') for all file operations in tests.
Mock readJSON and writeJSON to avoid real file system interactions in tests.
Verify file operations use the correct paths in expect statements.
Use different file paths for each test to avoid test interdependence.
Verify modifications on the in-memory task objects passed to writeJSON.
Files:
tests/integration/cli/complex-cross-tag-scenarios.test.js
tests/integration/**/*.test.js
📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)
Integration tests must be located in tests/integration/, test interactions between modules, and focus on component interfaces rather than implementation details.
Files:
tests/integration/cli/complex-cross-tag-scenarios.test.js
tests/{unit,integration,e2e}/**/*.test.js
📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)
tests/{unit,integration,e2e}/**/*.test.js: When testing CLI commands built with Commander.js, test the command action handlers directly rather than trying to mock the entire Commander.js chain.
When mocking the Commander.js chain, mock ALL chainable methods (option, argument, action, on, etc.) and return this (or the mock object) from all chainable method mocks.
Explicitly handle all options, including defaults and shorthand flags (e.g., -p for --prompt), and include null/undefined checks in test implementations for parameters that might be optional.
Do not try to use the real action implementation without proper mocking, and do not mock Commander partially—either mock it completely or test the action directly.
Mock the action handlers for CLI commands and verify they're called with correct arguments.
Use sample task fixtures for consistent test data, mock file system operations, and test both success and error paths for task operations.
Mock console output and verify correct formatting in UI function tests. Use flexible assertions like toContain() or toMatch() for formatted output.
Mock chalk functions to return the input text to make testing easier while still verifying correct function calls.
Files:
tests/integration/cli/complex-cross-tag-scenarios.test.js
tests/{integration,e2e}/**/*.test.js
📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)
Properly mock session objects when required by functions, and reset environment variables between tests if modified.
Files:
tests/integration/cli/complex-cross-tag-scenarios.test.js
**/*.js
📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)
**/*.js: Declare and initialize global variables at the top of modules to avoid hoisting issues.
Use proper function declarations to avoid hoisting issues and initialize variables before they are referenced.
Do not reference variables before their declaration in module scope.
Use dynamic imports (import()) to avoid initialization order issues in modules.
Files:
tests/integration/cli/complex-cross-tag-scenarios.test.js
**/*.{test,spec}.*
📄 CodeRabbit inference engine (.cursor/rules/test_workflow.mdc)
Test files should follow naming conventions: .test., .spec., or _test. depending on the language
Files:
tests/integration/cli/complex-cross-tag-scenarios.test.js
tests/{unit,integration,e2e}/**
📄 CodeRabbit inference engine (.cursor/rules/test_workflow.mdc)
Organize test directories by test type (unit, integration, e2e) and mirror source structure where possible
Files:
tests/integration/cli/complex-cross-tag-scenarios.test.js
🧠 Learnings (24)
📓 Common learnings
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1178
File: packages/tm-core/src/auth/config.ts:5-7
Timestamp: 2025-09-02T21:51:27.921Z
Learning: The user Crunchyman-ralph prefers not to use node: scheme imports (e.g., 'node:os', 'node:path') for Node.js core modules and considers suggestions to change bare imports to node: scheme as too nitpicky.
📚 Learning: 2025-09-26T19:10:32.889Z
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1252
File: tsconfig.json:22-28
Timestamp: 2025-09-26T19:10:32.889Z
Learning: In the eyaltoledano/claude-task-master repository, all internal tm/ package path mappings in tsconfig.json consistently point to TypeScript source files (e.g., "./packages/*/src/index.ts") rather than built JavaScript. This is intentional architecture because tsdown bundles internal packages directly from source during build time, eliminating the need for separate compilation of internal packages.
Applied to files:
apps/extension/tsconfig.json, apps/extension/package.json
📚 Learning: 2025-09-26T19:05:47.526Z
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1252
File: packages/ai-sdk-provider-grok-cli/package.json:11-13
Timestamp: 2025-09-26T19:05:47.526Z
Learning: In the eyaltoledano/claude-task-master repository, internal tm/ packages use a specific export pattern where the "exports" field points to TypeScript source files (./src/index.ts) while "main" points to compiled output (./dist/index.js) and "types" points to source files (./src/index.ts). This pattern is used consistently across internal packages like tm/core and tm/ai-sdk-provider-grok-cli because they are consumed directly during build-time bundling with tsdown rather than being published as separate packages.
Applied to files:
apps/extension/tsconfig.json, apps/extension/package.json
📚 Learning: 2025-09-26T19:07:10.462Z
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1252
File: packages/ai-sdk-provider-grok-cli/package.json:21-35
Timestamp: 2025-09-26T19:07:10.462Z
Learning: In the eyaltoledano/claude-task-master repository, the tsdown build configuration uses `noExternal: [/^tm\//]` which means internal tm/ packages are bundled into the final output while external npm dependencies remain external and are resolved from the root package.json dependencies at runtime. This eliminates the need for peer dependencies in internal packages since the root package.json already provides the required external dependencies.
Applied to files:
apps/extension/tsconfig.json, apps/extension/package.json
📚 Learning: 2025-09-17T19:09:08.882Z
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1211
File: jest.resolver.cjs:8-15
Timestamp: 2025-09-17T19:09:08.882Z
Learning: In the eyaltoledano/claude-task-master project, the team only uses .ts files and does not plan to use .tsx or .mts extensions, so Jest resolver and build tooling should focus on .js → .ts mapping only.
Applied to files:
apps/extension/tsconfig.json, apps/extension/package.json
📚 Learning: 2025-09-26T19:03:33.202Z
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1252
File: package.json:130-132
Timestamp: 2025-09-26T19:03:33.202Z
Learning: In the eyaltoledano/claude-task-master repository, packages are bundled using tsdown during the build process, which means dependencies imported by the source code (including tm internal packages like tm/ai-sdk-provider-grok-cli) are included in the final bundle and don't need to be available as separate runtime dependencies, so they should remain as devDependencies rather than being moved to dependencies.
Applied to files:
apps/extension/tsconfig.json, apps/extension/package.json
📚 Learning: 2025-07-18T17:12:57.903Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/new_features.mdc:0-0
Timestamp: 2025-07-18T17:12:57.903Z
Learning: Applies to scripts/modules/**/*.test.js : Test CLI and MCP interfaces with real task data, verify end-to-end workflows across tag contexts, and test error scenarios and recovery in integration tests.
Applied to files:
tests/integration/cli/complex-cross-tag-scenarios.test.js
📚 Learning: 2025-07-18T17:16:13.793Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tests.mdc:0-0
Timestamp: 2025-07-18T17:16:13.793Z
Learning: Applies to **/*.test.js : Verify modifications on the in-memory task objects passed to writeJSON.
Applied to files:
tests/integration/cli/complex-cross-tag-scenarios.test.js
📚 Learning: 2025-07-18T17:16:13.793Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tests.mdc:0-0
Timestamp: 2025-07-18T17:16:13.793Z
Learning: Applies to tests/{unit,integration,e2e}/**/*.test.js : Use sample task fixtures for consistent test data, mock file system operations, and test both success and error paths for task operations.
Applied to files:
tests/integration/cli/complex-cross-tag-scenarios.test.js
📚 Learning: 2025-07-18T17:12:57.903Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/new_features.mdc:0-0
Timestamp: 2025-07-18T17:12:57.903Z
Learning: Applies to scripts/modules/*.js : Minimize file I/O operations per feature execution, cache tag resolution results when appropriate, use streaming for large task datasets, and avoid loading all tags when only one is needed.
Applied to files:
tests/integration/cli/complex-cross-tag-scenarios.test.js
📚 Learning: 2025-07-18T17:12:57.903Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/new_features.mdc:0-0
Timestamp: 2025-07-18T17:12:57.903Z
Learning: Applies to scripts/modules/*.js : Process large task lists efficiently, clean up temporary data structures, and avoid keeping all tag data in memory simultaneously.
Applied to files:
tests/integration/cli/complex-cross-tag-scenarios.test.js
📚 Learning: 2025-07-18T17:12:57.903Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/new_features.mdc:0-0
Timestamp: 2025-07-18T17:12:57.903Z
Learning: Applies to scripts/modules/**/*.test.js : Test core logic independently with both data formats, mock file system operations, test tag resolution behavior, and verify migration compatibility in unit tests.
Applied to files:
tests/integration/cli/complex-cross-tag-scenarios.test.js
📚 Learning: 2025-07-18T17:12:57.903Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/new_features.mdc:0-0
Timestamp: 2025-07-18T17:12:57.903Z
Learning: Applies to scripts/modules/*.js : Ensure new features work with existing projects seamlessly, supporting both legacy and tagged task data formats, and support silent migration during feature usage.
Applied to files:
tests/integration/cli/complex-cross-tag-scenarios.test.js
📚 Learning: 2025-07-18T17:14:29.399Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tasks.mdc:0-0
Timestamp: 2025-07-18T17:14:29.399Z
Learning: Applies to scripts/modules/task-manager.js : Use consistent formatting for task files, include all task properties in text files, and format dependencies with status indicators.
Applied to files:
tests/integration/cli/complex-cross-tag-scenarios.test.js
📚 Learning: 2025-07-18T17:14:29.399Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tasks.mdc:0-0
Timestamp: 2025-07-18T17:14:29.399Z
Learning: Applies to scripts/modules/task-manager.js : Generate task files from the current tag context, include tag information in generated files, and do not mix tasks from different tags in file generation.
Applied to files:
tests/integration/cli/complex-cross-tag-scenarios.test.js
📚 Learning: 2025-07-18T17:18:17.759Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/utilities.mdc:0-0
Timestamp: 2025-07-18T17:18:17.759Z
Learning: Applies to scripts/modules/utils.js : Implement complete migration for all related files in the tagged task system, handle configuration and state file creation, and provide migration status tracking.
Applied to files:
tests/integration/cli/complex-cross-tag-scenarios.test.js
📚 Learning: 2025-09-22T19:45:04.337Z
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1232
File: packages/tm-core/package.json:50-51
Timestamp: 2025-09-22T19:45:04.337Z
Learning: In the eyaltoledano/claude-task-master project, Crunchyman-ralph intentionally omits version fields from internal/private packages in package.json files to prevent changesets from releasing new versions of these packages while still allowing them to be processed for dependency updates. The changesets warnings about missing versions are acceptable as they don't break the process and achieve the desired behavior of only releasing public packages.
Applied to files:
apps/extension/package.json
📚 Learning: 2025-09-03T12:16:15.866Z
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1178
File: packages/tm-core/package.json:13-64
Timestamp: 2025-09-03T12:16:15.866Z
Learning: For internal packages in the claude-task-master project, Crunchyman-ralph prefers pointing package.json "types" entries to src .ts files rather than dist .d.ts files for better developer experience (DX), as the packages are not being exported as SDKs.
Applied to files:
apps/extension/package.json
📚 Learning: 2025-08-07T13:00:22.966Z
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1090
File: apps/extension/package.json:241-243
Timestamp: 2025-08-07T13:00:22.966Z
Learning: In monorepos, local packages should use "*" as the version constraint in package.json dependencies, as recommended by npm. This ensures the local version from within the same workspace is always used, rather than attempting to resolve from external registries. This applies to packages like task-master-ai within the eyaltoledano/claude-task-master monorepo.
Applied to files:
apps/extension/package.json
📚 Learning: 2025-09-22T19:45:13.323Z
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1232
File: packages/build-config/package.json:14-15
Timestamp: 2025-09-22T19:45:13.323Z
Learning: In the eyaltoledano/claude-task-master repository, Crunchyman-ralph intentionally omits version fields from internal packages (like tm/build-config) to prevent changesets from releasing new versions for these packages. This is the desired behavior for internal tooling packages that should not be published or versioned independently.
Applied to files:
apps/extension/package.json
📚 Learning: 2025-09-24T15:12:12.658Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: assets/.windsurfrules:0-0
Timestamp: 2025-09-24T15:12:12.658Z
Learning: Use task-master generate to create or update task files from tasks.json and keep dependencies valid with fix-dependencies
Applied to files:
apps/extension/package.json
📚 Learning: 2025-07-18T17:10:12.881Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/dev_workflow.mdc:0-0
Timestamp: 2025-07-18T17:10:12.881Z
Learning: For CLI usage, install Taskmaster globally with `npm install -g task-master-ai` or use locally via `npx task-master-ai ...`.
Applied to files:
apps/extension/package.json
📚 Learning: 2025-09-24T15:12:12.658Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: assets/.windsurfrules:0-0
Timestamp: 2025-09-24T15:12:12.658Z
Learning: Manage dependencies with task-master add-dependency and remove-dependency; validate and fix using validate-dependencies and fix-dependencies
Applied to files:
apps/extension/package.json
📚 Learning: 2025-07-31T20:49:04.638Z
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#997
File: apps/extension/package.publish.json:2-8
Timestamp: 2025-07-31T20:49:04.638Z
Learning: In the eyaltoledano/claude-task-master repository, the VS Code extension uses a 3-file packaging system where package.json (with name "extension") is for development within the monorepo, while package.publish.json (with name "task-master-hamster") contains the clean manifest for VS Code marketplace publishing. The different names are intentional and serve distinct purposes in the build and publishing workflow.
Applied to files:
apps/extension/package.publish.json
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: Test
- GitHub Check: build
- GitHub Check: typecheck
🔇 Additional comments (7)
apps/extension/package.publish.json (1)
5-5: Version bump looks good. The publish manifest reflects the new 0.25.3 release without any other changes, which matches the extension’s packaging flow for marketplace uploads.
.github/workflows/extension-release.yml (1)
34-36: Rename clarifies monorepo intent. Renaming the install step makes the scope explicit and keeps it aligned with the broader monorepo tooling changes.
.github/workflows/extension-ci.yml (1)
44-46: Root-level install keeps CI aligned. Switching to the monorepo-wide install step keeps the CI setup consistent with the release workflow and the workspace dependency layout.
apps/extension/package.json (4)
243-244: Workspace wildcard for local package looks good. Using "*" for the in-repo task-master-ai aligns with the monorepo’s workspace practice. Please just confirm esbuild/vsce packaging won’t ship an unnecessary node_modules copy of it (i.e., it’s fully bundled).
Based on learnings.
279-279: TypeScript 5.9 upgrade — verify toolchain compat. Ensure tsc --noEmit, eslint/tsconfig, and esbuild type paths compile cleanly under 5.9 (no new TS lib/ES target issues).
280-280: @tm/core as devDependency — OK if fully bundled. If any extension runtime code requires @tm/core via dynamic import or non-bundled path, it must move to dependencies; otherwise keep in devDeps to avoid bloating the VSIX.
Based on learnings.
10-11: Node types vs VS Code runtime Node/Electron. @types/node@22 may not match the Node version shipped with VS Code 1.93. Consider pinning @types/node to the Node version for that VS Code/Electron to avoid type/runtime drift.
Also applies to: 259-259
| "@tm/core": ["../../packages/tm-core/src/index.ts"], | ||
| "@tm/core/*": ["../../packages/tm-core/src/*"] |
🧩 Analysis chain
Align alias with monorepo convention and ensure bundler resolution
- Convention: repo imports usually use the tm/* prefix mapped to TS sources. Adding only @tm/core introduces a second style and may cause churn. Add tm/core aliases (or switch entirely) to keep consistency and ease migration. Based on learnings.
- Build: tsc won’t rewrite path aliases; ensure your bundler (tsdown/vite/webpack) resolves tsconfig paths at build time, or these imports will fail at runtime.
Suggested additive change (keeps your new aliases and restores tm/* style for compatibility):
"paths": {
"@/components/*": ["./src/components/*"],
"@/lib/*": ["./src/lib/*"],
- "@tm/core": ["../../packages/tm-core/src/index.ts"],
- "@tm/core/*": ["../../packages/tm-core/src/*"]
+ "@tm/core": ["../../packages/tm-core/src/index.ts"],
+ "@tm/core/*": ["../../packages/tm-core/src/*"],
+ "tm/core": ["../../packages/tm-core/src/index.ts"],
+ "tm/core/*": ["../../packages/tm-core/src/*"]
}
Optionally, if the repo standardizes on a single wildcard, consider a broader mapping to avoid per-package entries:
- "tm/": ["../../packages//src/index.ts"] (confirm no conflicts first).
Run to verify current usage and bundler config:
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "== Import usage =="
rg -nP --type=ts --type=tsx -C1 $'from\\s+[\'"](@?tm/core)(?:/[^\'"]*)?[\'"]|import\\s+[^;]*\\s+from\\s+[\'"](@?tm/core)(?:/[^\'"]*)?[\'"]' || true
echo -e "\n== Other tm/* style imports (for convention check) =="
rg -nP --type=ts --type=tsx -C1 $'from\\s+[\'"]tm/[^\'"]+[\'"]|import\\s+[^;]*\\s+from\\s+[\'"]tm/[^\'"]+[\'"]' || true
echo -e "\n== Bundler alias support (vite/webpack/tsdown/rollup/esbuild) =="
fd -a 'vite.config.*|webpack.config.*|rollup.config.*|tsdown.*|esbuild.*|tsup.config.*' -HI || true
rg -n 'alias|tsconfigPaths' $(fd 'vite.config.*|webpack.config.*|rollup.config.*|tsdown.*|esbuild.*|tsup.config.*' -HI) 2>/dev/null || true
Length of output: 831
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "== alias mappings block =="
sed -n '/alias: *{/,/}/p' apps/extension/esbuild.js
Length of output: 328
Configure esbuild to resolve your new path aliases and align with monorepo conventions
- Replace or supplement @tm/core with the established tm/core prefix in your tsconfig paths (or vice-versa if you intend to adopt @tm/* across the repo).
- In apps/extension/esbuild.js, extend the alias block to cover these mappings, for example:
    alias: {
      …,
      'tm/core': path.resolve(__dirname, '../../packages/tm-core/src/index.ts'),
      'tm/core/*': path.resolve(__dirname, '../../packages/tm-core/src'),
      '@tm/core': path.resolve(__dirname, '../../packages/tm-core/src/index.ts'),
      '@tm/core/*': path.resolve(__dirname, '../../packages/tm-core/src'),
    },
- Alternatively, install and configure a tsconfig-paths plugin for esbuild so it mirrors your tsconfig paths automatically.
Without this, imports like import { foo } from '@tm/core' (or tm/core) will fail at runtime.
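To make the resolution behavior concrete, here is an illustrative plain-JS sketch of how a tsconfig-paths-style resolver maps these specifiers (the alias table comes from the review above; the resolver function itself is hypothetical, not esbuild's or TypeScript's actual implementation):

```javascript
// Hypothetical resolver mirroring the tsconfig "paths" entries above.
const aliases = {
	'@tm/core': '../../packages/tm-core/src/index.ts',
	'@tm/core/*': '../../packages/tm-core/src/*'
};

function resolveAlias(specifier) {
	// Exact match wins (e.g., the bare "@tm/core" entry point)
	if (aliases[specifier]) return aliases[specifier];
	// Wildcard match: substitute the tail into the target's "*"
	for (const [pattern, target] of Object.entries(aliases)) {
		if (pattern.endsWith('/*') && specifier.startsWith(pattern.slice(0, -1))) {
			return target.replace('*', specifier.slice(pattern.length - 1));
		}
	}
	return specifier; // not aliased; leave for normal resolution
}

console.log(resolveAlias('@tm/core')); // ../../packages/tm-core/src/index.ts
console.log(resolveAlias('@tm/core/utils/id')); // ../../packages/tm-core/src/utils/id
```

The point is that both the exact entry and the wildcard entry must exist in whatever tool does the resolving; tsc only type-checks against them and will not rewrite the import paths in emitted code.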
🤖 Prompt for AI Agents
In apps/extension/tsconfig.json around lines 24 to 25, the tsconfig path aliases
declare "@tm/core" and "@tm/core/*" but esbuild is not configured to resolve
these (or the repo convention prefers "tm/core"), causing runtime import
failures; update either the tsconfig to use the repository's established prefix
(e.g., "tm/core") or add the complementary aliases, and then modify
apps/extension/esbuild.js to extend the alias block to map both "tm/core" and
"@tm/core" (and their /* variants) to ../../packages/tm-core/src (index.ts for
the root) or install/configure a tsconfig-paths plugin for esbuild so esbuild
mirrors tsconfig paths automatically.
  fs.writeFileSync(tasksPath, JSON.stringify(largeTaskSet, null, 2));
  // Should complete within reasonable time
- const timeout = process.env.CI ? 11000 : 6000;
+ const timeout = process.env.CI ? 12000 : 8000;
Avoid masking performance regressions by widening the performance guardrail
Bumping the large-task timeout from 11 s→12 s (CI) and 6 s→8 s locally relaxes the limits by roughly 9 % and 33 %. This performance check exists to catch regressions in the move flow; loosening it without fixing the underlying slowdown (or showing new profiling data) turns the test into a much weaker signal. Please either address the latency increase or keep the original limits and mitigate flakiness another way (e.g., warm up the binary before timing). Based on learnings.
🤖 Prompt for AI Agents
In tests/integration/cli/complex-cross-tag-scenarios.test.js around line 375,
the large-task timeout was widened (CI 11s→12s, local 6s→8s) which weakens the
performance guardrail; revert the timeout to the original values (CI 11s, local
6s) OR instead keep the relaxed limits only if you add a deterministic
mitigation: run a short warm-up invocation of the binary before measuring to
eliminate cold-start variance, and/or add profiling logs that justify the higher
threshold; implement one of these fixes and keep the timeout change out of the
main test without profiling-backed justification.
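The warm-up mitigation can be sketched as follows (runCli is a stand-in for however the test shells out to the binary; this is not the project's actual test harness):

```javascript
// Illustrative only: discard cold-start runs before measuring, so the
// performance guardrail times a warmed code path rather than JIT/disk
// cache warm-up.
function timeWithWarmup(runCli, warmupRuns = 1) {
	for (let i = 0; i < warmupRuns; i++) runCli(); // warm-up, not timed
	const start = Date.now();
	runCli(); // the run that counts against the timeout
	return Date.now() - start;
}

const elapsedMs = timeWithWarmup(() => {
	for (let i = 0; i < 1e5; i++); // placeholder for the CLI invocation
});
console.log(elapsedMs >= 0); // true
```

With cold-start variance removed, the original 11 s/6 s limits are more likely to hold without widening them.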
26b6fb4 to bf85808
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (6)
scripts/modules/config-manager.js (3)
777-787: Add early exit in getMcpApiKeyStatus for providers without API keys (GROK_CLI, GEMINI_CLI, MCP). Per guidelines, return true immediately for providers not requiring keys to avoid false “missing key” signals and unnecessary file I/O.
  function getMcpApiKeyStatus(providerName, projectRoot = null) {
+   const providerKey = providerName?.toLowerCase();
+   // Providers that don't require API keys (align with isApiKeySet/provide list)
+   if (providersWithoutApiKeys.includes(providerKey) || providerKey === 'claude-code') {
+     return true;
+   }
    const rootDir = projectRoot || findProjectRoot(); // Use existing root finding
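A self-contained sketch of the suggested guard (the provider list and the 'claude-code' special case follow the diff above, but the exact list contents are an assumption; the fallback branch is a placeholder for the real .env/mcp.json lookup):

```javascript
// Hypothetical standalone version of the early-exit check.
const providersWithoutApiKeys = ['ollama', 'grok-cli', 'gemini-cli', 'mcp'];

function getMcpApiKeyStatus(providerName) {
	const providerKey = providerName?.toLowerCase();
	if (
		providersWithoutApiKeys.includes(providerKey) ||
		providerKey === 'claude-code'
	) {
		return true; // no key required; skip file I/O entirely
	}
	return false; // placeholder: real code would check .env / mcp.json here
}

console.log(getMcpApiKeyStatus('GROK-CLI')); // true (case-insensitive)
console.log(getMcpApiKeyStatus('anthropic')); // false (falls through to lookup)
```

The toLowerCase + optional-chaining combination also keeps the function safe when providerName is undefined.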
994-999: Log level typo: use 'warn' (not 'warning'). The utils logger elsewhere uses 'warn'; 'warning' may be ignored.
- log(
-   'warning',
+ log(
+   'warn',
    'Failed to write updated configuration with new userId. Please let the developers know.'
  );
1019-1021: Normalize env var name for hyphenated providers. Hyphens produce invalid env var names (e.g., GROK-CLI_BASE_URL). Replace non-alphanumerics with underscores.
- const envVarName = `${provider.toUpperCase()}_BASE_URL`;
+ const envVarName = `${provider.toUpperCase().replace(/[^A-Z0-9_]/g, '_')}_BASE_URL`;
.github/workflows/extension-release.yml (1)
107-110: Notification claims a GitHub release that isn’t created
There’s no step that creates a GitHub Release, yet the message says “GitHub release created”. Either add a release step or adjust the message.
Minimal fix:
- echo "🏷️ GitHub release created: ${{ github.ref_name }}"
+ echo "🏷️ Tag pushed: ${{ github.ref_name }}"
Or add a release step (e.g., softprops/action-gh-release) before this job.
.github/workflows/extension-ci.yml (2)
69-71: Run install only on cache miss and improve cache strategy
This step is labeled “Install if cache miss” but always runs. Add a condition and consider node-version in cache key or use setup-node caching.
Example:
- - name: Install if cache miss
-   run: npm ci
-   timeout-minutes: 3
+ - name: Install if cache miss
+   if: steps.cache.outputs.cache-hit != 'true'
+   run: npm ci --workspaces --include-workspace-root --no-audit --no-fund
+   timeout-minutes: 8
Also consider switching to:
  - uses: actions/setup-node@v4
    with:
      node-version: 20
+     cache: npm
+     cache-dependency-path: '**/package-lock.json'
to avoid caching node_modules, which npm ci deletes anyway.
100-102: Same as typecheck job: gate install on cache miss and extend timeout
Mirror the condition and flags here to avoid redundant installs and flakiness on cache cold starts.
- - name: Install if cache miss
-   run: npm ci
-   timeout-minutes: 3
+ - name: Install if cache miss
+   if: steps.cache.outputs.cache-hit != 'true'
+   run: npm ci --workspaces --include-workspace-root --no-audit --no-fund
+   timeout-minutes: 8
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
⛔ Files ignored due to path filters (1)
package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (7)
.github/workflows/extension-ci.yml (1 hunks)
.github/workflows/extension-release.yml (1 hunks)
apps/extension/package.json (2 hunks)
apps/extension/package.publish.json (1 hunks)
apps/extension/tsconfig.json (1 hunks)
scripts/modules/config-manager.js (2 hunks)
tests/integration/cli/complex-cross-tag-scenarios.test.js (1 hunks)
🧰 Additional context used
📓 Path-based instructions (13)
scripts/modules/config-manager.js
📄 CodeRabbit inference engine (.cursor/rules/ai_providers.mdc)
scripts/modules/config-manager.js: Update scripts/modules/config-manager.js to add the new provider to MODEL_MAP, ensure it is included in VALID_PROVIDERS, and update API key handling logic.
If adding Ollama or another provider not requiring an API key, add a specific check at the beginning of isApiKeySet and getMcpApiKeyStatus in scripts/modules/config-manager.js to return true immediately for that provider.
scripts/modules/config-manager.js: Import and use specific getters fromscripts/modules/config-manager.jsto access configuration values needed for application logic; pass theexplicitRootparameter to getters if calling from MCP direct functions.
UseisApiKeySet(providerName, session)fromconfig-manager.jsto check if a provider's key is available before attempting an AI call.
Handle potentialConfigurationErrorif the.taskmasterconfigfile is missing or invalid when accessed viagetConfig.
Files:
scripts/modules/config-manager.js
scripts/modules/*.js
📄 CodeRabbit inference engine (.cursor/rules/architecture.mdc)
Each module in scripts/modules/ should be focused on a single responsibility, following the modular architecture (e.g., commands.js for CLI command handling, task-manager.js for task data and core logic, dependency-manager.js for dependency management, ui.js for CLI output formatting, ai-services-unified.js for AI service integration, config-manager.js for configuration management, utils.js for utility functions).
scripts/modules/*.js: Export all core functions, helper functions, and utility methods needed by your new function or command from their respective modules. Explicitly review the module's export block to ensure every required dependency is included.
Pass all required parameters to functions you call within your implementation and verify that direct function parameters match their core function counterparts.
Use consistent file naming conventions: 'task_${id.toString().padStart(3, '0')}.txt', use path.join for composing file paths, and use appropriate file extensions (.txt for tasks, .json for data).
Use structured error objects with code and message properties, include clear error messages, and handle both function-specific and file system errors.
Import all silent mode utilities together from 'scripts/modules/utils.js' and always use isSilentMode() to check global silent mode status. Wrap core function calls within direct functions using enableSilentMode() and disableSilentMode() in a try/finally block if the core function might produce console output.
Core functions should check outputFormat === 'text' before displaying UI elements and use internal logging that respects silent mode.
Design functions to accept dependencies as parameters (dependency injection) and avoid hard-coded dependencies that are difficult to mock.
Keep pure logic separate from I/O operations or UI rendering to allow testing the logic without mocking complex dependencies.
When implementing core logic for new features, do so in 'scripts/modules/' before CLI or MCP interfaces, and d...
Files:
scripts/modules/config-manager.js
scripts/modules/**
📄 CodeRabbit inference engine (.cursor/rules/dev_workflow.mdc)
When using the MCP server, restart it if core logic in `scripts/modules` or MCP tool/direct function definitions change.
Files:
scripts/modules/config-manager.js
scripts/modules/*
📄 CodeRabbit inference engine (.cursor/rules/tags.mdc)
scripts/modules/*: Every command that reads or writes tasks.json must be tag-aware
All command files must import getCurrentTag from utils.js
Every CLI command that operates on tasks must include the --tag CLI option
All commands must resolve the tag using the pattern: options.tag || getCurrentTag(projectRoot) || 'master'
All commands must find projectRoot with error handling before proceeding
All commands must pass { projectRoot, tag } as context to core functions
MCP direct functions must accept and use a context object containing projectRoot and tag, and pass them to core functions
Do not hard-code tag resolution (e.g., const tag = options.tag || 'master';); always use getCurrentTag
Do not omit the --tag CLI option in commands that operate on tasks
Do not omit the context parameter when calling core functions from commands
Do not call readJSON or writeJSON without passing projectRoot and tag
Files:
scripts/modules/config-manager.js
**/*.js
📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)
**/*.js: Declare and initialize global variables at the top of modules to avoid hoisting issues.
Use proper function declarations to avoid hoisting issues and initialize variables before they are referenced.
Do not reference variables before their declaration in module scope.
Use dynamic imports (import()) to avoid initialization order issues in modules.
Files:
`scripts/modules/config-manager.js`, `tests/integration/cli/complex-cross-tag-scenarios.test.js`
tests/{unit,integration,e2e,fixtures}/**/*.js
📄 CodeRabbit inference engine (.cursor/rules/architecture.mdc)
Test files must be organized as follows: unit tests in tests/unit/, integration tests in tests/integration/, end-to-end tests in tests/e2e/, and test fixtures in tests/fixtures/.
Files:
tests/integration/cli/complex-cross-tag-scenarios.test.js
**/*.{test,spec}.{js,ts,jsx,tsx}
📄 CodeRabbit inference engine (.cursor/rules/git_workflow.mdc)
**/*.{test,spec}.{js,ts,jsx,tsx}: Create a test file and ensure all tests pass when all subtasks are complete; commit tests if added or modified
When all subtasks are complete, run final testing using the appropriate test runner (e.g., npm test, jest, or manual testing)
Files:
tests/integration/cli/complex-cross-tag-scenarios.test.js
**/*.test.js
📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)
**/*.test.js: Never use asynchronous operations in tests. Make all mocks return synchronous values when possible.
Always mock tests properly based on the way the tested functions are defined and used.
Follow the test file organization: mocks must be set up before importing modules under test, and spies on mocked modules should be set up after imports.
Use fixtures from tests/fixtures/ for consistent sample data across tests.
Always declare mocks before importing the modules being tested in Jest test files.
Use jest.spyOn() after imports to create spies on mock functions and reference these spies in test assertions.
When testing functions with callbacks, get the callback from your mock's call arguments, execute it directly with test inputs, and verify the results.
For ES modules, use jest.mock() before static imports and jest.unstable_mockModule() before dynamic imports to mock dependencies.
Reset mock functions (mockFn.mockReset()) before dynamic imports if they might have been called previously.
When verifying console assertions, assert against the actual arguments passed (single formatted string), not multiple arguments.
Use mock-fs to mock file system operations in tests, and restore the file system after each test.
Mock API calls (e.g., Anthropic/Claude) by mocking the entire module and providing predictable responses.
Set mock environment variables in test setup and restore them after each test.
Maintain test fixtures separate from test logic.
Follow the mock-first-then-import pattern for all Jest mocks.
Do not define mock variables before jest.mock() calls (they won't be accessible due to hoisting).
Use test-specific file paths (e.g., 'test-tasks.json') for all file operations in tests.
Mock readJSON and writeJSON to avoid real file system interactions in tests.
Verify file operations use the correct paths in expect statements.
Use different file paths for each test to avoid test interdependence.
Verify modifications on the in-memory task objects passed to w...
Files:
tests/integration/cli/complex-cross-tag-scenarios.test.js
tests/integration/**/*.test.js
📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)
Integration tests must be located in tests/integration/, test interactions between modules, and focus on component interfaces rather than implementation details.
Files:
tests/integration/cli/complex-cross-tag-scenarios.test.js
tests/{unit,integration,e2e}/**/*.test.js
📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)
tests/{unit,integration,e2e}/**/*.test.js: When testing CLI commands built with Commander.js, test the command action handlers directly rather than trying to mock the entire Commander.js chain.
When mocking the Commander.js chain, mock ALL chainable methods (option, argument, action, on, etc.) and return this (or the mock object) from all chainable method mocks.
Explicitly handle all options, including defaults and shorthand flags (e.g., -p for --prompt), and include null/undefined checks in test implementations for parameters that might be optional.
Do not try to use the real action implementation without proper mocking, and do not mock Commander partially—either mock it completely or test the action directly.
Mock the action handlers for CLI commands and verify they're called with correct arguments.
Use sample task fixtures for consistent test data, mock file system operations, and test both success and error paths for task operations.
Mock console output and verify correct formatting in UI function tests. Use flexible assertions like toContain() or toMatch() for formatted output.
Mock chalk functions to return the input text to make testing easier while still verifying correct function calls.
Files:
tests/integration/cli/complex-cross-tag-scenarios.test.js
tests/{integration,e2e}/**/*.test.js
📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)
Properly mock session objects when required by functions, and reset environment variables between tests if modified.
Files:
tests/integration/cli/complex-cross-tag-scenarios.test.js
**/*.{test,spec}.*
📄 CodeRabbit inference engine (.cursor/rules/test_workflow.mdc)
Test files should follow naming conventions: .test., .spec., or _test. depending on the language
Files:
tests/integration/cli/complex-cross-tag-scenarios.test.js
tests/{unit,integration,e2e}/**
📄 CodeRabbit inference engine (.cursor/rules/test_workflow.mdc)
Organize test directories by test type (unit, integration, e2e) and mirror source structure where possible
Files:
tests/integration/cli/complex-cross-tag-scenarios.test.js
🧠 Learnings (22)
📓 Common learnings
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1252
File: package.json:130-132
Timestamp: 2025-09-26T19:03:33.202Z
Learning: In the eyaltoledano/claude-task-master repository, packages are bundled using tsdown during the build process, which means dependencies imported by the source code (including tm internal packages like tm/ai-sdk-provider-grok-cli) are included in the final bundle and don't need to be available as separate runtime dependencies, so they should remain as devDependencies rather than being moved to dependencies.
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1178
File: packages/tm-core/src/auth/config.ts:5-7
Timestamp: 2025-09-02T21:51:27.921Z
Learning: The user Crunchyman-ralph prefers not to use node: scheme imports (e.g., 'node:os', 'node:path') for Node.js core modules and considers suggestions to change bare imports to node: scheme as too nitpicky.
📚 Learning: 2025-09-26T19:10:32.889Z
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1252
File: tsconfig.json:22-28
Timestamp: 2025-09-26T19:10:32.889Z
Learning: In the eyaltoledano/claude-task-master repository, all internal tm/ package path mappings in tsconfig.json consistently point to TypeScript source files (e.g., "./packages/*/src/index.ts") rather than built JavaScript. This is intentional architecture because tsdown bundles internal packages directly from source during build time, eliminating the need for separate compilation of internal packages.
Applied to files:
`apps/extension/tsconfig.json`, `apps/extension/package.json`
📚 Learning: 2025-09-26T19:05:47.526Z
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1252
File: packages/ai-sdk-provider-grok-cli/package.json:11-13
Timestamp: 2025-09-26T19:05:47.526Z
Learning: In the eyaltoledano/claude-task-master repository, internal tm/ packages use a specific export pattern where the "exports" field points to TypeScript source files (./src/index.ts) while "main" points to compiled output (./dist/index.js) and "types" points to source files (./src/index.ts). This pattern is used consistently across internal packages like tm/core and tm/ai-sdk-provider-grok-cli because they are consumed directly during build-time bundling with tsdown rather than being published as separate packages.
Applied to files:
`apps/extension/tsconfig.json`, `apps/extension/package.json`
📚 Learning: 2025-09-26T19:07:10.462Z
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1252
File: packages/ai-sdk-provider-grok-cli/package.json:21-35
Timestamp: 2025-09-26T19:07:10.462Z
Learning: In the eyaltoledano/claude-task-master repository, the tsdown build configuration uses `noExternal: [/^tm\//]` which means internal tm/ packages are bundled into the final output while external npm dependencies remain external and are resolved from the root package.json dependencies at runtime. This eliminates the need for peer dependencies in internal packages since the root package.json already provides the required external dependencies.
Applied to files:
`apps/extension/tsconfig.json`, `apps/extension/package.json`
📚 Learning: 2025-07-18T17:12:57.903Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/new_features.mdc:0-0
Timestamp: 2025-07-18T17:12:57.903Z
Learning: Applies to scripts/modules/**/*.test.js : Test CLI and MCP interfaces with real task data, verify end-to-end workflows across tag contexts, and test error scenarios and recovery in integration tests.
Applied to files:
tests/integration/cli/complex-cross-tag-scenarios.test.js
📚 Learning: 2025-07-18T17:16:13.793Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tests.mdc:0-0
Timestamp: 2025-07-18T17:16:13.793Z
Learning: Applies to **/*.test.js : Verify modifications on the in-memory task objects passed to writeJSON.
Applied to files:
tests/integration/cli/complex-cross-tag-scenarios.test.js
📚 Learning: 2025-07-18T17:16:13.793Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tests.mdc:0-0
Timestamp: 2025-07-18T17:16:13.793Z
Learning: Applies to tests/{unit,integration,e2e}/**/*.test.js : Use sample task fixtures for consistent test data, mock file system operations, and test both success and error paths for task operations.
Applied to files:
tests/integration/cli/complex-cross-tag-scenarios.test.js
📚 Learning: 2025-07-18T17:12:57.903Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/new_features.mdc:0-0
Timestamp: 2025-07-18T17:12:57.903Z
Learning: Applies to scripts/modules/*.js : Minimize file I/O operations per feature execution, cache tag resolution results when appropriate, use streaming for large task datasets, and avoid loading all tags when only one is needed.
Applied to files:
tests/integration/cli/complex-cross-tag-scenarios.test.js
📚 Learning: 2025-07-18T17:12:57.903Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/new_features.mdc:0-0
Timestamp: 2025-07-18T17:12:57.903Z
Learning: Applies to scripts/modules/*.js : Process large task lists efficiently, clean up temporary data structures, and avoid keeping all tag data in memory simultaneously.
Applied to files:
tests/integration/cli/complex-cross-tag-scenarios.test.js
📚 Learning: 2025-07-18T17:14:29.399Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tasks.mdc:0-0
Timestamp: 2025-07-18T17:14:29.399Z
Learning: Applies to scripts/modules/task-manager.js : Generate task files from the current tag context, include tag information in generated files, and do not mix tasks from different tags in file generation.
Applied to files:
tests/integration/cli/complex-cross-tag-scenarios.test.js
📚 Learning: 2025-07-18T17:18:17.759Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/utilities.mdc:0-0
Timestamp: 2025-07-18T17:18:17.759Z
Learning: Applies to scripts/modules/utils.js : Implement complete migration for all related files in the tagged task system, handle configuration and state file creation, and provide migration status tracking.
Applied to files:
tests/integration/cli/complex-cross-tag-scenarios.test.js
📚 Learning: 2025-07-18T17:14:29.399Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tasks.mdc:0-0
Timestamp: 2025-07-18T17:14:29.399Z
Learning: Applies to scripts/modules/task-manager.js : Use consistent formatting for task files, include all task properties in text files, and format dependencies with status indicators.
Applied to files:
tests/integration/cli/complex-cross-tag-scenarios.test.js
📚 Learning: 2025-07-31T20:49:04.638Z
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#997
File: apps/extension/package.publish.json:2-8
Timestamp: 2025-07-31T20:49:04.638Z
Learning: In the eyaltoledano/claude-task-master repository, the VS Code extension uses a 3-file packaging system where package.json (with name "extension") is for development within the monorepo, while package.publish.json (with name "task-master-hamster") contains the clean manifest for VS Code marketplace publishing. The different names are intentional and serve distinct purposes in the build and publishing workflow.
Applied to files:
apps/extension/package.publish.json
📚 Learning: 2025-09-26T19:03:33.202Z
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1252
File: package.json:130-132
Timestamp: 2025-09-26T19:03:33.202Z
Learning: In the eyaltoledano/claude-task-master repository, packages are bundled using tsdown during the build process, which means dependencies imported by the source code (including tm internal packages like tm/ai-sdk-provider-grok-cli) are included in the final bundle and don't need to be available as separate runtime dependencies, so they should remain as devDependencies rather than being moved to dependencies.
Applied to files:
apps/extension/package.json
📚 Learning: 2025-09-22T19:45:04.337Z
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1232
File: packages/tm-core/package.json:50-51
Timestamp: 2025-09-22T19:45:04.337Z
Learning: In the eyaltoledano/claude-task-master project, Crunchyman-ralph intentionally omits version fields from internal/private packages in package.json files to prevent changesets from releasing new versions of these packages while still allowing them to be processed for dependency updates. The changesets warnings about missing versions are acceptable as they don't break the process and achieve the desired behavior of only releasing public packages.
Applied to files:
apps/extension/package.json
📚 Learning: 2025-09-22T19:45:13.323Z
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1232
File: packages/build-config/package.json:14-15
Timestamp: 2025-09-22T19:45:13.323Z
Learning: In the eyaltoledano/claude-task-master repository, Crunchyman-ralph intentionally omits version fields from internal packages (like tm/build-config) to prevent changesets from releasing new versions for these packages. This is the desired behavior for internal tooling packages that should not be published or versioned independently.
Applied to files:
apps/extension/package.json
📚 Learning: 2025-09-03T12:16:15.866Z
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1178
File: packages/tm-core/package.json:13-64
Timestamp: 2025-09-03T12:16:15.866Z
Learning: For internal packages in the claude-task-master project, Crunchyman-ralph prefers pointing package.json "types" entries to src .ts files rather than dist .d.ts files for better developer experience (DX), as the packages are not being exported as SDKs.
Applied to files:
apps/extension/package.json
📚 Learning: 2025-08-07T13:00:22.966Z
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1090
File: apps/extension/package.json:241-243
Timestamp: 2025-08-07T13:00:22.966Z
Learning: In monorepos, local packages should use "*" as the version constraint in package.json dependencies, as recommended by npm. This ensures the local version from within the same workspace is always used, rather than attempting to resolve from external registries. This applies to packages like task-master-ai within the eyaltoledano/claude-task-master monorepo.
Applied to files:
apps/extension/package.json
📚 Learning: 2025-09-24T15:12:12.658Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: assets/.windsurfrules:0-0
Timestamp: 2025-09-24T15:12:12.658Z
Learning: Use task-master generate to create or update task files from tasks.json and keep dependencies valid with fix-dependencies
Applied to files:
apps/extension/package.json
📚 Learning: 2025-07-18T17:10:12.881Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/dev_workflow.mdc:0-0
Timestamp: 2025-07-18T17:10:12.881Z
Learning: For CLI usage, install Taskmaster globally with `npm install -g task-master-ai` or use locally via `npx task-master-ai ...`.
Applied to files:
apps/extension/package.json
📚 Learning: 2025-09-24T15:12:12.658Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: assets/.windsurfrules:0-0
Timestamp: 2025-09-24T15:12:12.658Z
Learning: Manage dependencies with task-master add-dependency and remove-dependency; validate and fix using validate-dependencies and fix-dependencies
Applied to files:
apps/extension/package.json
📚 Learning: 2025-09-17T19:09:08.882Z
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1211
File: jest.resolver.cjs:8-15
Timestamp: 2025-09-17T19:09:08.882Z
Learning: In the eyaltoledano/claude-task-master project, the team only uses .ts files and does not plan to use .tsx or .mts extensions, so Jest resolver and build tooling should focus on .js → .ts mapping only.
Applied to files:
apps/extension/package.json
🧬 Code graph analysis (1)
scripts/modules/config-manager.js (1)
src/constants/commands.js (2)
`AI_COMMAND_NAMES` (7-17)
🔇 Additional comments (8)
apps/extension/tsconfig.json (2)
27-27: Include scope looks good; confirm toolchain expectations

Limiting to "src/**/*" is fine and implicitly sets root to src. Please confirm no build/test files outside src need type-checking, and that generated dirs remain excluded.
23-24: Align path alias with repo convention and ensure bundler resolves it

Only @tm/core aliases were added. The monorepo convention uses tm/* mapped to TS sources; keep both to avoid churn and resolution issues. Also ensure esbuild resolves these aliases (alias entries or a tsconfig-paths plugin), otherwise runtime imports can fail. Based on learnings.
Apply this additive diff to tsconfig paths:
```diff
     "paths": {
       "@/*": ["./src/*"],
       "@/components/*": ["./src/components/*"],
       "@/lib/*": ["./src/lib/*"],
       "@tm/core": ["../../packages/tm-core/src/index.ts"],
-      "@tm/core/*": ["../../packages/tm-core/src/*"]
+      "@tm/core/*": ["../../packages/tm-core/src/*"],
+      "tm/core": ["../../packages/tm-core/src/index.ts"],
+      "tm/core/*": ["../../packages/tm-core/src/*"]
     }
```

If you intend to standardize broadly, consider a single wildcard (verify no conflicts first): `"tm/*": ["../../packages/*/src/index.ts"]`.
tests/integration/cli/complex-cross-tag-scenarios.test.js (1)
373-383: Revert widened timeout; add deterministic warm-up to keep the guardrail strong

Bumping 11s→12s (CI) and 6s→8s (local) weakens the signal. Warm up the binary, restore the dataset, then measure and keep the original limits.

```diff
-      fs.writeFileSync(tasksPath, JSON.stringify(largeTaskSet, null, 2));
-      // Should complete within reasonable time
-      const timeout = process.env.CI ? 12000 : 8000;
-      const startTime = Date.now();
-      execSync(
-        `node ${binPath} move --from=50 --from-tag=master --to-tag=in-progress --with-dependencies`,
-        { stdio: 'pipe' }
-      );
-      const endTime = Date.now();
+      fs.writeFileSync(tasksPath, JSON.stringify(largeTaskSet, null, 2));
+      // Warm up binary to remove cold-start variance, then restore dataset
+      const cmd = `node ${binPath} move --from=50 --from-tag=master --to-tag=in-progress --with-dependencies`;
+      execSync(cmd, { stdio: 'pipe' }); // warm-up
+      fs.writeFileSync(tasksPath, JSON.stringify(largeTaskSet, null, 2)); // restore baseline
+      // Should complete within original guardrail
+      const timeout = process.env.CI ? 11000 : 6000;
+      const startTime = Date.now();
+      execSync(cmd, { stdio: 'pipe' });
+      const endTime = Date.now();
```

scripts/modules/config-manager.js (2)
312-316: Zod int constraint fix looks good.

Using z.number().int().positive().optional() is correct for Zod v4.
330-333: Fix: z.url() is not a Zod v4 API — this will throw at runtime.

Use z.string().url() to validate URLs. This was flagged earlier and still persists.

```diff
-    env: z.record(z.string(), z.string()).optional(),
-    url: z.url().optional(),
-    headers: z.record(z.string(), z.string()).optional()
+    env: z.record(z.string(), z.string()).optional(),
+    url: z.string().url().optional(),
+    headers: z.record(z.string(), z.string()).optional()
```

apps/extension/package.json (2)
243-244: Approve using "*" to reference the task-master-ai workspace package

The root package.json defines "task-master-ai" and the workspace config includes ".", so "*" will resolve to the local package version rather than pulling from the registry.
279-281: TS 5.9 bump looks good; @tm/core is only a build-time dependency, no runtime imports detected.

.github/workflows/extension-ci.yml (1)
44-46: .github/workflows/extension-ci.yml: Harden CI install with workspace flags and extend timeout

```diff
-      - name: Install Monorepo Dependencies
-        run: npm ci
-        timeout-minutes: 5
+      - name: Install Monorepo Dependencies
+        run: npm ci --workspaces --include-workspace-root --no-audit --no-fund
+        timeout-minutes: 8
```

Optional perf: in typecheck/build jobs, install only the extension workspace:
- run: npm ci --workspace=apps/extension --no-audit --no-fund
```yaml
      - name: Install Monorepo Dependencies
        run: npm ci
        timeout-minutes: 5
```
🧹 Nitpick | 🔵 Trivial
🧩 Analysis chain
Root-level npm ci: verify workspaces + optimize caching/timeouts
- Ensure the repo uses npm workspaces with a root package-lock that includes apps/extension, otherwise root npm ci will miss extension deps.
- actions/cache on node_modules is ineffective with npm ci (it wipes node_modules). Prefer setup-node’s built‑in npm cache or cache ~/.npm.
- Consider adding fast, safe flags to npm ci.
Suggested tweak for this step:
- - name: Install Monorepo Dependencies
- run: npm ci
+ - name: Install Monorepo Dependencies
+ run: npm ci --workspaces --include-workspace-root --no-audit --no-fund
timeout-minutes: 5

Optional: switch to setup-node cache and drop the explicit actions/cache step:
- uses: actions/setup-node@v4
with:
node-version: 20
+ cache: npm
+ cache-dependency-path: '**/package-lock.json'

Run to confirm workspace wiring before merging:
Based on learnings.
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "Root workspaces:"
jq -r '.workspaces // empty' package.json
echo "Check apps/extension is a workspace:"
jq -e '.workspaces[] | strings | select(test("apps/extension"))' package.json >/dev/null && echo "OK: apps/extension in workspaces" || (echo "Missing apps/extension in workspaces"; exit 1)
test -f package-lock.json && echo "OK: root package-lock.json present"

Length of output: 128
🏁 Script executed:
#!/bin/bash
# List all workflow files
ls -1 .github/workflows

Length of output: 326
🏁 Script executed:
#!/bin/bash
# Show first 100 lines with numbers
nl -ba .github/workflows/extension-release.yml | sed -n '1,100p'

Length of output: 40
🏁 Script executed:
#!/bin/bash
# Check for setup-node usage
grep -R "setup-node" -n .github/workflows/extension-release.yml || true
# Check for actions/cache usage
grep -R "actions/cache" -n .github/workflows/extension-release.yml || true
# Locate npm ci step
grep -R "npm ci" -n .github/workflows/extension-release.yml || true
# Show context (first 200 lines)
sed -n '1,200p' .github/workflows/extension-release.yml

Length of output: 3149
Enable workspace flags & migrate npm caching in extension-release.yml
- Workspaces are configured (`"apps/*"` covers `apps/extension`) and a single root `package-lock.json` installs extension deps.
- Drop the `actions/cache`–based node_modules cache (npm ci wipes node_modules); enable setup-node’s built-in npm cache instead:

  ```yaml
        - uses: actions/setup-node@v4
          with:
            node-version: 20
            cache: npm
  ```

  ([github.com](https://github.com/actions/setup-node?utm_source=openai))
- Update the install step to include all workspaces and skip audit/funding:

  ```diff
  -        run: npm ci
  +        run: npm ci --workspaces --include-workspace-root --no-audit --no-fund
  ```

  ([docs.npmjs.com](https://docs.npmjs.com/cli/v11/commands/npm-ci?utm_source=openai))
🤖 Prompt for AI Agents
.github/workflows/extension-release.yml lines 34 to 36: CI currently runs a
plain `npm ci` which doesn't leverage setup-node's npm caching or
workspace-aware install and conflicts with actions/cache usage; update the
workflow to use actions/setup-node@v4 with node-version: 20 and cache: npm,
remove the actions/cache node_modules cache step, and change the install step to
run npm ci with workspace flags and skip audit/funding (e.g., include
--workspace or --workspaces as appropriate and --no-audit --no-fund) so the root
package-lock installs workspace deps and setup-node handles caching.
```diff
   "displayName": "Taskmaster AI",
   "description": "A visual Kanban board interface for Taskmaster projects in VS Code",
-  "version": "0.23.1",
+  "version": "0.25.3",
```
🧩 Analysis chain
Version bump to 0.25.3: LGTM. Please sync dev manifest and changelog before publish.
The marketplace manifest looks correct. Ensure apps/extension/package.json is updated to the same version (if you keep them in lockstep) and add a changelog entry/tag for the release. The differing names between dev and publish manifests are intentional here. Based on learnings.
Run to confirm version parity:
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "Versions in extension manifests:"
rg -n '"version":' apps/extension/package.publish.json apps/extension/package.json -C0 || true

Length of output: 144
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "Checking for CHANGELOG.md at repo root"
if [ -f CHANGELOG.md ]; then
echo "Found CHANGELOG.md; searching for version entry"
rg -n '"0\.25\.3"' CHANGELOG.md || echo "No entry for 0.25.3 in CHANGELOG.md"
else
echo "CHANGELOG.md not found"
fi
echo "Searching for any occurrence of 0.25.3 across entire repo"
rg -n '"0\.25\.3"' -C0 || true

Length of output: 338
Add missing CHANGELOG entry for v0.25.3. Both extension manifests are in sync at version 0.25.3; update CHANGELOG.md before publishing.
🤖 Prompt for AI Agents
In apps/extension/package.publish.json around line 5, the package version is
0.25.3 but CHANGELOG.md lacks an entry for v0.25.3; add a new CHANGELOG.md entry
for version 0.25.3 (or move entries from Unreleased into a 0.25.3 heading),
include a brief summary of the changes included in this release, the release
date, and relevant PR/commit references, and ensure the changelog formatting and
ordering matches the project convention before publishing.
```js
	// Define CommandSpecificSchema using flexible keys, but restrict to known commands
	const CommandSpecificSchema = z
		.record(z.string(), BaseSettingsSchema)
		.refine(
			(obj) =>
				Object.keys(obj || {}).every((k) => AI_COMMAND_NAMES.includes(k)),
			{ message: 'Invalid command name in commandSpecific' }
		);
```
🧹 Nitpick | 🔵 Trivial
Tighten key schema: prefer enum-keyed record over string+refine.
This removes the runtime refine and gives better type safety.
- const CommandSpecificSchema = z
- .record(z.string(), BaseSettingsSchema)
- .refine(
- (obj) => Object.keys(obj || {}).every((k) => AI_COMMAND_NAMES.includes(k)),
- { message: 'Invalid command name in commandSpecific' }
- );
+ const CommandSpecificSchema = z.record(
+ z.enum(AI_COMMAND_NAMES),
+ BaseSettingsSchema
+ );
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
// Define CommandSpecificSchema using flexible keys, but restrict to known commands
const CommandSpecificSchema = z.record(
	z.enum(AI_COMMAND_NAMES),
	BaseSettingsSchema
);
🤖 Prompt for AI Agents
In scripts/modules/config-manager.js around lines 338 to 345, the
CommandSpecificSchema currently uses z.record(z.string(), ...) with a runtime
.refine to check keys against AI_COMMAND_NAMES; replace this with an enum- or
union-keyed record to enforce allowed keys at schema-level and remove the
refine. Concretely, convert AI_COMMAND_NAMES into a literal tuple (or create a
z.enum/z.union from its values), then use z.record(z.enum(...)/z.union(...),
BaseSettingsSchema) as the schema for commandSpecific and drop the .refine call;
ensure types line up (cast AI_COMMAND_NAMES as const or create literals) so the
schema and TypeScript types derive the allowed command names directly.
@coderabbitai it gives an error when I do that
🧩 Analysis chain
@Crunchyman-ralph Let me check the structure of AI_COMMAND_NAMES to understand why the enum approach isn't working.
🏁 Script executed:
#!/bin/bash
# Check the structure of AI_COMMAND_NAMES constant
cat src/constants/commands.js
Length of output: 314
@Crunchyman-ralph I see the issue now. AI_COMMAND_NAMES is exported as a regular array, not a const assertion tuple, which is why z.enum(AI_COMMAND_NAMES) fails.
For z.enum() to work, it needs either:
- Literal values: z.enum(['add-task', 'analyze-complexity', ...])
- A const assertion: z.enum(AI_COMMAND_NAMES as const)

But since AI_COMMAND_NAMES is imported from another module as a regular array, TypeScript can't infer the literal string types that z.enum() requires.
The current implementation with .refine() is actually the correct approach for this scenario. It provides runtime validation while working with the imported array structure.
If you really want to use the enum approach, you'd need to modify src/constants/commands.js:
export const AI_COMMAND_NAMES = [
'add-task',
'analyze-complexity',
'expand-task',
'parse-prd',
'research',
'research-save',
'update-subtask',
'update-task',
'update-tasks'
] as const;

But the current .refine() implementation is perfectly valid and doesn't need to be changed. My initial suggestion was incorrect for this use case.
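The runtime check that the `.refine()` performs can be illustrated in plain JavaScript without Zod. This is a demo sketch only: the abbreviated command list and the `hasOnlyKnownCommands` helper are stand-ins for the real AI_COMMAND_NAMES export and the schema's refine callback.

```javascript
// Plain-JS equivalent of the .refine() key check (no Zod needed for the demo).
// The command list is abbreviated; the real export lives in src/constants/commands.js.
const AI_COMMAND_NAMES = ['add-task', 'analyze-complexity', 'expand-task', 'parse-prd'];

function hasOnlyKnownCommands(obj) {
	// Every key of commandSpecific must be one of the known AI command names
	return Object.keys(obj || {}).every((k) => AI_COMMAND_NAMES.includes(k));
}

console.log(hasOnlyKnownCommands({ 'add-task': {} })); // → true
console.log(hasOnlyKnownCommands({ 'made-up-command': {} })); // → false
```

Because the check runs against whatever array is imported at runtime, it keeps working even though TypeScript cannot narrow the imported array to literal key types.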
(\ /)
( .. )
o(")(") learning from mistakes
✏️ Learnings added
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1246
File: scripts/modules/config-manager.js:338-345
Timestamp: 2025-09-29T20:23:17.003Z
Learning: When using z.enum() with Zod, the enum values must be known at compile time as literal types. Imported arrays from other modules cannot be used directly with z.enum() unless they have const assertions. In such cases, z.record(z.string(), ...).refine() is the appropriate approach for runtime key validation.
Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.
🧠 Learnings used
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1178
File: packages/tm-core/src/auth/config.ts:5-7
Timestamp: 2025-09-02T21:51:27.921Z
Learning: The user Crunchyman-ralph prefers not to use node: scheme imports (e.g., 'node:os', 'node:path') for Node.js core modules and considers suggestions to change bare imports to node: scheme as too nitpicky.
What type of PR is this?
Description
Related Issues
How to Test This
# Example commands or steps

Expected result:
Contributor Checklist
npm run changeset
npm test
npm run format-check (or npm run format to fix)
Changelog Entry
For Maintainers
Summary by CodeRabbit
New Features
Documentation
Chores
Tests