Conversation
) Co-authored-by: Ralph Khreish <Crunchyman-ralph@users.noreply.github.com> Co-authored-by: claude[bot] <209825114+claude[bot]@users.noreply.github.com> Fixes #1355
Co-authored-by: Ralph Khreish <Crunchyman-ralph@users.noreply.github.com> Co-authored-by: claude[bot] <209825114+claude[bot]@users.noreply.github.com> Fixes #1353
Co-authored-by: Claude <noreply@anthropic.com>
#1326) Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
Co-authored-by: Ralph Khreish <Crunchyman-ralph@users.noreply.github.com> Co-authored-by: claude[bot] <209825114+claude[bot]@users.noreply.github.com> Resolves #1325
🦋 Changeset detected

Latest commit: 560a469

The changes in this PR will be included in the next version bump. Not sure what this means? Click here to learn what changesets are. Click here if you're a maintainer who wants to add another changeset to this PR
Walkthrough

Adds interactive model-setup modules and three OpenAI-compatible providers (Z.ai, LM Studio, OpenAI-compatible), threads Subtask types/IDs through CLI/UI, makes workflow transitions await persistence with Steno, migrates commit flow to WorkflowService, introduces baseURL handling for custom providers, and removes the AutopilotCommand CLI module.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor User
    participant CLI as CLI (show.command)
    participant TasksDomain
    participant Storage
    participant UI as Task Detail Component
    User->>CLI: task-master show 1.1
    CLI->>TasksDomain: get('1.1')
    TasksDomain->>Storage: load task 1
    Storage-->>TasksDomain: Task with subtasks[]
    alt Subtask Found
        TasksDomain-->>CLI: { task: Subtask, isSubtask: true, originalTaskId: '1.1' }
    else Subtask Not Found
        TasksDomain-->>CLI: { task: null, isSubtask: true }
    end
    CLI->>UI: displayTaskDetails(Subtask, originalTaskId='1.1')
    UI->>UI: displayId = originalTaskId || task.id
    UI-->>User: Rendered details with ID "1.1"
```
```mermaid
sequenceDiagram
    participant CommitCmd as Commit Command
    participant WorkflowSvc as WorkflowService
    participant Orchestrator as WorkflowOrchestrator
    participant StateManager as StateManager
    participant StenoWriter as Steno Writer
    CommitCmd->>WorkflowSvc: commit(...)
    WorkflowSvc->>Orchestrator: await transition(COMMIT_COMPLETE)
    Orchestrator->>StateManager: persist()
    StateManager->>StateManager: ensureWriter()
    StateManager->>StenoWriter: write(state)
    StenoWriter->>StenoWriter: queue & atomic write
    StenoWriter-->>StateManager: write complete
    StateManager-->>Orchestrator: persist complete
    Orchestrator-->>WorkflowSvc: transition resolved
    WorkflowSvc-->>CommitCmd: commit completed
```
```mermaid
sequenceDiagram
    actor User
    participant CLI as models setup
    participant PromptModule
    participant CustomProviders
    participant Config
    User->>CLI: task-master models --setup
    CLI->>PromptModule: buildPromptChoices('main')
    PromptModule-->>CLI: choices (standard + custom)
    CLI->>PromptModule: promptForModel('main')
    PromptModule-->>User: interactive searchable prompt
    User->>CLI: selects OpenAI-compatible provider
    CLI->>CustomProviders: handleCustomProvider('OPENAI_COMPATIBLE')
    CustomProviders->>User: prompt for baseURL and model id
    CustomProviders-->>CLI: { modelId, provider, baseURL, success: true }
    CLI->>Config: setModel('main', modelId, { provider, baseURL })
    Config-->>CLI: Success
    CLI-->>User: Model configured
```
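The choice-assembly step in the setup flow above might look like the following. This is a hypothetical sketch, not the actual `@tm` implementation — `buildPromptChoices`, the option shapes, and the `custom:` value prefix are all assumptions for illustration.

```javascript
// Sketch: combine standard models and custom providers into one prompt list,
// separated by labeled group headers (mirroring the diagram above).
function buildPromptChoices(standardModels, customProviders) {
	const separator = (label) => ({ type: 'separator', label });
	return [
		separator('── Standard Models ──'),
		...standardModels.map((m) => ({ name: m.name, value: m.id })),
		separator('── Custom Providers ──'),
		...customProviders.map((p) => ({ name: p, value: `custom:${p}` }))
	];
}

const choices = buildPromptChoices(
	[{ id: 'gpt-4o', name: 'GPT-4o' }],
	['OPENAI_COMPATIBLE', 'LMSTUDIO']
);
console.log(choices.length); // 5 entries: 2 separators + 3 selectable rows
```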
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes
Possibly related PRs
Suggested reviewers
Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
📜 Recent review details

Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📒 Files selected for processing (1)
🚧 Files skipped from review as they are similar to previous changes (1)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
scripts/modules/commands.js (1)
2877-3055: Keep long-form flags in kebab-case

The CLI guidelines require long-form options to use kebab-case. Introducing `--baseURL` breaks that convention and also leaves us with mixed casing in Commander's parsed fields (`options.baseURL`). Please rename the flag to `--base-url`, update the help text, and switch the option reads to `options.baseUrl` so everything stays consistent.

```diff
-	.option(
-		'--baseURL <url>',
-		'Custom base URL for openai-compatible, lmstudio, or ollama providers (e.g., http://localhost:8000/v1)'
-	)
+	.option(
+		'--base-url <url>',
+		'Custom base URL for openai-compatible, lmstudio, or ollama providers (e.g., http://localhost:8000/v1)'
+	)
@@
-  $ task-master models --set-main qwen3-vl-4b --lmstudio --baseURL http://localhost:8000/v1       # Set LM Studio model with custom base URL
-  $ task-master models --set-main my-model --openai-compatible --baseURL http://localhost:8000/v1 # Set custom OpenAI-compatible model with custom endpoint
+  $ task-master models --set-main qwen3-vl-4b --lmstudio --base-url http://localhost:8000/v1       # Set LM Studio model with custom base URL
+  $ task-master models --set-main my-model --openai-compatible --base-url http://localhost:8000/v1 # Set custom OpenAI-compatible model with custom endpoint
@@
-				baseURL: options.baseURL
+				baseURL: options.baseUrl
@@
-				baseURL: options.baseURL
+				baseURL: options.baseUrl
@@
-				baseURL: options.baseURL
+				baseURL: options.baseUrl
```
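The kebab-case → camelCase mapping behind this review comment can be sketched without pulling in Commander itself. This is an illustrative reimplementation of the naming convention Commander applies to long flags, not Commander's actual code:

```javascript
// Sketch: Commander exposes a long flag like `--base-url` on the parsed
// options object under the camelCased name `baseUrl`, which is why the
// review asks for `options.baseUrl` once the flag is renamed.
function camelCaseFlag(flag) {
	return flag
		.replace(/^--/, '')
		.replace(/-([a-z])/g, (_, c) => c.toUpperCase());
}

console.log(camelCaseFlag('--base-url')); // "baseUrl"
console.log(camelCaseFlag('--set-main')); // "setMain"
```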
🧹 Nitpick comments (1)
apps/cli/src/commands/models/fetchers.ts (1)
16-60: Add optional OpenRouter auth header and friendlier 401 messaging

Could we include the optional `OPENROUTER_API_KEY` header and emit a clearer hint when the API replies with 401? Without the header, teams running against private or rate-limited OpenRouter accounts still hit a 401 even though they've configured the key, and the current generic message doesn't tell them how to recover. I'd lean toward wiring the env var straight into `headers.Authorization` (only when set) and special-casing the 401 response so the CLI explains that the key is missing or invalid.

```diff
-		const options = {
+		const headers: Record<string, string> = {
+			Accept: 'application/json'
+		};
+		const apiKey = process.env.OPENROUTER_API_KEY;
+		if (apiKey) {
+			headers.Authorization = `Bearer ${apiKey}`;
+		}
+
+		const options = {
 			hostname: 'openrouter.ai',
 			path: '/api/v1/models',
 			method: 'GET',
-			headers: {
-				Accept: 'application/json'
-			}
+			headers
@@
-			} else {
-				resolve({
-					success: false,
-					error: `OpenRouter API returned status ${res.statusCode}`
-				});
-			}
+			} else {
+				resolve({
+					success: false,
+					error:
+						res.statusCode === 401
+							? 'OpenRouter API returned 401. Set OPENROUTER_API_KEY or verify that the key has access to /api/v1/models.'
+							: `OpenRouter API returned status ${res.statusCode}`
+				});
+			}
```

Based on learnings
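The header-construction part of the suggestion above can be isolated into a small testable helper. This is a sketch, not the fetcher's real code: the function name is hypothetical, and the env object is passed in rather than read from `process.env` so the behavior is easy to exercise.

```javascript
// Sketch: attach an Authorization header only when OPENROUTER_API_KEY is set,
// leaving anonymous requests unchanged (the behavior the review asks for).
function buildOpenRouterHeaders(env) {
	const headers = { Accept: 'application/json' };
	if (env.OPENROUTER_API_KEY) {
		headers.Authorization = `Bearer ${env.OPENROUTER_API_KEY}`;
	}
	return headers;
}

console.log(buildOpenRouterHeaders({}).Authorization); // undefined
console.log(buildOpenRouterHeaders({ OPENROUTER_API_KEY: 'sk-123' }).Authorization); // "Bearer sk-123"
```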
```
---
"task-master-ai": patch
---

Improve parse PRD schema for better llm model compatiblity
```
Fix typo in changeset description.
Line 5 contains a spelling error: "compatiblity" should be "compatibility".
Apply this diff:

```diff
-Improve parse PRD schema for better llm model compatiblity
+Improve parse PRD schema for better llm model compatibility
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```
---
"package-name": patch
---

Improve parse PRD schema for better llm model compatibility
```
🧰 Tools
🪛 LanguageTool
[grammar] ~5-~5: Ensure spelling is correct
Context: ...e parse PRD schema for better llm model compatiblity - Fixes #1353
(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)
🤖 Prompt for AI Agents
In .changeset/fix-metadata-nullable.md around line 5, there's a typo in the
changeset description: replace "compatiblity" with the correct spelling
"compatibility" so the description reads "Improve parse PRD schema for better
llm model compatibility".
```js
	choices = [
		...systemOptions,
		new Separator('\n── Standard Models ──'),
		...roleChoices,
		new Separator('\n── Custom Providers ──'),
		...customProviderOptions
	];
	defaultIndex =
		currentChoiceIndex !== -1
			? currentChoiceIndex + systemLength + 1
			: noChangeOption
				? 1
				: 0;
}
```
Default fallback selects "Cancel" instead of "No change".

When the current model isn't in the available list we drop into the fallback branch, but the hardcoded `1` in the ternary points at the "Cancel" row. Pressing Enter immediately aborts setup even though we still render "No change" as the first option. Please pick the actual index of the control entries instead.
```diff
-	defaultIndex =
-		currentChoiceIndex !== -1
-			? currentChoiceIndex + systemLength + 1
-			: noChangeOption
-				? 1
-				: 0;
+	if (currentChoiceIndex !== -1) {
+		defaultIndex = currentChoiceIndex + systemLength + 1;
+	} else if (noChangeOption) {
+		defaultIndex = systemOptions.indexOf(noChangeOption);
+	} else {
+		defaultIndex = systemOptions.indexOf(cancelOption);
+	}
```

Committable suggestion skipped: line range outside the PR's diff.
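The index fallback the comment above suggests can be sketched in isolation. Names are taken from the snippet under review; the option object shapes are assumptions for the sake of a runnable example:

```javascript
// Sketch: pick the default prompt row. When the current model is listed,
// offset past the system options plus the one separator inserted after them;
// otherwise fall back to the actual position of "No change" (or "Cancel"),
// rather than a hardcoded index.
function resolveDefaultIndex(systemOptions, currentChoiceIndex, noChangeOption, cancelOption) {
	if (currentChoiceIndex !== -1) {
		// +1 skips the "Standard Models" separator that follows systemOptions
		return currentChoiceIndex + systemOptions.length + 1;
	}
	if (noChangeOption) return systemOptions.indexOf(noChangeOption);
	return systemOptions.indexOf(cancelOption);
}

const noChange = { name: 'No change' };
const cancel = { name: 'Cancel' };
console.log(resolveDefaultIndex([noChange, cancel], -1, noChange, cancel)); // 0
console.log(resolveDefaultIndex([cancel], -1, null, cancel)); // 0
console.log(resolveDefaultIndex([noChange, cancel], 2, noChange, cancel)); // 5
```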
```diff
 		"typescript": "^5.9.2",
 		"@tm/core": "*",
-		"task-master-ai": "*"
+		"task-master-ai": "0.31.0-rc.0"
```
🛠️ Refactor suggestion | 🟠 Major
🧩 Analysis chain
Verify version pinning for local workspace package.
The task-master-ai dependency is pinned to 0.31.0-rc.0 instead of using "*". According to learnings, local workspace packages should use "*" to ensure the local version is always used rather than attempting registry resolution.
Is this intentional for RC testing, or should it remain "*"?
🏁 Script executed:

```bash
#!/bin/bash
# Check if other workspace packages use "*" for task-master-ai
rg -g 'package.json' -A2 '"task-master-ai"' | grep -E '(task-master-ai|version)'
```

Length of output: 195
I need to search more broadly to understand how other workspace packages reference task-master-ai and whether this RC pinning is intentional.
🏁 Script executed:

```bash
#!/bin/bash
# Find all references to task-master-ai in package.json files across workspace
rg -g 'package.json' '"task-master-ai"' -B1 -A1
```

Length of output: 368
🏁 Script executed:

```bash
#!/bin/bash
# Check if task-master-ai is listed as a workspace package
fd 'package.json' -x grep -l '"workspaces"' | head -1 | xargs cat | grep -A30 '"workspaces"'
```

Length of output: 1675
I need to verify whether task-master-ai is a published package or internal workspace package to determine if the explicit version pinning is intentional.
🏁 Script executed:

```bash
#!/bin/bash
# Check root package.json to see if task-master-ai is private or published
cat package.json | grep -E '("name"|"private"|"publishConfig")'
```

Length of output: 26
🏁 Script executed:

```bash
#!/bin/bash
# Look for npm or release configuration
ls -la | grep -E '^\.|package|npm|release' && echo "---" && fd -e 'npmrc|npmignore|gitignore' -x cat
```

Length of output: 262
---
🏁 Script executed:

```bash
#!/bin/bash
# Get full root package.json to see private field and package structure
cat package.json
```

Length of output: 5056
Use "*" for task-master-ai to match monorepo workspace pattern.
The task-master-ai package is part of the workspace (root-level package included in "workspaces") and should follow the same dependency constraint pattern as other workspace packages like @tm/core, which correctly uses "*". Per your established learning, workspace packages should use "*" to ensure the local version is consumed rather than attempting registry resolution. The explicit 0.31.0-rc.0 pin creates an inconsistency within the monorepo dependency model.
Change apps/extension/package.json line 278 from "task-master-ai": "0.31.0-rc.0" to "task-master-ai": "*".
🤖 Prompt for AI Agents
In apps/extension/package.json around line 278, the dependency for
"task-master-ai" is pinned to "0.31.0-rc.0" which conflicts with the monorepo
workspace pattern; change the version string to "*" so the local workspace
package is used (replace the current "task-master-ai": "0.31.0-rc.0" entry with
"task-master-ai": "*").
```js
	if (
		computedBaseURL &&
		(determinedProvider === CUSTOM_PROVIDERS.OPENAI_COMPATIBLE ||
			determinedProvider === CUSTOM_PROVIDERS.LMSTUDIO ||
			determinedProvider === CUSTOM_PROVIDERS.OLLAMA)
	) {
		currentConfig.models[role].baseURL = computedBaseURL;
	} else {
		// Remove baseURL when switching to a provider that doesn't need it
		delete currentConfig.models[role].baseURL;
	}
```
Preserve required baseURL for OpenAI-compatible/LM Studio/Ollama models
When the target model already exists in getAvailableModels (e.g. one of the seeded OPENAI_COMPATIBLE recommendations), the branch above sets determinedProvider without ever populating computedBaseURL. We then fall into this else path, delete baseURL, and persist a config that points at an OpenAI-compatible (or LM Studio/Ollama) provider with no endpoint. The very next call to the AI service blows up because it builds a request against undefined.
You can reproduce today by switching between two OPENAI-compatible models that ship in supported-models.json: the CLI doesn’t collect a new baseURL, setModel accepts the change, strips the existing base URL, and the provider becomes unusable.
Please ensure we always carry forward (or default) the base URL for providers that require one before we hit this block, regardless of whether modelData was found.
```diff
-	if (
-		computedBaseURL &&
-		(determinedProvider === CUSTOM_PROVIDERS.OPENAI_COMPATIBLE ||
-			determinedProvider === CUSTOM_PROVIDERS.LMSTUDIO ||
-			determinedProvider === CUSTOM_PROVIDERS.OLLAMA)
-	) {
-		currentConfig.models[role].baseURL = computedBaseURL;
-	} else {
-		// Remove baseURL when switching to a provider that doesn't need it
-		delete currentConfig.models[role].baseURL;
-	}
+	if (!computedBaseURL) {
+		if (determinedProvider === CUSTOM_PROVIDERS.OPENAI_COMPATIBLE) {
+			const existingBaseURL = getBaseUrlForRole(role, projectRoot);
+			if (existingBaseURL) {
+				computedBaseURL = existingBaseURL;
+			} else {
+				throw new Error(
+					'Base URL is required for OpenAI-compatible providers. Please provide a baseURL.'
+				);
+			}
+		} else if (determinedProvider === CUSTOM_PROVIDERS.LMSTUDIO) {
+			computedBaseURL =
+				getBaseUrlForRole(role, projectRoot) || 'http://localhost:1234/v1';
+		} else if (determinedProvider === CUSTOM_PROVIDERS.OLLAMA) {
+			computedBaseURL =
+				getBaseUrlForRole(role, projectRoot) || 'http://localhost:11434/api';
+		}
+	}
+
+	if (
+		determinedProvider === CUSTOM_PROVIDERS.OPENAI_COMPATIBLE ||
+		determinedProvider === CUSTOM_PROVIDERS.LMSTUDIO ||
+		determinedProvider === CUSTOM_PROVIDERS.OLLAMA
+	) {
+		currentConfig.models[role].baseURL = computedBaseURL;
+	} else {
+		delete currentConfig.models[role].baseURL;
+	}
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In scripts/modules/task-manager/models.js around lines 688 to 698,
computedBaseURL can be empty when determinedProvider is OPENAI_COMPATIBLE,
LMSTUDIO, or OLLAMA which causes the code to delete an existing baseURL and
persist a broken config; ensure computedBaseURL is preserved or defaulted before
this branch by setting computedBaseURL = currentConfig.models[role]?.baseURL ||
getDefaultBaseURLForProvider(determinedProvider) when the provider requires an
endpoint (OPENAI_COMPATIBLE/LMSTUDIO/OLLAMA), so the if branch sees a valid
value and the else no longer strips a required baseURL.
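The carry-forward/default behavior this review comment asks for can be condensed into one helper. This is a sketch, not the code in `models.js`: the function name `resolveBaseURL` is hypothetical, and provider names and default URLs mirror the suggestion above.

```javascript
// Sketch: providers that talk to an OpenAI-compatible endpoint must keep a
// baseURL; carry forward the existing value, fall back to a known local
// default, and only error when nothing sensible exists.
const DEFAULT_BASE_URLS = {
	LMSTUDIO: 'http://localhost:1234/v1',
	OLLAMA: 'http://localhost:11434/api'
};

function resolveBaseURL(provider, computedBaseURL, existingBaseURL) {
	const needsBaseURL = ['OPENAI_COMPATIBLE', 'LMSTUDIO', 'OLLAMA'].includes(provider);
	if (!needsBaseURL) return undefined; // caller deletes baseURL for such providers
	const url = computedBaseURL || existingBaseURL || DEFAULT_BASE_URLS[provider];
	if (!url) {
		throw new Error('Base URL is required for OpenAI-compatible providers.');
	}
	return url;
}

console.log(resolveBaseURL('ANTHROPIC', null, 'http://x')); // undefined
console.log(resolveBaseURL('LMSTUDIO', null, null)); // "http://localhost:1234/v1"
console.log(resolveBaseURL('OPENAI_COMPATIBLE', null, 'http://localhost:8000/v1')); // existing URL carried forward
```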
```js
	supportsStructuredOutputs: true
	// LM Studio only supports json_schema mode, not json_object mode
	// Disable native structured outputs to use alternative strategies
});
```
Clarify the contradictory comment about structured outputs.
Line 34 sets supportsStructuredOutputs: true, but the comment on lines 35-36 states "Disable native structured outputs to use alternative strategies." This is contradictory and creates confusion about the intended behavior.
Based on the comment stating LM Studio only supports json_schema mode, setting supportsStructuredOutputs: true appears correct. However, the comment needs updating to accurately reflect the configuration.
Apply this diff to fix the comment:
```diff
 super({
 	name: 'LM Studio',
 	apiKeyEnvVar: 'LMSTUDIO_API_KEY',
 	requiresApiKey: false, // Local server, no API key needed
 	defaultBaseURL: 'http://localhost:1234/v1',
 	supportsStructuredOutputs: true
-	// LM Studio only supports json_schema mode, not json_object mode
-	// Disable native structured outputs to use alternative strategies
+	// LM Studio supports json_schema mode for structured outputs
 });
```

📝 Committable suggestion
```js
	supportsStructuredOutputs: true
	// LM Studio supports json_schema mode for structured outputs
});
```
🤖 Prompt for AI Agents
In src/ai-providers/lmstudio.js around lines 34 to 37, the comment contradicts
the code: supportsStructuredOutputs is set to true but the comment says "Disable
native structured outputs"; update the comment to reflect the intended behavior
— that LM Studio supports structured outputs only in json_schema mode (not
json_object), so we enable supportsStructuredOutputs and note that json_schema
mode is used and json_object is unsupported; keep the code as is and replace the
contradictory comment with a concise explanation mentioning json_schema-only
support.
```js
it('should disable structured outputs (LM Studio only supports json_schema mode)', () => {
	expect(provider.supportsStructuredOutputs).toBe(true);
});
```
Fix the misleading test description.
The test description states "should disable structured outputs" but the assertion expects supportsStructuredOutputs to be true. This contradicts the assertion and mirrors the confusing comment in src/ai-providers/lmstudio.js.
Apply this diff to correct the description:
```diff
-	it('should disable structured outputs (LM Studio only supports json_schema mode)', () => {
+	it('should support structured outputs (LM Studio supports json_schema mode)', () => {
 		expect(provider.supportsStructuredOutputs).toBe(true);
 	});
```

📝 Committable suggestion
```js
it('should support structured outputs (LM Studio supports json_schema mode)', () => {
	expect(provider.supportsStructuredOutputs).toBe(true);
});
```
🤖 Prompt for AI Agents
In tests/unit/ai-providers/lmstudio.test.js around lines 27 to 29, the test
description incorrectly says "should disable structured outputs" while the
assertion expects provider.supportsStructuredOutputs toBe(true); update the test
description to accurately reflect the expectation (e.g., "should enable
structured outputs (LM Studio supports json_schema mode)") so the description
matches the assertion and removes the contradiction.
What type of PR is this?
Description
Related Issues
How to Test This
# Example commands or steps

Expected result:
Contributor Checklist
- [ ] `npm run changeset`
- [ ] `npm test`
- [ ] `npm run format-check` (or `npm run format` to fix)

Changelog Entry
For Maintainers
Summary by CodeRabbit
New Features
Bug Fixes
Documentation