
Release 0.31.0 #1366

Merged
Crunchyman-ralph merged 10 commits into main from next on Nov 1, 2025

Conversation

@Crunchyman-ralph
Collaborator

@Crunchyman-ralph Crunchyman-ralph commented Oct 31, 2025

What type of PR is this?

  • 🐛 Bug fix
  • ✨ Feature
  • 🔌 Integration
  • 📝 Docs
  • 🧹 Refactor
  • Other:

Description

Related Issues

How to Test This

# Example commands or steps

Expected result:

Contributor Checklist

  • Created changeset: npm run changeset
  • Tests pass: npm test
  • Format check passes: npm run format-check (or npm run format to fix)
  • Addressed CodeRabbit comments (if any)
  • Linked related issues (if any)
  • Manually tested the changes

Changelog Entry


For Maintainers

  • PR title follows conventional commits
  • Target branch correct
  • Labels added
  • Milestone assigned (if applicable)

Summary by CodeRabbit

  • New Features

    • Custom OpenAI-compatible provider support, configurable endpoints and API key options
    • Interactive model setup UI and improved model-selection workflows (including fetch/validation)
    • Native Z.ai GLM model support and LM Studio offline/local-model integration
  • Bug Fixes

    • Subtask IDs now display full compound notation (e.g., 104.1)
    • Improved PRD schema parsing for LLM compatibility
    • MCP stability and autopilot state persistence fixes
  • Documentation

    • Added Gemini CLI setup and updated Available Models listings

Crunchyman-ralph and others added 9 commits October 29, 2025 16:20
)

Co-authored-by: Ralph Khreish <Crunchyman-ralph@users.noreply.github.com>
Co-authored-by: claude[bot] <209825114+claude[bot]@users.noreply.github.com>
Fixes #1355
Co-authored-by: Ralph Khreish <Crunchyman-ralph@users.noreply.github.com>
Co-authored-by: claude[bot] <209825114+claude[bot]@users.noreply.github.com>
Fixes #1353
Co-authored-by: Claude <noreply@anthropic.com>
#1326)

Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
Co-authored-by: Ralph Khreish <Crunchyman-ralph@users.noreply.github.com>
Co-authored-by: claude[bot] <209825114+claude[bot]@users.noreply.github.com>
Resolves #1325
@changeset-bot

changeset-bot bot commented Oct 31, 2025

🦋 Changeset detected

Latest commit: 560a469

The changes in this PR will be included in the next version bump.


@coderabbitai
Contributor

coderabbitai bot commented Oct 31, 2025

Walkthrough

Adds interactive model-setup modules and three OpenAI-compatible providers (Z.ai, LM Studio, OpenAI-compatible), threads Subtask types/IDs through CLI/UI, makes workflow transitions await persistence with Steno, migrates commit flow to WorkflowService, introduces baseURL handling for custom providers, and removes the AutopilotCommand CLI module.

Changes

  • Changelog & Release Configuration (.changeset/*, CHANGELOG.md, package.json): Adds multiple changesets and a pre-release (0.31.0-rc.0), bumps the package version, and records new features/patches, including provider support and PRD/schema fixes.
  • Model Setup Infrastructure (apps/cli/src/commands/models/types.ts, prompts.ts, custom-providers.ts, fetchers.ts, setup.ts, index.ts): New CLI modules for interactive model selection: typed model metadata, searchable prompts, fetching/validation for OpenRouter/Ollama, and custom-provider handling (baseURL, validation, interactive flow).
  • AI Provider Implementations (src/ai-providers/openai-compatible.js, src/ai-providers/zai.js, src/ai-providers/lmstudio.js, src/ai-providers/index.js): Adds an OpenAICompatibleProvider base class and provider subclasses for Z.ai and LM Studio; exports the new providers from the providers index.
  • Model Management Bridge (apps/cli/src/lib/model-management.ts, apps/cli/src/index.ts, apps/extension/package.json): TypeScript bridge wrapping JS model/config modules, new typed interfaces and exports, and CLI export changes to expose interactive setup; extension pinned to 0.31.0-rc.0.
  • Model Config Backend & MCP Tooling (scripts/modules/task-manager/models.js, mcp-server/src/core/direct-functions/models.js, mcp-server/src/tools/models.js): Backend model manager and MCP models tool extended to support per-role baseURL and a new openai-compatible flag, and to preserve/determine baseURLs in setModel flows.
  • Supported Models & Docs (scripts/modules/supported-models.json, docs/models.md): Adds Z.ai GLM models (glm-4.6, glm-4.5, glm-4.5-air, glm-4.5v) to supported models and updates the docs timestamp.
  • Provider Constants & Exports (packages/tm-core/src/common/constants/providers.ts, packages/tm-core/src/common/constants/index.ts): Adds zai to validated providers and LMSTUDIO/OPENAI_COMPATIBLE to custom providers, introduces typed aliases (ValidatedProvider, CustomProvider, Provider), and re-exports providers from tm-core.
  • AI Services Registry (scripts/modules/ai-services-unified.js, scripts/modules/commands.js, scripts/modules/config-manager.js): Registers new provider instances (zai, lmstudio, openai-compatible) in unified services; updates scripts to import provider constants from @tm/core and to use runInteractiveSetup from the CLI.
  • Subtask Types & Display (packages/tm-core/src/modules/tasks/tasks-domain.ts, apps/cli/src/commands/show.command.ts, apps/cli/src/ui/components/task-detail.component.ts, apps/cli/src/utils/ui.ts, apps/mcp/src/tools/tasks/get-task.tool.ts): Broadens types to include Subtask; TasksDomain.get() returns a discriminated union including Subtask; originalTaskId is propagated through CLI/UI to preserve compound IDs (e.g., 104.1).
  • Autopilot Command Removal (apps/cli/src/commands/autopilot.command.ts): Removes the complete AutopilotCommand module and its exported interfaces/classes.
  • Commit Command Refactor (apps/cli/src/commands/autopilot/commit.command.ts): Switches orchestration to WorkflowService APIs, replaces factories with new GitAdapter() and new CommitMessageGenerator(), and sources status/context from the service.
  • Workflow Async & Persistence (packages/tm-core/src/modules/workflow/orchestrators/workflow-orchestrator.ts, packages/tm-core/src/modules/workflow/services/workflow.service.ts, packages/tm-core/src/modules/workflow/managers/workflow-state-manager.ts): Makes orchestrator transitions async and awaited across service flows; switches state persistence to a Steno Writer for atomic, queued writes.
  • PRD Parse Schema (scripts/modules/task-manager/parse-prd/parse-prd-config.js, src/prompts/parse-prd.json): Updates the PRD response schema to allow metadata to be nullable/optional and adjusts prompts accordingly.
  • CLI & UI Adjustments (apps/cli/src/commands/models/*, apps/cli/src/utils/ui.ts, apps/cli/src/ui/components/*): CLI/UI adapted to accept `(Task
  • Tests (tests/unit/*, tests/integration/*): New and updated unit/integration tests for the OpenAI-compatible, Z.ai, and LM Studio providers, models baseURL handling, ai-services-unified changes, and Gemini profile fileMap adjustments.
  • Gemini Profile & Docs (src/profiles/gemini.js, assets/GEMINI.md, tests): Gemini profile fileMap updated (AGENT.md → AGENTS.md; GEMINI.md added) and a new Gemini doc added; tests updated accordingly.
  • PRD & Documentation (CLAUDE.md, CHANGELOG.md): Expands testing/documentation guidelines and updates the CHANGELOG with 0.31.0-rc.0 entries.
  • Dependencies (package.json, packages/tm-core/package.json): Adds @ai-sdk/openai-compatible and steno dependencies; tm-core adds steno to its dependencies.
  • Minor cleanup (scripts/modules/task-manager/analyze-task-complexity.js): Removes unused imports.
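The Steno-based persistence noted above queues writes and applies each one atomically. A minimal sketch of the atomic half (write the full payload to a temp file, then rename it into place) using only Node built-ins; the names below are illustrative, not tm-core's actual API:

```typescript
import { writeFile, rename, readFile } from 'node:fs/promises';
import { tmpdir } from 'node:os';
import { join } from 'node:path';

// Write the complete payload to a sibling temp file, then rename() it over
// the target: readers never observe a partially written state file.
async function atomicWriteJson(path: string, data: unknown): Promise<void> {
	const tmp = `${path}.tmp`;
	await writeFile(tmp, JSON.stringify(data, null, 2), 'utf8');
	await rename(tmp, path);
}

const statePath = join(tmpdir(), 'workflow-state.json');
await atomicWriteJson(statePath, { phase: 'COMMIT_COMPLETE' });
const persisted = JSON.parse(await readFile(statePath, 'utf8'));
console.log(persisted.phase); // prints "COMMIT_COMPLETE"
```

Steno layers write-queueing on top of a similar atomic scheme, so overlapping write() calls are serialized rather than interleaved.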

Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor User
    participant CLI as CLI (show.command)
    participant TasksDomain
    participant Storage
    participant UI as Task Detail Component

    User->>CLI: task-master show 1.1
    CLI->>TasksDomain: get('1.1')
    TasksDomain->>Storage: load task 1
    Storage-->>TasksDomain: Task with subtasks[]
    alt Subtask Found
        TasksDomain-->>CLI: { task: Subtask, isSubtask: true, originalTaskId: '1.1' }
    else Subtask Not Found
        TasksDomain-->>CLI: { task: null, isSubtask: true }
    end
    CLI->>UI: displayTaskDetails(Subtask, originalTaskId='1.1')
    UI->>UI: displayId = originalTaskId || task.id
    UI-->>User: Rendered details with ID "1.1"
```
```mermaid
sequenceDiagram
    participant CommitCmd as Commit Command
    participant WorkflowSvc as WorkflowService
    participant Orchestrator as WorkflowOrchestrator
    participant StateManager as StateManager
    participant StenoWriter as Steno Writer

    CommitCmd->>WorkflowSvc: commit(...)
    WorkflowSvc->>Orchestrator: await transition(COMMIT_COMPLETE)
    Orchestrator->>StateManager: persist()
    StateManager->>StateManager: ensureWriter()
    StateManager->>StenoWriter: write(state)
    StenoWriter->>StenoWriter: queue & atomic write
    StenoWriter-->>StateManager: write complete
    StateManager-->>Orchestrator: persist complete
    Orchestrator-->>WorkflowSvc: transition resolved
    WorkflowSvc-->>CommitCmd: commit completed
```
```mermaid
sequenceDiagram
    actor User
    participant CLI as models setup
    participant PromptModule
    participant CustomProviders
    participant Config

    User->>CLI: task-master models --setup
    CLI->>PromptModule: buildPromptChoices('main')
    PromptModule-->>CLI: choices (standard + custom)
    CLI->>PromptModule: promptForModel('main')
    PromptModule-->>User: interactive searchable prompt
    User->>CLI: selects OpenAI-compatible provider
    CLI->>CustomProviders: handleCustomProvider('OPENAI_COMPATIBLE')
    CustomProviders->>User: prompt for baseURL and model id
    CustomProviders-->>CLI: { modelId, provider, baseURL, success: true }
    CLI->>Config: setModel('main', modelId, { provider, baseURL })
    Config-->>CLI: Success
    CLI-->>User: Model configured
```

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~75 minutes

  • Pay special attention to:
    • TasksDomain.get() discriminated-union exhaustiveness and all consuming call-sites.
    • All new async transitions and their await sites (WorkflowOrchestrator, WorkflowService).
    • Persistence migration to Steno in WorkflowStateManager (concurrency, atomicity).
    • BaseURL handling for OPENAI_COMPATIBLE and LMSTUDIO across CLI, scripts, and MCP tool persistence.
    • Removal of AutopilotCommand — confirm expected behavior is preserved via service-based flows.
    • Tests for new providers and models baseURL handling to ensure behavior matches runtime logic.

Possibly related PRs

Suggested reviewers

  • eyaltoledano

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
  • Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The title "Release 0.31.0" is directly and fully related to the changeset. The pull request contains a comprehensive bundle of changes, including multiple new features (custom OpenAI-compatible providers, Z.ai GLM model support, LM Studio integration), bug fixes (subtask ID display, MCP stability), and infrastructure updates (provider implementations, workflow service improvements). The package version is bumped to 0.31.0-rc.0, CHANGELOG.md includes a new 0.31.0-rc.0 release block, and multiple changeset files document the release contents. The title is specific, clear, and concise: stating the version number makes it immediately understandable that this is a coordinated release PR.
  • Docstring Coverage: ✅ Passed. Docstring coverage is 100.00%, which meets the required threshold of 80.00%.

📜 Recent review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 2f003c1 and 560a469.

📒 Files selected for processing (1)
  • .changeset/pre.json (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • .changeset/pre.json
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: update-docs

Comment @coderabbitai help to get the list of available commands and usage tips.

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 6

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
scripts/modules/commands.js (1)

2877-3055: Keep long-form flags in kebab-case

The CLI guidelines require long-form options to use kebab-case. Introducing --baseURL breaks that convention and also leaves us with mixed casing in Commander’s parsed fields (options.baseURL). Please rename the flag to --base-url, update the help text, and switch the option reads to options.baseUrl so everything stays consistent.
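For context, Commander derives the property name on the parsed options object by camel-casing the long flag, which is why --base-url surfaces as options.baseUrl. A tiny illustrative sketch of that mapping (not Commander's actual implementation):

```typescript
// Kebab-case long flags become camelCase option keys; a flag that is already
// mixed-case (like --baseURL) passes through unchanged, which is the
// inconsistency this comment calls out.
function optionKey(longFlag: string): string {
	return longFlag
		.replace(/^--/, '')
		.replace(/-([a-z])/g, (_match, ch: string) => ch.toUpperCase());
}

console.log(optionKey('--base-url')); // prints "baseUrl"
console.log(optionKey('--baseURL')); // prints "baseURL"
```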

```diff
-		.option(
-			'--baseURL <url>',
-			'Custom base URL for openai-compatible, lmstudio, or ollama providers (e.g., http://localhost:8000/v1)'
-		)
+		.option(
+			'--base-url <url>',
+			'Custom base URL for openai-compatible, lmstudio, or ollama providers (e.g., http://localhost:8000/v1)'
+		)
@@
-  $ task-master models --set-main qwen3-vl-4b --lmstudio --baseURL http://localhost:8000/v1 # Set LM Studio model with custom base URL
-  $ task-master models --set-main my-model --openai-compatible --baseURL http://localhost:8000/v1 # Set custom OpenAI-compatible model with custom endpoint
+  $ task-master models --set-main qwen3-vl-4b --lmstudio --base-url http://localhost:8000/v1 # Set LM Studio model with custom base URL
+  $ task-master models --set-main my-model --openai-compatible --base-url http://localhost:8000/v1 # Set custom OpenAI-compatible model with custom endpoint
@@
-					baseURL: options.baseURL
+					baseURL: options.baseUrl
@@
-					baseURL: options.baseURL
+					baseURL: options.baseUrl
@@
-					baseURL: options.baseURL
+					baseURL: options.baseUrl
```
🧹 Nitpick comments (1)
apps/cli/src/commands/models/fetchers.ts (1)

16-60: Add optional OpenRouter auth header and friendlier 401 messaging

Could we include the optional OPENROUTER_API_KEY header and emit a clearer hint when the API replies with 401? Without the header, teams running against private or rate-limited OpenRouter accounts still hit a 401 even though they’ve configured the key, and the current generic message doesn’t tell them how to recover. I’d lean toward wiring the env var straight into headers.Authorization (only when set) and special‑casing the 401 response so the CLI explains that the key is missing or invalid.

```diff
-	const options = {
+	const headers: Record<string, string> = {
+		Accept: 'application/json'
+	};
+	const apiKey = process.env.OPENROUTER_API_KEY;
+	if (apiKey) {
+		headers.Authorization = `Bearer ${apiKey}`;
+	}
+
+	const options = {
 		hostname: 'openrouter.ai',
 		path: '/api/v1/models',
 		method: 'GET',
-		headers: {
-			Accept: 'application/json'
-		}
+		headers
@@
-				} else {
-					resolve({
-						success: false,
-						error: `OpenRouter API returned status ${res.statusCode}`
-					});
-				}
+				} else {
+					resolve({
+						success: false,
+						error:
+							res.statusCode === 401
+								? 'OpenRouter API returned 401. Set OPENROUTER_API_KEY or verify that the key has access to /api/v1/models.'
+								: `OpenRouter API returned status ${res.statusCode}`
+					});
+				}
```
Based on learnings

"task-master-ai": patch
---

Improve parse PRD schema for better llm model compatiblity

⚠️ Potential issue | 🟡 Minor

Fix typo in changeset description.

Line 5 contains a spelling error: "compatiblity" should be "compatibility".

Apply this diff:

```diff
-Improve parse PRD schema for better llm model compatiblity
+Improve parse PRD schema for better llm model compatibility
```
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
```diff
-Improve parse PRD schema for better llm model compatiblity
+---
+"package-name": patch
+---
+Improve parse PRD schema for better llm model compatibility
```

🤖 Prompt for AI Agents
In .changeset/fix-metadata-nullable.md around line 5, there's a typo in the
changeset description: replace "compatiblity" with the correct spelling
"compatibility" so the description reads "Improve parse PRD schema for better
llm model compatibility".

Comment on lines +120 to +133

```typescript
		choices = [
			...systemOptions,
			new Separator('\n── Standard Models ──'),
			...roleChoices,
			new Separator('\n── Custom Providers ──'),
			...customProviderOptions
		];
		defaultIndex =
			currentChoiceIndex !== -1
				? currentChoiceIndex + systemLength + 1
				: noChangeOption
					? 1
					: 0;
	}
```

⚠️ Potential issue | 🟠 Major

Default fallback selects “Cancel” instead of “No change”.

When the current model isn’t in the available list we drop into the fallback branch, but the hardcoded ?: 1 points at the “Cancel” row. Pressing Enter immediately aborts setup even though we still render “No change” as the first option. Please pick the actual index of the control entries instead.

```diff
-		defaultIndex =
-			currentChoiceIndex !== -1
-				? currentChoiceIndex + systemLength + 1
-				: noChangeOption
-					? 1
-					: 0;
+		if (currentChoiceIndex !== -1) {
+			defaultIndex = currentChoiceIndex + systemLength + 1;
+		} else if (noChangeOption) {
+			defaultIndex = systemOptions.indexOf(noChangeOption);
+		} else {
+			defaultIndex = systemOptions.indexOf(cancelOption);
+		}
```

Committable suggestion skipped: line range outside the PR's diff.

"typescript": "^5.9.2",
"@tm/core": "*",
"task-master-ai": "*"
"task-master-ai": "0.31.0-rc.0"

🛠️ Refactor suggestion | 🟠 Major

🧩 Analysis chain

Verify version pinning for local workspace package.

The task-master-ai dependency is pinned to 0.31.0-rc.0 instead of using "*". According to learnings, local workspace packages should use "*" to ensure the local version is always used rather than attempting registry resolution.

Is this intentional for RC testing, or should it remain "*"?


🏁 Script executed:

```shell
#!/bin/bash
# Check if other workspace packages use "*" for task-master-ai
rg -g 'package.json' -A2 '"task-master-ai"' | grep -E '(task-master-ai|version)'
```

Length of output: 195


I need to search more broadly to understand how other workspace packages reference task-master-ai and whether this RC pinning is intentional.





🏁 Script executed:

```shell
#!/bin/bash
# Find all references to task-master-ai in package.json files across workspace
rg -g 'package.json' '"task-master-ai"' -B1 -A1
```

Length of output: 368


🏁 Script executed:

```shell
#!/bin/bash
# Check if task-master-ai is listed as a workspace package
fd 'package.json' -x grep -l '"workspaces"' | head -1 | xargs cat | grep -A30 '"workspaces"'
```

Length of output: 1675


I need to verify whether task-master-ai is a published package or internal workspace package to determine if the explicit version pinning is intentional.





🏁 Script executed:

```shell
#!/bin/bash
# Check root package.json to see if task-master-ai is private or published
cat package.json | grep -E '("name"|"private"|"publishConfig")'
```

Length of output: 26


🏁 Script executed:

```shell
#!/bin/bash
# Look for npm or release configuration
ls -la | grep -E '^\.|package|npm|release' && echo "---" && fd -e 'npmrc|npmignore|gitignore' -x cat
```

Length of output: 262



---

🏁 Script executed:

```shell
#!/bin/bash
# Get full root package.json to see private field and package structure
cat package.json
```
Length of output: 5056


Use "*" for task-master-ai to match monorepo workspace pattern.

The task-master-ai package is part of the workspace (root-level package included in "workspaces") and should follow the same dependency constraint pattern as other workspace packages like @tm/core, which correctly uses "*". Per your established learning, workspace packages should use "*" to ensure the local version is consumed rather than attempting registry resolution. The explicit 0.31.0-rc.0 pin creates an inconsistency within the monorepo dependency model.

Change apps/extension/package.json line 278 from "task-master-ai": "0.31.0-rc.0" to "task-master-ai": "*".

🤖 Prompt for AI Agents
In apps/extension/package.json around line 278, the dependency for
"task-master-ai" is pinned to "0.31.0-rc.0" which conflicts with the monorepo
workspace pattern; change the version string to "*" so the local workspace
package is used (replace the current "task-master-ai": "0.31.0-rc.0" entry with
"task-master-ai": "*").

Comment on lines +688 to +698
```javascript
	if (
		computedBaseURL &&
		(determinedProvider === CUSTOM_PROVIDERS.OPENAI_COMPATIBLE ||
			determinedProvider === CUSTOM_PROVIDERS.LMSTUDIO ||
			determinedProvider === CUSTOM_PROVIDERS.OLLAMA)
	) {
		currentConfig.models[role].baseURL = computedBaseURL;
	} else {
		// Remove baseURL when switching to a provider that doesn't need it
		delete currentConfig.models[role].baseURL;
	}
```

⚠️ Potential issue | 🔴 Critical

Preserve required baseURL for OPENAI-compatible/LM Studio/Ollama models

When the target model already exists in getAvailableModels (e.g. one of the seeded OPENAI_COMPATIBLE recommendations), the branch above sets determinedProvider without ever populating computedBaseURL. We then fall into this else path, delete baseURL, and persist a config that points at an OpenAI-compatible (or LM Studio/Ollama) provider with no endpoint. The very next call to the AI service blows up because it builds a request against undefined.

You can reproduce today by switching between two OPENAI-compatible models that ship in supported-models.json: the CLI doesn’t collect a new baseURL, setModel accepts the change, strips the existing base URL, and the provider becomes unusable.

Please ensure we always carry forward (or default) the base URL for providers that require one before we hit this block, regardless of whether modelData was found.

```diff
-	if (
-		computedBaseURL &&
-		(determinedProvider === CUSTOM_PROVIDERS.OPENAI_COMPATIBLE ||
-			determinedProvider === CUSTOM_PROVIDERS.LMSTUDIO ||
-			determinedProvider === CUSTOM_PROVIDERS.OLLAMA)
-	) {
-		currentConfig.models[role].baseURL = computedBaseURL;
-	} else {
-		// Remove baseURL when switching to a provider that doesn't need it
-		delete currentConfig.models[role].baseURL;
-	}
+	if (!computedBaseURL) {
+		if (determinedProvider === CUSTOM_PROVIDERS.OPENAI_COMPATIBLE) {
+			const existingBaseURL = getBaseUrlForRole(role, projectRoot);
+			if (existingBaseURL) {
+				computedBaseURL = existingBaseURL;
+			} else {
+				throw new Error(
+					'Base URL is required for OpenAI-compatible providers. Please provide a baseURL.'
+				);
+			}
+		} else if (determinedProvider === CUSTOM_PROVIDERS.LMSTUDIO) {
+			computedBaseURL =
+				getBaseUrlForRole(role, projectRoot) || 'http://localhost:1234/v1';
+		} else if (determinedProvider === CUSTOM_PROVIDERS.OLLAMA) {
+			computedBaseURL =
+				getBaseUrlForRole(role, projectRoot) || 'http://localhost:11434/api';
+		}
+	}
+
+	if (
+		determinedProvider === CUSTOM_PROVIDERS.OPENAI_COMPATIBLE ||
+		determinedProvider === CUSTOM_PROVIDERS.LMSTUDIO ||
+		determinedProvider === CUSTOM_PROVIDERS.OLLAMA
+	) {
+		currentConfig.models[role].baseURL = computedBaseURL;
+	} else {
+		delete currentConfig.models[role].baseURL;
+	}
```

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In scripts/modules/task-manager/models.js around lines 688 to 698,
computedBaseURL can be empty when determinedProvider is OPENAI_COMPATIBLE,
LMSTUDIO, or OLLAMA which causes the code to delete an existing baseURL and
persist a broken config; ensure computedBaseURL is preserved or defaulted before
this branch by setting computedBaseURL = currentConfig.models[role]?.baseURL ||
getDefaultBaseURLForProvider(determinedProvider) when the provider requires an
endpoint (OPENAI_COMPATIBLE/LMSTUDIO/OLLAMA), so the if branch sees a valid
value and the else no longer strips a required baseURL.

Comment on lines +34 to +37
```javascript
			supportsStructuredOutputs: true
			// LM Studio only supports json_schema mode, not json_object mode
			// Disable native structured outputs to use alternative strategies
		});
```

⚠️ Potential issue | 🟡 Minor

Clarify the contradictory comment about structured outputs.

Line 34 sets supportsStructuredOutputs: true, but the comment on lines 35-36 states "Disable native structured outputs to use alternative strategies." This is contradictory and creates confusion about the intended behavior.

Based on the comment stating LM Studio only supports json_schema mode, setting supportsStructuredOutputs: true appears correct. However, the comment needs updating to accurately reflect the configuration.

Apply this diff to fix the comment:

```diff
 		super({
 			name: 'LM Studio',
 			apiKeyEnvVar: 'LMSTUDIO_API_KEY',
 			requiresApiKey: false, // Local server, no API key needed
 			defaultBaseURL: 'http://localhost:1234/v1',
 			supportsStructuredOutputs: true
-			// LM Studio only supports json_schema mode, not json_object mode
-			// Disable native structured outputs to use alternative strategies
+			// LM Studio supports json_schema mode for structured outputs
 		});
```
📝 Committable suggestion

Suggested change
```diff
-		supportsStructuredOutputs: true
-		// LM Studio only supports json_schema mode, not json_object mode
-		// Disable native structured outputs to use alternative strategies
-	});
+		supportsStructuredOutputs: true
+		// LM Studio supports json_schema mode for structured outputs
+	});
```
🤖 Prompt for AI Agents
In src/ai-providers/lmstudio.js around lines 34 to 37, the comment contradicts
the code: supportsStructuredOutputs is set to true but the comment says "Disable
native structured outputs"; update the comment to reflect the intended behavior
— that LM Studio supports structured outputs only in json_schema mode (not
json_object), so we enable supportsStructuredOutputs and note that json_schema
mode is used and json_object is unsupported; keep the code as is and replace the
contradictory comment with a concise explanation mentioning json_schema-only
support.

Comment on lines +27 to +29
```javascript
		it('should disable structured outputs (LM Studio only supports json_schema mode)', () => {
			expect(provider.supportsStructuredOutputs).toBe(true);
		});
```

⚠️ Potential issue | 🟡 Minor

Fix the misleading test description.

The test description states "should disable structured outputs" but the assertion expects supportsStructuredOutputs to be true. This contradicts the assertion and mirrors the confusing comment in src/ai-providers/lmstudio.js.

Apply this diff to correct the description:

```diff
-		it('should disable structured outputs (LM Studio only supports json_schema mode)', () => {
+		it('should support structured outputs (LM Studio supports json_schema mode)', () => {
 			expect(provider.supportsStructuredOutputs).toBe(true);
 		});
```
📝 Committable suggestion

Suggested change
```diff
-		it('should disable structured outputs (LM Studio only supports json_schema mode)', () => {
-			expect(provider.supportsStructuredOutputs).toBe(true);
-		});
+		it('should support structured outputs (LM Studio supports json_schema mode)', () => {
+			expect(provider.supportsStructuredOutputs).toBe(true);
+		});
```
🤖 Prompt for AI Agents
In tests/unit/ai-providers/lmstudio.test.js around lines 27 to 29, the test
description incorrectly says "should disable structured outputs" while the
assertion expects provider.supportsStructuredOutputs toBe(true); update the test
description to accurately reflect the expectation (e.g., "should enable
structured outputs (LM Studio supports json_schema mode)") so the description
matches the assertion and removes the contradiction.

@Crunchyman-ralph Crunchyman-ralph merged commit 98df39c into main Nov 1, 2025
24 checks passed
@coderabbitai coderabbitai bot mentioned this pull request Nov 1, 2025
16 tasks
sfc-gh-dflippo pushed a commit to sfc-gh-dflippo/task-master-ai that referenced this pull request Dec 4, 2025
