
feat(provider): add ChatGPT OAuth (gpt-5) subscription provider via AI SDK v4#1158

Closed
ben-vargas wants to merge 1 commit into eyaltoledano:next from ben-vargas:feat-codex-cli

Conversation


@ben-vargas ben-vargas commented Aug 25, 2025

Summary

  • Adds support for OpenAI GPT-5 via ChatGPT OAuth (no API key required)
  • Centralizes JSON extraction logic for better reliability across providers
  • Properly handles provider-specific parameter limitations

Key Features

ChatGPT OAuth Provider

  • New provider using ai-sdk-provider-chatgpt-oauth package
  • Uses ChatGPT Plus/Pro/Teams subscription instead of API keys
  • OAuth tokens managed via Codex CLI (npx @openai/codex login)
  • Supports reasoning controls (reasoningEffort, reasoningSummary)
  • Automatically excludes unsupported parameters (maxTokens, temperature)
  • Added --chatgpt-oauth CLI flag for explicit provider selection
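Putting the setup steps above together, the flow is two commands (both taken from this PR's description and test plan):

```shell
# One-time: authenticate via the Codex CLI, which manages the OAuth tokens
npx @openai/codex login

# Then select gpt-5 explicitly through the new provider flag
task-master models --set-main gpt-5 --chatgpt-oauth
```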

JSON Extraction Refactoring

  • Centralized tolerant JSON extraction using jsonc-parser
  • Fallback to jsonrepair for salvageable outputs
  • Reused across multiple providers for consistency
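The PR centralizes this in src/utils/json-extract.js on top of jsonc-parser with a jsonrepair fallback. As a rough, dependency-free illustration of the boundary-aware scan (a sketch only, not the PR's actual implementation):

```javascript
// Simplified sketch of boundary-aware JSON extraction. The real utility
// uses jsonc-parser plus a jsonrepair fallback; this version only shows
// the balanced-scan idea that tolerates surrounding prose.
function extractJsonTolerant(text) {
	// Find the earliest plausible JSON start ({ or [).
	const starts = ['{', '['].map((c) => text.indexOf(c)).filter((i) => i !== -1);
	if (starts.length === 0) return null;
	const start = Math.min(...starts);

	// Scan forward tracking nesting depth and string state, so braces
	// inside string literals don't terminate the scan early.
	let depth = 0;
	let inString = false;
	let escaped = false;
	for (let i = start; i < text.length; i++) {
		const ch = text[i];
		if (inString) {
			if (escaped) escaped = false;
			else if (ch === '\\') escaped = true;
			else if (ch === '"') inString = false;
			continue;
		}
		if (ch === '"') inString = true;
		else if (ch === '{' || ch === '[') depth++;
		else if (ch === '}' || ch === ']') {
			depth--;
			if (depth === 0) {
				try {
					return JSON.parse(text.slice(start, i + 1));
				} catch {
					return null; // the real utility would try jsonrepair here
				}
			}
		}
	}
	return null;
}

// Prose before and after the JSON payload is ignored.
console.log(extractJsonTolerant('Here is the JSON: {"a": [1, 2]} done'));
```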

Configuration Improvements

  • Automatically adds reasoning defaults when ChatGPT OAuth is selected
  • Excludes unsupported parameters to prevent confusion
  • Clear documentation of supported/unsupported features
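A minimal sketch of that config adjustment (the function name and the default values below are assumptions for illustration; the actual logic lives in setModel):

```javascript
// Hypothetical sketch of the adjustment applied when the chatgpt-oauth
// provider is selected. Field handling mirrors the PR's description:
// drop unsupported params, inject reasoning defaults.
function adjustRoleConfigForProvider(roleConfig, provider) {
	if (provider !== 'chatgpt-oauth') return roleConfig;

	// The ChatGPT backend does not support these, so remove them from
	// the stored config to prevent confusion.
	const { maxTokens, temperature, ...rest } = roleConfig;

	// Inject reasoning defaults (values assumed); users can override them.
	return {
		...rest,
		reasoningEffort: rest.reasoningEffort ?? 'medium',
		reasoningSummary: rest.reasoningSummary ?? 'auto'
	};
}
```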

Technical Details

  • Properly handles ChatGPT backend requirements (system→user message demotion)
  • Robust object generation with JSON validation via Zod
  • Zero cost tracking (subscription-based model)
  • Full test coverage and CI compliance
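The system→user demotion mentioned above can be sketched as follows (a simplified illustration; the actual preprocessing lives inside the provider):

```javascript
// The ChatGPT backend does not accept a 'system' role, so system
// prompts are re-sent as user messages before the request goes out.
function demoteSystemMessages(messages) {
	return messages.map((m) =>
		m.role === 'system' ? { ...m, role: 'user' } : m
	);
}
```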

Documentation

  • Comprehensive usage guide at docs/examples/chatgpt-oauth-usage.md
  • Clear explanation of supported parameters and limitations
  • Setup instructions for authentication

Test Plan

  • All existing tests pass (1244 tests)
  • Code formatting verified (Biome)
  • Manual testing with task-master models --set-main gpt-5 --chatgpt-oauth
  • Config correctly excludes unsupported parameters when ChatGPT OAuth is selected
  • Reasoning controls properly added to config for ChatGPT OAuth provider
  • CLI flag --chatgpt-oauth works correctly for provider selection

Breaking Changes

None - this is a new feature that doesn't affect existing functionality.

Related Issues

Adds ChatGPT OAuth support as discussed in community feedback about subscription-based model access.

Summary by CodeRabbit

  • New Features

    • Added ChatGPT OAuth provider (gpt-5) as a no-API-key option for main/fallback roles with optional reasoning controls and structured JSON outputs.
    • New CLI flag to select ChatGPT OAuth models and an interactive model setup flow.
  • Documentation

    • README updates and a new usage guide documenting ChatGPT OAuth setup, auth options, and usage notes.
  • Bug Fixes

    • More tolerant JSON extraction to improve parsing of free-form model responses.
  • Tests

    • Unit tests added for JSON extraction and ChatGPT OAuth provider behavior.
  • Chores

    • Supported models updated and new ChatGPT OAuth dependency added.


changeset-bot bot commented Aug 25, 2025

🦋 Changeset detected

Latest commit: 8e7a53f

The changes in this PR will be included in the next version bump.



coderabbitai bot commented Aug 25, 2025

Walkthrough

Adds a ChatGPT OAuth provider (chatgpt-oauth/gpt-5) with no API key, integrates it into providers, CLI, model resolution, and unified services, forwards reasoning options, adds tolerant JSON extraction, updates supported models and deps, and includes docs and tests.

Changes

  • Provider implementation & exports (src/ai-providers/chatgpt-oauth.js, src/ai-providers/index.js, src/constants/providers.js, package.json, scripts/modules/supported-models.json): New ChatGPTOAuthProvider (lazy SDK import, no required API key, JSON-object generation/validation, reasoning controls). Export added; CUSTOM_PROVIDERS.CHATGPT_OAUTH added; dependency ai-sdk-provider-chatgpt-oauth@1.0.0-ai-sdk-v4 added; supported-models gains a chatgpt-oauth gpt-5 entry (zero cost).
  • Unified integration & model resolution (scripts/modules/ai-services-unified.js, scripts/modules/task-manager/models.js, scripts/modules/config-manager.js): Registers chatgpt-oauth in unified services; forwards reasoningEffort/reasoningSummary per-call; setModel supports CHATGPT_OAUTH (removes maxTokens/temperature, injects reasoning defaults); config treats chatgpt-oauth as a provider without a required API key.
  • JSON extraction utility & provider usage (src/utils/json-extract.js, src/ai-providers/gemini-cli.js): Adds extractJsonTolerant(text) using jsonc-parser + jsonrepair fallback and boundary-aware extraction; replaces Gemini CLI in-method extraction with the centralized tolerant extractor (preserves legacy tail behavior).
  • CLI & commands (scripts/modules/commands.js): Adds --chatgpt-oauth flag to the models CLI, updates mutual-exclusion checks and provider-hint resolution, updates examples/help, and adjusts interactive setup messaging and summaries.
  • Docs & changelog (.changeset/add-chatgpt-oauth-provider.md, README.md, docs/examples/chatgpt-oauth-usage.md): Adds changeset; README updated to list ChatGPT OAuth as a no-key option; new example doc describes auth, usage, reasoning controls, constraints, and telemetry notes.
  • Tests (tests/unit/ai-services-unified.test.js, tests/unit/utils/json-extract.test.js): Adds mocked ChatGPTOAuthProvider to unified services tests (no-key behavior + reasoning params) and comprehensive unit tests for extractJsonTolerant.
  • Misc — CLI messages / formatting (README.md, minor docs edits): Minor formatting and doc-link insertions for ChatGPT OAuth setup references.

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant TM as Task Manager
  participant US as Unified Services
  participant P as ChatGPTOAuthProvider
  participant SDK as ChatGPT OAuth SDK

  rect rgb(245,248,255)
    note over TM: setModel(providerHint=CHATGPT_OAUTH)\nremove maxTokens/temperature\nset reasoning defaults
    TM->>US: generateObject(modelId=gpt-5, provider=chatgpt-oauth, schema, opts)
  end

  US->>P: generateObject(messages, schema, {reasoningEffort, reasoningSummary})
  rect rgb(244,252,244)
    note over P: Preprocess messages (demote system→user)\nprepend JSON-enforcement prompt\nvalidate inputs
    P->>SDK: getClient({baseURL?, reasoningEffort?, reasoningSummary?})
    SDK-->>P: provider instance
    P->>SDK: generateText(messages)
    SDK-->>P: text
    P->>P: extractJsonTolerant(text) → parse + validate schema
    P-->>US: { object, usage }
  end

  US-->>TM: Result

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Suggested reviewers

  • Crunchyman-ralph
  • eyaltoledano



@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 14

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (9)
package.json (1)

13-27: Add missing test scripts to comply with repo guidelines.

Per coding guidelines, package.json should expose test, test:watch, test:coverage, test:unit, test:integration, test:e2e, and test:ci. You’re missing test:unit, test:integration, and test:ci.

Apply:

   "scripts": {
     "test": "node --experimental-vm-modules node_modules/.bin/jest",
     "test:fails": "node --experimental-vm-modules node_modules/.bin/jest --onlyFailures",
     "test:watch": "node --experimental-vm-modules node_modules/.bin/jest --watch",
     "test:coverage": "node --experimental-vm-modules node_modules/.bin/jest --coverage",
+    "test:unit": "node --experimental-vm-modules node_modules/.bin/jest --selectProjects unit || node --experimental-vm-modules node_modules/.bin/jest --testPathPattern=tests/unit",
+    "test:integration": "node --experimental-vm-modules node_modules/.bin/jest --selectProjects integration || node --experimental-vm-modules node_modules/.bin/jest --testPathPattern=tests/integration",
     "test:e2e": "./tests/e2e/run_e2e.sh",
     "test:e2e-report": "./tests/e2e/run_e2e.sh --analyze-log",
+    "test:ci": "npm run test:coverage --silent && npm run format-check --silent",
     "prepare": "chmod +x bin/task-master.js mcp-server/server.js",
     "changeset": "changeset",
     "release": "changeset publish",
     "inspector": "npx @modelcontextprotocol/inspector node mcp-server/server.js",
     "mcp-server": "node mcp-server/server.js",
     "format-check": "biome format .",
     "format": "biome format . --write"
   },
src/utils/json-extract.js (1)

15-111: Add unit tests for extractor edge cases (utils should target ≥90% coverage).

Cover at least:

  • Fenced blocks: json, jsonc, json5, js
  • Prefixed prose (“Here’s the JSON: …”)
  • Assignments: const/let/var x = {...};
  • Trailing text after valid JSON
  • Strings with escaped quotes and backslashes
  • JSONC comments with braces inside comments
  • Very large JSON near the scan limit
  • Arrays and objects starting earliest in the string
  • Fallback jsonrepair path on malformed inputs

I can scaffold tests under tests/unit/utils/json-extract.test.js. Want me to push a test file?

tests/unit/ai-services-unified.test.js (2)

125-126: Update mocked no-key providers to include ‘chatgpt-oauth’.

To align tests with the new provider behavior, include it in providersWithoutApiKeys. This prevents false negatives if you add tests for ChatGPT OAuth.

Apply:

-  // Providers without API keys
-  providersWithoutApiKeys: ['ollama', 'bedrock', 'gemini-cli']
+  // Providers without API keys
+  providersWithoutApiKeys: ['ollama', 'bedrock', 'gemini-cli', 'chatgpt-oauth']

296-301: Missing unit tests for the new provider behavior.

Please add focused tests that verify:

  • No API-key check is performed when main provider is chatgpt-oauth.
  • Unsupported params (maxTokens, temperature) are excluded for this provider.
  • Provider-specific reasoning options (reasoningEffort, reasoningSummary) are forwarded.
  • Telemetry cost is 0 for this provider’s model (if your model map includes it in tests).

Below is a minimal test you can add under the same describe('generateTextService') block to exercise the “no API key” behavior; adapt as needed:

test('uses ChatGPT OAuth without API key check and succeeds', async () => {
  // Arrange provider and model
  mockGetMainProvider.mockReturnValue('chatgpt-oauth');
  mockGetMainModelId.mockReturnValue('gpt-5');

  // Even if API keys are "missing", this provider should still be used
  mockIsApiKeySet.mockReturnValue(false);

  // Return a simple success from the provider
  const { ChatGPTOAuthProvider } = await import('../../src/ai-providers/index.js');
  const providerInstance = ChatGPTOAuthProvider.mock.results[0]?.value; // first constructed instance
  if (providerInstance) {
    providerInstance.generateText.mockResolvedValue({
      text: 'OAuth main response',
      usage: { inputTokens: 12, outputTokens: 18, totalTokens: 30 }
    });
  }

  const params = { role: 'main', prompt: 'Hello via OAuth' };
  const result = await generateTextService(params);

  expect(result.mainResult).toBe('OAuth main response');
  // Ensure we attempted to call the OAuth provider
  expect(providerInstance.generateText).toHaveBeenCalledTimes(1);
});

If you prefer explicit access to the mock instance, return a named instance from the unstable mock (i.e., const mockChatGPTOAuthProvider = { ... } and ChatGPTOAuthProvider: jest.fn(() => mockChatGPTOAuthProvider)), then reference it directly in assertions.

docs/examples/chatgpt-oauth-usage.md (1)

86-92: Minor copy edits; call out subscription-cost semantics.

Clarify “cost is 0” means billed via subscription (not metered per token).

Apply:

-- Telemetry reports token counts; cost is shown as 0 since this uses a subscription.
+- Telemetry reports token counts; cost is shown as 0 because usage is covered by your ChatGPT subscription (not per‑token billing).
scripts/modules/config-manager.js (2)

733-783: Ensure MCP API-key status also treats ChatGPT OAuth (and Gemini CLI) as no-key

getMcpApiKeyStatus() returns true early for ollama and claude-code, but not for chatgpt-oauth or gemini-cli. This causes misleading “missing key” status in CLI/MCP reports for providers that don’t use API keys. Add explicit early-true cases.

Apply this diff inside the switch to align with isApiKeySet():

       case 'ollama':
         return true; // No key needed
       case 'claude-code':
         return true; // No key needed
+      case 'gemini-cli':
+        return true; // No key needed
+      case 'chatgpt-oauth':
+        return true; // No key needed
       case 'mistral':
         apiKeyToCheck = mcpEnv.MISTRAL_API_KEY;
         placeholderValue = 'YOUR_MISTRAL_API_KEY_HERE';

646-652: DRY: reuse the exported providersWithoutApiKeys constant

providersWithoutApiKeys is defined here and also exported later. Consider consolidating to a single top-level/exported constant and reusing it in isApiKeySet() to avoid drift.

Example change:

-  const providersWithoutApiKeys = [
-    CUSTOM_PROVIDERS.OLLAMA,
-    CUSTOM_PROVIDERS.BEDROCK,
-    CUSTOM_PROVIDERS.MCP,
-    CUSTOM_PROVIDERS.GEMINI_CLI,
-    CUSTOM_PROVIDERS.CHATGPT_OAUTH
-  ];
+  // Reuse the shared exported list to avoid duplication
+  const providersWithoutApiKeysLocal = providersWithoutApiKeys;

And update the includes() check accordingly.

scripts/modules/ai-services-unified.js (1)

640-655: Reasoning options are never forwarded (roleConfig lacks these fields) + exclude unsupported token params for ChatGPT OAuth

roleConfig only contains { provider, modelId } from _getRoleConfiguration(), so roleConfig?.reasoningEffort/… are always undefined. Also, we should not pass maxTokens/temperature to chatgpt-oauth (backend ignores or may reject them). Fetch the full role config and gate token params per provider.

Apply this diff:

@@
-      const callParams = {
-        apiKey,
-        modelId,
-        maxTokens: roleParams.maxTokens,
-        temperature: roleParams.temperature,
-        messages,
-        ...(baseURL && { baseURL }),
-        // Pass through optional provider-specific options for chatgpt-oauth
-        // If present in roleConfig, they will be forwarded to the provider's getClient.
-        ...(roleConfig?.reasoningEffort !== undefined && {
-          reasoningEffort: roleConfig.reasoningEffort
-        }),
-        ...(roleConfig?.reasoningSummary !== undefined && {
-          reasoningSummary: roleConfig.reasoningSummary
-        }),
+      // Pull the full role config for provider-specific extras (e.g., reasoning controls)
+      const fullRoleCfg = getModelConfigForRole(currentRole, effectiveProjectRoot);
+      const includeTokenParams = providerName?.toLowerCase() !== 'chatgpt-oauth';
+
+      const callParams = {
+        apiKey,
+        modelId,
+        ...(includeTokenParams && {
+          maxTokens: roleParams.maxTokens,
+          temperature: roleParams.temperature,
+        }),
+        messages,
+        ...(baseURL && { baseURL }),
+        // Forward optional reasoning controls when configured
+        ...(fullRoleCfg?.reasoningEffort !== undefined && {
+          reasoningEffort: fullRoleCfg.reasoningEffort
+        }),
+        ...(fullRoleCfg?.reasoningSummary !== undefined && {
+          reasoningSummary: fullRoleCfg.reasoningSummary
+        }),
         ...((serviceType === 'generateObject' ||
           serviceType === 'streamObject') && { schema, objectName }),
         ...providerSpecificParams,
         ...restApiParams
       };

And add the missing import:

@@
   getVertexProjectId,
   isApiKeySet,
-  providersWithoutApiKeys
+  providersWithoutApiKeys,
+  getModelConfigForRole
 } from './config-manager.js';
scripts/modules/commands.js (1)

3785-3797: Mutual-exclusion check misses --azure and --vertex flags

Users can currently pass --azure or --vertex along with --chatgpt-oauth without tripping the exclusivity guard. Include those flags in the conflict set for consistency with the mapping logic below.

Apply this diff:

-      const providerFlags = [
-        options.openrouter,
-        options.ollama,
-        options.bedrock,
-        options.claudeCode,
-        options.geminiCli,
-        options.chatgptOauth
-      ].filter(Boolean).length;
+      const providerFlags = [
+        options.openrouter,
+        options.ollama,
+        options.bedrock,
+        options.claudeCode,
+        options.azure,
+        options.vertex,
+        options.geminiCli,
+        options.chatgptOauth
+      ].filter(Boolean).length;

And update the error message accordingly:

- 'Error: Cannot use multiple provider flags (--openrouter, --ollama, --bedrock, --claude-code, --gemini-cli, --chatgpt-oauth) simultaneously.'
+ 'Error: Cannot use multiple provider flags (--openrouter, --ollama, --bedrock, --claude-code, --azure, --vertex, --gemini-cli, --chatgpt-oauth) simultaneously.'
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro


📥 Commits

Reviewing files that changed from the base of the PR and between eabae00 and a0a1002.

⛔ Files ignored due to path filters (1)
  • package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (15)
  • .changeset/add-chatgpt-oauth-provider.md (1 hunks)
  • README.md (2 hunks)
  • docs/examples/chatgpt-oauth-usage.md (1 hunks)
  • package.json (1 hunks)
  • scripts/modules/ai-services-unified.js (3 hunks)
  • scripts/modules/commands.js (6 hunks)
  • scripts/modules/config-manager.js (2 hunks)
  • scripts/modules/supported-models.json (1 hunks)
  • scripts/modules/task-manager/models.js (2 hunks)
  • src/ai-providers/chatgpt-oauth.js (1 hunks)
  • src/ai-providers/gemini-cli.js (2 hunks)
  • src/ai-providers/index.js (1 hunks)
  • src/constants/providers.js (1 hunks)
  • src/utils/json-extract.js (1 hunks)
  • tests/unit/ai-services-unified.test.js (1 hunks)
🧰 Additional context used
📓 Path-based instructions (26)
tests/{unit,integration,e2e,fixtures}/**/*.js

📄 CodeRabbit inference engine (.cursor/rules/architecture.mdc)

Test files must be organized as follows: unit tests in tests/unit/, integration tests in tests/integration/, end-to-end tests in tests/e2e/, and test fixtures in tests/fixtures/.

Files:

  • tests/unit/ai-services-unified.test.js
tests/unit/*.js

📄 CodeRabbit inference engine (.cursor/rules/architecture.mdc)

Each module should have a corresponding unit test file in tests/unit/ that reflects the module structure (one test file per module).

Files:

  • tests/unit/ai-services-unified.test.js
**/*.{test,spec}.{js,ts,jsx,tsx}

📄 CodeRabbit inference engine (.cursor/rules/git_workflow.mdc)

**/*.{test,spec}.{js,ts,jsx,tsx}: Create a test file and ensure all tests pass when all subtasks are complete; commit tests if added or modified
When all subtasks are complete, run final testing using the appropriate test runner (e.g., npm test, jest, or manual testing)

Files:

  • tests/unit/ai-services-unified.test.js
**/*.test.js

📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)

**/*.test.js: Never use asynchronous operations in tests. Make all mocks return synchronous values when possible.
Always mock tests properly based on the way the tested functions are defined and used.
Follow the test file organization: mocks must be set up before importing modules under test, and spies on mocked modules should be set up after imports.
Use fixtures from tests/fixtures/ for consistent sample data across tests.
Always declare mocks before importing the modules being tested in Jest test files.
Use jest.spyOn() after imports to create spies on mock functions and reference these spies in test assertions.
When testing functions with callbacks, get the callback from your mock's call arguments, execute it directly with test inputs, and verify the results.
For ES modules, use jest.mock() before static imports and jest.unstable_mockModule() before dynamic imports to mock dependencies.
Reset mock functions (mockFn.mockReset()) before dynamic imports if they might have been called previously.
When verifying console assertions, assert against the actual arguments passed (single formatted string), not multiple arguments.
Use mock-fs to mock file system operations in tests, and restore the file system after each test.
Mock API calls (e.g., Anthropic/Claude) by mocking the entire module and providing predictable responses.
Set mock environment variables in test setup and restore them after each test.
Maintain test fixtures separate from test logic.
Follow the mock-first-then-import pattern for all Jest mocks.
Do not define mock variables before jest.mock() calls (they won't be accessible due to hoisting).
Use test-specific file paths (e.g., 'test-tasks.json') for all file operations in tests.
Mock readJSON and writeJSON to avoid real file system interactions in tests.
Verify file operations use the correct paths in expect statements.
Use different file paths for each test to avoid test interdependence.
Verify modifications on the in-memory task objects passed to w...

Files:

  • tests/unit/ai-services-unified.test.js
tests/unit/**/*.test.js

📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)

tests/unit/**/*.test.js: Unit tests must be located in tests/unit/, test individual functions and utilities in isolation, mock all external dependencies, and keep tests small, focused, and fast.
Do not include actual command execution in unit tests.

Files:

  • tests/unit/ai-services-unified.test.js
tests/{unit,integration,e2e}/**/*.test.js

📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)

tests/{unit,integration,e2e}/**/*.test.js: When testing CLI commands built with Commander.js, test the command action handlers directly rather than trying to mock the entire Commander.js chain.
When mocking the Commander.js chain, mock ALL chainable methods (option, argument, action, on, etc.) and return this (or the mock object) from all chainable method mocks.
Explicitly handle all options, including defaults and shorthand flags (e.g., -p for --prompt), and include null/undefined checks in test implementations for parameters that might be optional.
Do not try to use the real action implementation without proper mocking, and do not mock Commander partially—either mock it completely or test the action directly.
Mock the action handlers for CLI commands and verify they're called with correct arguments.
Use sample task fixtures for consistent test data, mock file system operations, and test both success and error paths for task operations.
Mock console output and verify correct formatting in UI function tests. Use flexible assertions like toContain() or toMatch() for formatted output.
Mock chalk functions to return the input text to make testing easier while still verifying correct function calls.

Files:

  • tests/unit/ai-services-unified.test.js
**/*.js

📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)

**/*.js: Declare and initialize global variables at the top of modules to avoid hoisting issues.
Use proper function declarations to avoid hoisting issues and initialize variables before they are referenced.
Do not reference variables before their declaration in module scope.
Use dynamic imports (import()) to avoid initialization order issues in modules.

Files:

  • tests/unit/ai-services-unified.test.js
  • src/constants/providers.js
  • src/ai-providers/index.js
  • scripts/modules/ai-services-unified.js
  • scripts/modules/task-manager/models.js
  • scripts/modules/commands.js
  • src/utils/json-extract.js
  • src/ai-providers/gemini-cli.js
  • src/ai-providers/chatgpt-oauth.js
  • scripts/modules/config-manager.js
**/*.{test,spec}.*

📄 CodeRabbit inference engine (.cursor/rules/test_workflow.mdc)

Test files should follow naming conventions: .test., .spec., or _test. depending on the language

Files:

  • tests/unit/ai-services-unified.test.js
tests/{unit,integration,e2e}/**

📄 CodeRabbit inference engine (.cursor/rules/test_workflow.mdc)

Organize test directories by test type (unit, integration, e2e) and mirror source structure where possible

Files:

  • tests/unit/ai-services-unified.test.js
src/ai-providers/*.js

📄 CodeRabbit inference engine (.cursor/rules/ai_providers.mdc)

src/ai-providers/*.js: Create a new provider module file in src/ai-providers/ named .js when adding a new AI provider.
Provider modules must export three functions: generateText, streamText, and generateObject.
Provider modules must import the provider's create function from @ai-sdk/, and import generateText, streamText, generateObject from the core ai package, as well as the log utility from ../../scripts/modules/utils.js.
Implement generateText, streamText, and generateObject functions in provider modules with basic validation and try/catch error handling.

Provider-specific wrappers for Vercel AI SDK functions must be implemented in src/ai-providers/*.js, each file corresponding to a provider.

Files:

  • src/ai-providers/index.js
  • src/ai-providers/gemini-cli.js
  • src/ai-providers/chatgpt-oauth.js
scripts/modules/ai-services-unified.js

📄 CodeRabbit inference engine (.cursor/rules/ai_providers.mdc)

Integrate the new provider module with scripts/modules/ai-services-unified.js by importing it and adding an entry to the PROVIDER_FUNCTIONS map.

scripts/modules/ai-services-unified.js: Centralize all LLM calls through generateTextService or generateObjectService.
Do not import or call anything from the old ai-services.js, ai-client-factory.js, or ai-client-utils.js files.
Do not fetch AI-specific parameters (model ID, max tokens, temp) using config-manager.js getters for the AI call. Pass the role instead.
Do not implement fallback or retry logic outside ai-services-unified.js.
Do not handle API key resolution outside the service layer (it uses utils.js internally).

The telemetryData object returned by ai-services-unified.js must include the fields: timestamp, userId, commandName, modelUsed, providerName, inputTokens, outputTokens, totalTokens, totalCost, and currency.

Files:

  • scripts/modules/ai-services-unified.js
scripts/modules/*.js

📄 CodeRabbit inference engine (.cursor/rules/architecture.mdc)

Each module in scripts/modules/ should be focused on a single responsibility, following the modular architecture (e.g., commands.js for CLI command handling, task-manager.js for task data and core logic, dependency-manager.js for dependency management, ui.js for CLI output formatting, ai-services-unified.js for AI service integration, config-manager.js for configuration management, utils.js for utility functions).

scripts/modules/*.js: Export all core functions, helper functions, and utility methods needed by your new function or command from their respective modules. Explicitly review the module's export block to ensure every required dependency is included.
Pass all required parameters to functions you call within your implementation and verify that direct function parameters match their core function counterparts.
Use consistent file naming conventions: 'task_${id.toString().padStart(3, '0')}.txt', use path.join for composing file paths, and use appropriate file extensions (.txt for tasks, .json for data).
Use structured error objects with code and message properties, include clear error messages, and handle both function-specific and file system errors.
Import all silent mode utilities together from 'scripts/modules/utils.js' and always use isSilentMode() to check global silent mode status. Wrap core function calls within direct functions using enableSilentMode() and disableSilentMode() in a try/finally block if the core function might produce console output.
Core functions should check outputFormat === 'text' before displaying UI elements and use internal logging that respects silent mode.
Design functions to accept dependencies as parameters (dependency injection) and avoid hard-coded dependencies that are difficult to mock.
Keep pure logic separate from I/O operations or UI rendering to allow testing the logic without mocking complex dependencies.
When implementing core logic for new features, do so in 'scripts/modules/' before CLI or MCP interfaces, and d...

Files:

  • scripts/modules/ai-services-unified.js
  • scripts/modules/commands.js
  • scripts/modules/config-manager.js
scripts/modules/**

📄 CodeRabbit inference engine (.cursor/rules/dev_workflow.mdc)

When using the MCP server, restart it if core logic in scripts/modules or MCP tool/direct function definitions change.

Files:

  • scripts/modules/ai-services-unified.js
  • scripts/modules/task-manager/models.js
  • scripts/modules/commands.js
  • scripts/modules/supported-models.json
  • scripts/modules/config-manager.js
scripts/modules/ai-services*.js

📄 CodeRabbit inference engine (.cursor/rules/new_features.mdc)

Ensure AI calls correctly handle and propagate telemetryData as described in 'telemetry.mdc'.

Files:

  • scripts/modules/ai-services-unified.js
scripts/modules/*

📄 CodeRabbit inference engine (.cursor/rules/tags.mdc)

scripts/modules/*: Every command that reads or writes tasks.json must be tag-aware
All command files must import getCurrentTag from utils.js
Every CLI command that operates on tasks must include the --tag CLI option
All commands must resolve the tag using the pattern: options.tag || getCurrentTag(projectRoot) || 'master'
All commands must find projectRoot with error handling before proceeding
All commands must pass { projectRoot, tag } as context to core functions
MCP direct functions must accept and use a context object containing projectRoot and tag, and pass them to core functions
Do not hard-code tag resolution (e.g., const tag = options.tag || 'master';); always use getCurrentTag
Do not omit the --tag CLI option in commands that operate on tasks
Do not omit the context parameter when calling core functions from commands
Do not call readJSON or writeJSON without passing projectRoot and tag
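The tag-resolution rules above reduce to a small pattern; this sketch uses hypothetical names (`resolveTag`, `runCommand`) and a stubbed `getCurrentTag`, so it illustrates the ordering rather than the real utils.js code.

```javascript
// Stub for the real getCurrentTag(projectRoot) from utils.js;
// returns null here to simulate "no active tag".
const getCurrentTag = (projectRoot) => null;

// Never hard-code 'master' directly; fall through getCurrentTag first.
function resolveTag(options, projectRoot) {
  return options.tag || getCurrentTag(projectRoot) || 'master';
}

// Commands always pass { projectRoot, tag } as context to core functions.
function runCommand(options, projectRoot, coreFn) {
  const tag = resolveTag(options, projectRoot);
  return coreFn({ projectRoot, tag });
}

const explicit = runCommand({ tag: 'feature-x' }, '/repo', (ctx) => ctx);
const fallback = runCommand({}, '/repo', (ctx) => ctx);
```

An explicit `--tag` wins, the current tag comes next, and `'master'` is only the final fallback, which is exactly why `const tag = options.tag || 'master';` is forbidden.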

Files:

  • scripts/modules/ai-services-unified.js
  • scripts/modules/commands.js
  • scripts/modules/supported-models.json
  • scripts/modules/config-manager.js
.changeset/*.md

📄 CodeRabbit inference engine (.cursor/rules/changeset.mdc)

.changeset/*.md: When running npm run changeset or npx changeset add, provide a concise summary of the changes for the CHANGELOG.md in imperative mood, typically a single line, and not a detailed Git commit message.
The changeset summary should be user-facing, describing what changed in the released version that is relevant to users or consumers of the package.
Do not use your detailed Git commit message body as the changeset summary.

Files:

  • .changeset/add-chatgpt-oauth-provider.md
.changeset/*

📄 CodeRabbit inference engine (.cursor/rules/new_features.mdc)

Create appropriate changesets for new features, use semantic versioning, include tagged system information in release notes, and document breaking changes if any.

Files:

  • .changeset/add-chatgpt-oauth-provider.md
scripts/modules/task-manager/*.js

📄 CodeRabbit inference engine (.cursor/rules/ai_services.mdc)

scripts/modules/task-manager/*.js: Centralize all LLM calls through generateTextService or generateObjectService.
Do not import or call anything from the old ai-services.js, ai-client-factory.js, or ai-client-utils.js files.
Do not initialize AI clients (Anthropic, Perplexity, etc.) directly within core logic (task-manager/) or MCP direct functions.
Do not fetch AI-specific parameters (model ID, max tokens, temp) using config-manager.js getters for the AI call. Pass the role instead.
Do not implement fallback or retry logic outside ai-services-unified.js.
Do not handle API key resolution outside the service layer (it uses utils.js internally).
Determine the appropriate role (main, research, fallback) in your core logic and pass it to the service.
Pass the session object (received in the context parameter, especially from direct function wrappers) to the service call when in MCP context.
Use generateTextService and implement robust manual JSON parsing (with Zod validation after parsing) when structured output is needed, as generateObjectService has shown unreliability with some providers/schemas.
Be aware of potential reliability issues with generateObjectService across different providers and complex schemas. Prefer generateTextService + manual parsing as a more robust alternative for structured data needs.

Files in scripts/modules/task-manager/ should each handle a specific action related to task management (e.g., add-task.js, expand-task.js), supporting the tagged task lists system and backward compatibility.
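The "generateTextService + manual parsing" guidance can be sketched like this. Everything here is mocked: the real `generateTextService` lives in ai-services-unified.js, and real code would validate with an actual Zod schema; `extractJson`, `taskSchema`, and `createTask` are hypothetical stand-ins.

```javascript
// Mock of the unified service: models often wrap JSON in prose or code fences.
async function generateTextService({ role, prompt }) {
  return {
    mainResult: 'Here you go:\n```json\n{"title":"Add login","priority":"high"}\n```',
    telemetryData: {}
  };
}

// Tolerant extraction: take the span from the first '{' to the last '}'.
function extractJson(text) {
  const start = text.indexOf('{');
  const end = text.lastIndexOf('}');
  if (start === -1 || end === -1) throw new Error('no JSON object found');
  return JSON.parse(text.slice(start, end + 1));
}

// Stand-in for a Zod schema's .parse(): throws on invalid shape.
const taskSchema = {
  parse(obj) {
    if (typeof obj.title !== 'string') throw new Error('invalid task shape');
    return obj;
  }
};

// Core logic: request text for a role, then parse and validate manually.
async function createTask(prompt) {
  const res = await generateTextService({ role: 'main', prompt });
  return taskSchema.parse(extractJson(res.mainResult));
}
```

Parsing after the fact keeps the service call provider-agnostic: if a provider's structured-output mode misbehaves, the text path plus validation still yields a typed object or a clear error.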

Files:

  • scripts/modules/task-manager/models.js
scripts/modules/task-manager/*

📄 CodeRabbit inference engine (.cursor/rules/tags.mdc)

scripts/modules/task-manager/*: All core functions in scripts/modules/task-manager/ must accept a context parameter and use it to extract projectRoot and tag
All core functions in scripts/modules/task-manager/ must use readJSON(tasksPath, projectRoot, tag) and writeJSON(tasksPath, data, projectRoot, tag)

Files:

  • scripts/modules/task-manager/models.js
scripts/modules/task-manager/**/*.js

📄 CodeRabbit inference engine (.cursor/rules/telemetry.mdc)

scripts/modules/task-manager/**/*.js: Functions in scripts/modules/task-manager/ that invoke AI services must call the appropriate AI service function (e.g., generateObjectService), passing commandName and outputType in the params object.
Core logic functions in scripts/modules/task-manager/ must return an object that includes aiServiceResponse.telemetryData.
If the core logic function handles CLI output (outputFormat === 'text' or 'cli'), and aiServiceResponse.telemetryData is available, it must call displayAiUsageSummary(aiServiceResponse.telemetryData, 'cli') from scripts/modules/ui.js.

Do not call AI-specific getters (like getMainModelId, getMainMaxTokens) from core logic functions in scripts/modules/task-manager/*; instead, pass the role to the unified AI service.
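The telemetry flow these rules describe looks roughly like the sketch below. `generateObjectService` and `displayAiUsageSummary` are mocked stand-ins for the real unified-service and ui.js functions, and `addTaskCore` is hypothetical.

```javascript
// Mock unified service: echoes back telemetry including commandName.
async function generateObjectService(params) {
  return {
    mainResult: { id: 1 },
    telemetryData: { totalCost: 0, commandName: params.commandName }
  };
}

// Mock of ui.js displayAiUsageSummary; records calls for inspection.
const shown = [];
function displayAiUsageSummary(telemetryData, outputType) {
  shown.push({ telemetryData, outputType });
}

async function addTaskCore(prompt, { outputFormat = 'text' } = {}) {
  const aiServiceResponse = await generateObjectService({
    commandName: 'add-task',                                // required for telemetry
    outputType: outputFormat === 'text' ? 'cli' : 'mcp',    // ditto
    prompt
  });
  // CLI output path: surface the usage summary to the user.
  if (outputFormat === 'text' && aiServiceResponse.telemetryData) {
    displayAiUsageSummary(aiServiceResponse.telemetryData, 'cli');
  }
  // Core functions must return telemetryData to their callers.
  return {
    newTask: aiServiceResponse.mainResult,
    telemetryData: aiServiceResponse.telemetryData
  };
}
```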

Files:

  • scripts/modules/task-manager/models.js
docs/**/*

📄 CodeRabbit inference engine (.cursor/rules/new_features.mdc)

Add feature documentation to '/docs' folder, include tagged system usage examples, update command reference documentation, and provide migration notes if relevant.

Files:

  • docs/examples/chatgpt-oauth-usage.md
scripts/modules/commands.js

📄 CodeRabbit inference engine (.cursor/rules/ai_services.mdc)

scripts/modules/commands.js: Centralize all LLM calls through generateTextService or generateObjectService.
Do not import or call anything from the old ai-services.js, ai-client-factory.js, or ai-client-utils.js files.
Do not fetch AI-specific parameters (model ID, max tokens, temp) using config-manager.js getters for the AI call. Pass the role instead.
Do not implement fallback or retry logic outside ai-services-unified.js.
Do not handle API key resolution outside the service layer (it uses utils.js internally).
Determine the appropriate role (main, research, fallback) in your core logic and pass it to the service.
Pass the session object (received in the context parameter, especially from direct function wrappers) to the service call when in MCP context.
Use generateTextService and implement robust manual JSON parsing (with Zod validation after parsing) when structured output is needed, as generateObjectService has shown unreliability with some providers/schemas.
Be aware of potential reliability issues with generateObjectService across different providers and complex schemas. Prefer generateTextService + manual parsing as a more robust alternative for structured data needs.

scripts/modules/commands.js: All new user-facing commands should be added to 'scripts/modules/commands.js'.
Use consistent patterns for option naming and help text in CLI commands.
Follow the Commander.js model for subcommand structure in CLI commands.
When using callbacks (like in Commander.js commands), define them separately to allow testing the callback logic independently.
Add help text to the command definition and update 'dev_workflow.mdc' with command reference when adding a new feature.
Follow the established pattern in 'commands.js' for CLI command implementation, using Commander.js for argument parsing, including comprehensive help text and examples, and supporting tagged task context awareness.
Provide clear error messages for common failu...

Files:

  • scripts/modules/commands.js
{src/utils/**,src/middleware/**}

📄 CodeRabbit inference engine (.cursor/rules/test_workflow.mdc)

Test coverage for all code should meet or exceed 80% lines/functions and 70% branches globally; critical code (utils, middleware) should meet higher thresholds (90% utils, 85% middleware)

Files:

  • src/utils/json-extract.js
package.json

📄 CodeRabbit inference engine (.cursor/rules/test_workflow.mdc)

Add and update test scripts in package.json to include test, test:watch, test:coverage, test:unit, test:integration, test:e2e, and test:ci

Files:

  • package.json
scripts/modules/supported-models.json

📄 CodeRabbit inference engine (.cursor/rules/ai_providers.mdc)

Add a new key for the provider and an array of model objects under it in scripts/modules/supported-models.json, including id, name, allowed_roles, and optionally swe_score, cost_per_1m_tokens, and max_tokens.

Files:

  • scripts/modules/supported-models.json
scripts/modules/config-manager.js

📄 CodeRabbit inference engine (.cursor/rules/ai_providers.mdc)

scripts/modules/config-manager.js: Update scripts/modules/config-manager.js to add the new provider to MODEL_MAP, ensure it is included in VALID_PROVIDERS, and update API key handling logic.
If adding Ollama or another provider not requiring an API key, add a specific check at the beginning of isApiKeySet and getMcpApiKeyStatus in scripts/modules/config-manager.js to return true immediately for that provider.

scripts/modules/config-manager.js: Import and use specific getters from scripts/modules/config-manager.js to access configuration values needed for application logic; pass the explicitRoot parameter to getters if calling from MCP direct functions.
Use isApiKeySet(providerName, session) from config-manager.js to check if a provider's key is available before attempting an AI call.
Handle potential ConfigurationError if the .taskmasterconfig file is missing or invalid when accessed via getConfig.
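The no-API-key short-circuit described above can be sketched as follows. The provider list and `resolveEnvVariable` are illustrative, not the real config-manager.js internals, and the `YOUR_` placeholder check is an assumption about how unset template keys are detected.

```javascript
// Providers that authenticate via subscription/local tooling, not API keys.
const PROVIDERS_WITHOUT_API_KEYS = ['ollama', 'claude-code', 'chatgpt-oauth'];

// Resolve a key from an MCP session's env first, then the process env.
function resolveEnvVariable(name, session) {
  return (session && session.env && session.env[name]) || process.env[name] || null;
}

function isApiKeySet(providerName, session) {
  // Short-circuit first: these providers never need a key.
  if (PROVIDERS_WITHOUT_API_KEYS.includes(providerName)) return true;
  const keyName = `${providerName.toUpperCase()}_API_KEY`;
  const value = resolveEnvVariable(keyName, session);
  // Treat template placeholders (e.g. "YOUR_KEY_HERE") as unset.
  return Boolean(value && !value.startsWith('YOUR_'));
}
```

Putting the check at the top of the function is what prevents key-gating from ever blocking a subscription-based provider like `chatgpt-oauth`.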

Files:

  • scripts/modules/config-manager.js
🧠 Learnings (28)
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to tests/unit/ai-providers/*.test.js : Create unit tests for the new provider in tests/unit/ai-providers/<provider-name>.test.js, mocking ai-sdk/<provider-name> and core ai module functions, and testing all exported functions for correct behavior and error handling.

Applied to files:

  • tests/unit/ai-services-unified.test.js
  • src/ai-providers/chatgpt-oauth.js
📚 Learning: 2025-07-18T17:16:13.793Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tests.mdc:0-0
Timestamp: 2025-07-18T17:16:13.793Z
Learning: Applies to **/*.test.js : Do not import real AI service clients in tests; create fully mocked versions that return predictable responses.

Applied to files:

  • tests/unit/ai-services-unified.test.js
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to src/ai-providers/*.js : Provider modules must import the provider's create<ProviderName> function from ai-sdk/<provider-name>, and import generateText, streamText, generateObject from the core ai package, as well as the log utility from ../../scripts/modules/utils.js.

Applied to files:

  • tests/unit/ai-services-unified.test.js
  • src/ai-providers/index.js
  • scripts/modules/ai-services-unified.js
  • package.json
  • src/ai-providers/gemini-cli.js
  • src/ai-providers/chatgpt-oauth.js
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to scripts/modules/ai-services-unified.js : Integrate the new provider module with scripts/modules/ai-services-unified.js by importing it and adding an entry to the PROVIDER_FUNCTIONS map.

Applied to files:

  • tests/unit/ai-services-unified.test.js
  • scripts/modules/ai-services-unified.js
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to src/ai-providers/*.js : Implement generate<ProviderName>Text, stream<ProviderName>Text, and generate<ProviderName>Object functions in provider modules with basic validation and try/catch error handling.

Applied to files:

  • tests/unit/ai-services-unified.test.js
  • src/ai-providers/index.js
  • scripts/modules/ai-services-unified.js
  • src/ai-providers/gemini-cli.js
  • src/ai-providers/chatgpt-oauth.js
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to src/ai-providers/*.js : Create a new provider module file in src/ai-providers/ named <provider-name>.js when adding a new AI provider.

Applied to files:

  • tests/unit/ai-services-unified.test.js
  • src/ai-providers/index.js
  • scripts/modules/ai-services-unified.js
  • src/ai-providers/chatgpt-oauth.js
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to src/ai-providers/*.js : Provider modules must export three functions: generate<ProviderName>Text, stream<ProviderName>Text, and generate<ProviderName>Object.

Applied to files:

  • tests/unit/ai-services-unified.test.js
  • src/ai-providers/index.js
  • scripts/modules/ai-services-unified.js
  • src/ai-providers/chatgpt-oauth.js
📚 Learning: 2025-07-18T17:16:13.793Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tests.mdc:0-0
Timestamp: 2025-07-18T17:16:13.793Z
Learning: Applies to **/*.test.js : Mock API calls (e.g., Anthropic/Claude) by mocking the entire module and providing predictable responses.

Applied to files:

  • tests/unit/ai-services-unified.test.js
📚 Learning: 2025-07-18T17:07:39.336Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/architecture.mdc:0-0
Timestamp: 2025-07-18T17:07:39.336Z
Learning: Applies to src/ai-providers/*.js : Provider-specific wrappers for Vercel AI SDK functions must be implemented in src/ai-providers/*.js, each file corresponding to a provider.

Applied to files:

  • src/ai-providers/index.js
  • scripts/modules/ai-services-unified.js
  • src/ai-providers/chatgpt-oauth.js
📚 Learning: 2025-07-18T17:06:57.833Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_services.mdc:0-0
Timestamp: 2025-07-18T17:06:57.833Z
Learning: Applies to scripts/modules/ai-services-unified.js : Do not fetch AI-specific parameters (model ID, max tokens, temp) using `config-manager.js` getters for the AI call. Pass the `role` instead.

Applied to files:

  • scripts/modules/ai-services-unified.js
  • scripts/modules/task-manager/models.js
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to scripts/modules/config-manager.js : Update scripts/modules/config-manager.js to add the new provider to MODEL_MAP, ensure it is included in VALID_PROVIDERS, and update API key handling logic.

Applied to files:

  • scripts/modules/task-manager/models.js
  • scripts/modules/supported-models.json
  • scripts/modules/config-manager.js
📚 Learning: 2025-07-18T17:06:57.833Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_services.mdc:0-0
Timestamp: 2025-07-18T17:06:57.833Z
Learning: Applies to scripts/modules/task-manager/*.js : Do not fetch AI-specific parameters (model ID, max tokens, temp) using `config-manager.js` getters for the AI call. Pass the `role` instead.

Applied to files:

  • scripts/modules/task-manager/models.js
📚 Learning: 2025-07-18T17:18:17.759Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/utilities.mdc:0-0
Timestamp: 2025-07-18T17:18:17.759Z
Learning: Applies to scripts/modules/task-manager/**/*.js : Do not call AI-specific getters (like `getMainModelId`, `getMainMaxTokens`) from core logic functions in `scripts/modules/task-manager/*`; instead, pass the `role` to the unified AI service.

Applied to files:

  • scripts/modules/task-manager/models.js
  • scripts/modules/commands.js
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to scripts/modules/supported-models.json : Add a new key for the provider and an array of model objects under it in scripts/modules/supported-models.json, including id, name, allowed_roles, and optionally swe_score, cost_per_1m_tokens, and max_tokens.

Applied to files:

  • scripts/modules/task-manager/models.js
  • scripts/modules/supported-models.json
📚 Learning: 2025-07-18T17:06:57.833Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_services.mdc:0-0
Timestamp: 2025-07-18T17:06:57.833Z
Learning: Applies to scripts/modules/commands.js : Do not fetch AI-specific parameters (model ID, max tokens, temp) using `config-manager.js` getters for the AI call. Pass the `role` instead.

Applied to files:

  • scripts/modules/task-manager/models.js
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Update relevant documentation files (such as README.md) to mention the new provider and its configuration.

Applied to files:

  • docs/examples/chatgpt-oauth-usage.md
📚 Learning: 2025-07-18T17:12:57.903Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/new_features.mdc:0-0
Timestamp: 2025-07-18T17:12:57.903Z
Learning: Applies to scripts/modules/commands.js : Use consistent patterns for option naming and help text in CLI commands.

Applied to files:

  • scripts/modules/commands.js
📚 Learning: 2025-07-31T22:08:16.039Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/taskmaster.mdc:0-0
Timestamp: 2025-07-31T22:08:16.039Z
Learning: Applies to .taskmaster/config.json : All other Taskmaster settings (model choice, max tokens, temperature, log level, custom endpoints) are managed in .taskmaster/config.json via the task-master models command or models MCP tool.

Applied to files:

  • scripts/modules/commands.js
📚 Learning: 2025-07-18T17:10:12.881Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/dev_workflow.mdc:0-0
Timestamp: 2025-07-18T17:10:12.881Z
Learning: For CLI usage, install Taskmaster globally with `npm install -g task-master-ai` or use locally via `npx task-master-ai ...`.

Applied to files:

  • scripts/modules/commands.js
📚 Learning: 2025-07-18T17:18:17.759Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/utilities.mdc:0-0
Timestamp: 2025-07-18T17:18:17.759Z
Learning: Applies to .taskmasterconfig : Use `.taskmasterconfig` (JSON) in the project root for storing Taskmaster configuration (excluding API keys), and manage it via the `task-master models --setup` CLI command or the `models` MCP tool.

Applied to files:

  • scripts/modules/commands.js
📚 Learning: 2025-07-18T17:19:27.365Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: assets/.windsurfrules:0-0
Timestamp: 2025-07-18T17:19:27.365Z
Learning: Use `task-master add-task` to add a new task to tasks.json using AI.

Applied to files:

  • scripts/modules/commands.js
📚 Learning: 2025-07-18T17:10:02.683Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/dev_workflow.mdc:0-0
Timestamp: 2025-07-18T17:10:02.683Z
Learning: Applies to .taskmaster/config.json : Store Taskmaster configuration settings (AI model selections, parameters, logging level, default subtasks/priority, project name, tag management) in `.taskmaster/config.json` in the project root. Do not configure these via environment variables.

Applied to files:

  • scripts/modules/commands.js
📚 Learning: 2025-07-18T17:10:12.881Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/dev_workflow.mdc:0-0
Timestamp: 2025-07-18T17:10:12.881Z
Learning: Applies to .taskmaster/config.json : Store Taskmaster configuration settings (AI model selections, parameters, logging level, default subtasks/priority, project name, etc.) in the `.taskmaster/config.json` file located in the project root directory. Do not configure non-API key settings via environment variables.

Applied to files:

  • scripts/modules/commands.js
📚 Learning: 2025-07-18T17:06:57.833Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_services.mdc:0-0
Timestamp: 2025-07-18T17:06:57.833Z
Learning: Ensure `.taskmasterconfig` exists and has valid provider/model IDs for the roles you intend to use (manage via `task-master models --setup`).

Applied to files:

  • scripts/modules/commands.js
📚 Learning: 2025-07-21T14:14:48.694Z
Learnt from: rtmcrc
PR: eyaltoledano/claude-task-master#933
File: scripts/modules/supported-models.json:238-238
Timestamp: 2025-07-21T14:14:48.694Z
Learning: Model version updates in scripts/modules/supported-models.json may be included in feature PRs if they provide practical improvements like reduced error rates, even if not directly related to the main feature being implemented.

Applied to files:

  • scripts/modules/supported-models.json
📚 Learning: 2025-07-31T22:08:16.039Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/taskmaster.mdc:0-0
Timestamp: 2025-07-31T22:08:16.039Z
Learning: Applies to {.env,.cursor/mcp.json} : Set API keys for AI providers (e.g., ANTHROPIC_API_KEY, OPENAI_API_KEY, etc.) in your .env file in the project root (for CLI use) or within the env section of your .cursor/mcp.json file (for MCP/Cursor integration).

Applied to files:

  • README.md
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to scripts/modules/config-manager.js : If adding Ollama or another provider not requiring an API key, add a specific check at the beginning of isApiKeySet and getMcpApiKeyStatus in scripts/modules/config-manager.js to return true immediately for that provider.

Applied to files:

  • scripts/modules/config-manager.js
📚 Learning: 2025-07-18T17:18:17.759Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/utilities.mdc:0-0
Timestamp: 2025-07-18T17:18:17.759Z
Learning: Applies to scripts/modules/config-manager.js : Use `isApiKeySet(providerName, session)` from `config-manager.js` to check if a provider's key is available before attempting an AI call.

Applied to files:

  • scripts/modules/config-manager.js
🧬 Code graph analysis (6)
scripts/modules/ai-services-unified.js (2)
src/ai-providers/chatgpt-oauth.js (1)
  • ChatGPTOAuthProvider (31-94)
scripts/modules/config-manager.js (3)
  • roleConfig (397-397)
  • roleConfig (553-553)
  • roleConfig (940-940)
scripts/modules/task-manager/models.js (3)
mcp-server/src/core/direct-functions/models.js (1)
  • providerHint (41-41)
src/constants/providers.js (2)
  • CUSTOM_PROVIDERS (18-28)
  • CUSTOM_PROVIDERS (18-28)
scripts/modules/update-config-tokens.js (2)
  • modelId (36-36)
  • modelData (40-42)
scripts/modules/commands.js (1)
scripts/modules/task-manager/models.js (4)
  • options (149-149)
  • options (277-277)
  • options (368-368)
  • options (671-671)
src/ai-providers/gemini-cli.js (1)
src/utils/json-extract.js (5)
  • extractJsonTolerant (15-110)
  • firstObj (32-32)
  • firstArr (33-33)
  • start (36-41)
  • closeChar (67-67)
src/ai-providers/chatgpt-oauth.js (2)
src/ai-providers/base-provider.js (1)
  • BaseAIProvider (16-352)
src/utils/json-extract.js (2)
  • extractJsonTolerant (15-110)
  • parsed (94-94)
scripts/modules/config-manager.js (1)
src/constants/providers.js (2)
  • CUSTOM_PROVIDERS (18-28)
  • CUSTOM_PROVIDERS (18-28)
🪛 markdownlint-cli2 (0.17.2)
.changeset/add-chatgpt-oauth-provider.md

5-5: First line in a file should be a top-level heading

(MD041, first-line-heading, first-line-h1)

docs/examples/chatgpt-oauth-usage.md

62-62: Lists should be surrounded by blank lines

(MD032, blanks-around-lists)

🪛 LanguageTool
docs/examples/chatgpt-oauth-usage.md

[grammar] ~1-~1: Use correct spacing
Context: # ChatGPT OAuth Provider Usage The ChatGPT OAuth provider allows you to...

(QB_NEW_EN_OTHER_ERROR_IDS_5)


[grammar] ~3-~3: Use correct spacing
Context: ...o/Teams subscription without an API key. ## Install Install the provider package (A...

(QB_NEW_EN_OTHER_ERROR_IDS_5)


[grammar] ~5-~5: Use correct spacing
Context: ...cription without an API key. ## Install Install the provider package (AI SDK v4 ...

(QB_NEW_EN_OTHER_ERROR_IDS_5)


[grammar] ~7-~7: Use correct spacing
Context: ... the provider package (AI SDK v4 build): bash npm install ai-sdk-provider-chatgpt-oauth@ai-sdk-v4 ## Authenticate Use the Codex CLI to log i...

(QB_NEW_EN_OTHER_ERROR_IDS_5)


[grammar] ~13-~13: Use correct spacing
Context: ...gpt-oauth@ai-sdk-v4 ``` ## Authenticate Use the Codex CLI to log in (stores toke...

(QB_NEW_EN_OTHER_ERROR_IDS_5)


[grammar] ~15-~15: Use correct spacing
Context: ...(stores tokens at ~/.codex/auth.json): bash npx -y @openai/codex login Alternatively, set environment variables...

(QB_NEW_EN_OTHER_ERROR_IDS_5)


[grammar] ~21-~21: Use correct spacing
Context: ...lternatively, set environment variables: - CHATGPT_OAUTH_ACCESS_TOKEN - CHATGPT_OAUTH_ACCOUNT_ID - Optional: CHATGPT_OAUTH_REFRESH_TOKEN ...

(QB_NEW_EN_OTHER_ERROR_IDS_5)


[grammar] ~25-~25: Use correct spacing
Context: ... CHATGPT_OAUTH_ACCOUNT_ID - Optional: CHATGPT_OAUTH_REFRESH_TOKEN ## Configure Task Master Update `.taskmast...

(QB_NEW_EN_OTHER_ERROR_IDS_5)


[grammar] ~27-~27: Use correct spacing
Context: ...REFRESH_TOKEN ## Configure Task Master Update `.taskmaster/config.json`: ```js...

(QB_NEW_EN_OTHER_ERROR_IDS_5)


[grammar] ~29-~29: Use correct spacing
Context: ...aster Update .taskmaster/config.json: json { "models": { "main": { "provider": "chatgpt-oauth", "modelId": "gpt-5", "reasoningEffort": "medium", "reasoningSummary": "auto" }, "fallback": { "provider": "chatgpt-oauth", "modelId": "gpt-5", "reasoningEffort": "medium", "reasoningSummary": "auto" } } } ### Reasoning Settings The ChatGPT OAuth pr...

(QB_NEW_EN_OTHER_ERROR_IDS_5)


[grammar] ~50-~50: Use correct spacing
Context: ... } } } ``` ### Reasoning Settings The ChatGPT OAuth provider supports reas...

(QB_NEW_EN_OTHER_ERROR_IDS_5)


[grammar] ~52-~52: Use correct spacing
Context: ...th provider supports reasoning controls: - reasoningEffort: "low" | "medium" | "high" | null (nu...

(QB_NEW_EN_OTHER_ERROR_IDS_5)


[grammar] ~54-~54: There might be a mistake here.
Context: ..."high" | null` (null disables reasoning) - `reasoningSummary`: "auto" | "none" | "concise" | "detail...

(QB_NEW_EN)


[grammar] ~55-~55: Use correct spacing
Context: ... "detailed" | null` (null omits summary) These are automatically added with sensi...

(QB_NEW_EN_OTHER_ERROR_IDS_5)


[grammar] ~57-~57: Use correct spacing
Context: ...n you select the ChatGPT OAuth provider. ### Unsupported Parameters The ChatGPT back...

(QB_NEW_EN_OTHER_ERROR_IDS_5)


[grammar] ~59-~59: Use correct spacing
Context: ...th provider. ### Unsupported Parameters The ChatGPT backend does not support...

(QB_NEW_EN_OTHER_ERROR_IDS_5)


[grammar] ~61-~61: Use correct spacing
Context: ...not support these common parameters: - maxTokens - The backend uses its own internal limi...

(QB_NEW_EN_OTHER_ERROR_IDS_5)


[grammar] ~62-~62: Use hyphens correctly
Context: ...The backend uses its own internal limits - temperature - The backend uses its own internal sett...

(QB_NEW_EN_OTHER_ERROR_IDS_29)


[grammar] ~63-~63: There might be a mistake here.
Context: ...e backend uses its own internal settings These parameters will be automatically e...

(QB_NEW_EN_OTHER)


[grammar] ~65-~65: Use correct spacing
Context: ...om your config when using ChatGPT OAuth. ### Usage Restrictions GPT-5 via ChatGPT OA...

(QB_NEW_EN_OTHER_ERROR_IDS_5)


[grammar] ~67-~67: Use correct spacing
Context: ...g ChatGPT OAuth. ### Usage Restrictions GPT-5 via ChatGPT OAuth does not support...

(QB_NEW_EN_OTHER_ERROR_IDS_5)


[grammar] ~69-~69: Use correct spacing
Context: ...nfigure it for main/fallback roles only. Example with custom reasoning settings: ...

(QB_NEW_EN_OTHER_ERROR_IDS_5)


[grammar] ~71-~71: Use correct spacing
Context: ... Example with custom reasoning settings: jsonc { "models": { "main": { "provider": "chatgpt-oauth", "modelId": "gpt-5", "reasoningEffort": "high", "reasoningSummary": "detailed" } } } ## Notes - No API key is required; the pro...

(QB_NEW_EN_OTHER_ERROR_IDS_5)


[grammar] ~86-~86: Use correct spacing
Context: ...y": "detailed" } } } ``` ## Notes - No API key is required; the provider use...

(QB_NEW_EN_OTHER_ERROR_IDS_5)

README.md

[typographical] ~78-~78: To join two clauses or set off examples, consider using an em dash.
Context: ...e Code CLI) - ChatGPT OAuth (no API key required - uses ChatGPT Plus/Pro/Teams subscription) U...

(QB_NEW_EN_DASH_RULE_EM)


[grammar] ~78-~78: Use correct spacing
Context: ...ses ChatGPT Plus/Pro/Teams subscription) Using the research model is optional but...

(QB_NEW_EN_OTHER_ERROR_IDS_5)


[grammar] ~279-~279: There might be a mistake here.
Context: ...e - Learn more about Claude Code setup - [Learn more about ChatGPT OAuth setup](do...

(QB_NEW_EN_OTHER)


[grammar] ~280-~280: There might be a mistake here.
Context: ... - Learn more about ChatGPT OAuth setup ## Troubleshooting ### If `task-master ini...

(QB_NEW_EN_OTHER)

🔇 Additional comments (14)
scripts/modules/supported-models.json (1)

109-122: Optional: Add a user-friendly name and ensure provider is surfaced in CLI output

  • It’s helpful to give the new “chatgpt-oauth” entry a name so that in any UI or list it reads clearly, for example:

       "chatgpt-oauth": [
         {
           "id": "gpt-5",
    +      "name": "GPT-5 (ChatGPT OAuth)",
           "swe_score": 0.749,
           "cost_per_1m_tokens": {
             "input": 0,
             "output": 0,
             "currency": "USD"
           },
           "allowed_roles": ["main", "fallback"],
           "max_tokens": 100000,
           "supported": true
         }
       ],
  • Because there’s also an OpenAI “gpt-5” entry, double-check that when users run task-master models (or any model-listing command), the provider is shown alongside the model ID or name. If it isn’t, you might:

    • Add a new field like displayName or variant to distinguish providers, or
    • Update the UI/CLI display code to prefix/suffix the provider name (e.g., “gpt-5 (ChatGPT OAuth)” vs. “gpt-5 (OpenAI)”).

This is an optional UX improvement—please verify the current CLI behavior around model listing and provider disambiguation.

src/ai-providers/gemini-cli.js (2)

12-12: Centralizing JSON extraction — LGTM.

Importing the shared extractor reduces duplication and aligns behavior with other providers.


254-275: Preserve legacy behavior for incomplete structures — good defensive change.

The guard that returns original text if no closing brace/bracket exists after the first opener prevents false positives. No further action.

src/ai-providers/index.js (2)

19-19: Re-export looks correct; name aligns with tests and provider registry.

The named export matches the pattern used for other providers and the unit tests’ mock import path. No issues with the addition.


19-19: Sanity check the provider contract.

Please verify src/ai-providers/chatgpt-oauth.js exports ChatGPTOAuthProvider with methods: generateText, streamText, generateObject, and helper(s) used by ai-services-unified. This keeps parity with other providers and the unified layer.

docs/examples/chatgpt-oauth-usage.md (2)

31-49: JSON example is good; consider calling out provider-specific defaults explicitly.

No functional issue; just a note that the reasoning defaults are auto-injected. The example can stay as-is for explicitness, or you could omit the reasoning fields to demonstrate the auto-defaulting behavior.


1-12: Tighten spacing and headings; keep code fences separated by blank lines.

There are multiple double-space occurrences flagged by linters. Also ensure consistent section spacing.

Apply:

-# ChatGPT OAuth Provider Usage
-
-The ChatGPT OAuth provider allows you to use OpenAI GPT-5 via your ChatGPT Plus/Pro/Teams subscription without an API key.
+# ChatGPT OAuth Provider Usage
+
+The ChatGPT OAuth provider lets you use GPT‑5 via your ChatGPT Plus/Pro/Teams subscription without an API key.
@@
-Install the provider package (AI SDK v4 build):
+Install the provider package (AI SDK v4 build):

Likely an incorrect or invalid review comment.

scripts/modules/config-manager.js (1)

650-652: LGTM: added ChatGPT OAuth to no-API-key providers

Including CUSTOM_PROVIDERS.CHATGPT_OAUTH in providersWithoutApiKeys correctly prevents API key gating for this provider in isApiKeySet().
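The gating behavior can be illustrated as follows. The key-name derivation here is a simplification for the sketch, not the exact `config-manager.js` logic:

```javascript
// Providers that authenticate via subscription/CLI login never require a key.
const CUSTOM_PROVIDERS = { GEMINI_CLI: 'gemini-cli', CHATGPT_OAUTH: 'chatgpt-oauth' };
const providersWithoutApiKeys = [
	'ollama',
	CUSTOM_PROVIDERS.GEMINI_CLI,
	CUSTOM_PROVIDERS.CHATGPT_OAUTH
];

function isApiKeySet(provider, env = process.env) {
	if (providersWithoutApiKeys.includes(provider)) return true; // short-circuit
	const keyName = `${provider.toUpperCase().replace(/-/g, '_')}_API_KEY`;
	return Boolean(env[keyName]);
}

console.log(isApiKeySet('chatgpt-oauth', {})); // true — no key gating
console.log(isApiKeySet('openai', {})); // false — key still required
```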

scripts/modules/ai-services-unified.js (1)

73-75: Provider registration looks good

Registering 'chatgpt-oauth' with a ChatGPTOAuthProvider instance is correct and consistent with the existing pattern.

scripts/modules/commands.js (1)

3754-3757: LGTM: new --chatgpt-oauth flag wired into models command

The flag name, help text, and placement are consistent with existing provider flags.

scripts/modules/task-manager/models.js (2)

542-558: LGTM: providerHint flow for ChatGPT OAuth

Recognizing CUSTOM_PROVIDERS.CHATGPT_OAUTH, validating against available models, and proceeding with a clear warning when not found matches the pattern used for other custom providers.


605-623: LGTM: drop unsupported params and set reasoning defaults for ChatGPT OAuth

Removing maxTokens/temperature for chatgpt-oauth and defaulting reasoningEffort/reasoningSummary aligns with the backend constraints and intended UX.
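The normalization can be sketched like this. The default values and function name are assumptions for illustration; the actual defaults live in `models.js`:

```javascript
// Strip backend-unsupported params and inject reasoning defaults; explicit
// user-provided reasoning values still win via the spread order.
function normalizeForChatGPTOAuth(roleConfig) {
	const { maxTokens, temperature, ...rest } = roleConfig; // drop unsupported
	return {
		reasoningEffort: 'medium', // assumed default
		reasoningSummary: 'auto', // assumed default
		...rest
	};
}

const out = normalizeForChatGPTOAuth({
	provider: 'chatgpt-oauth',
	modelId: 'gpt-5',
	maxTokens: 100000,
	temperature: 0.2
});
console.log(out);
// { reasoningEffort: 'medium', reasoningSummary: 'auto',
//   provider: 'chatgpt-oauth', modelId: 'gpt-5' }
```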

src/ai-providers/chatgpt-oauth.js (2)

62-88: Solid client initialization with dynamic SDK import and provider options

Lazy-loading the SDK, supporting baseURL, and passing optional reasoning controls are all correct. The error message clearly points to the required install tag.
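A hedged sketch of the lazy-initialization pattern. The package name comes from this PR, but the `createChatGPTOAuth` export name is an assumption, and the injectable `importer` parameter exists only to make the sketch testable:

```javascript
// Cache the client so the dynamic import happens at most once per process.
let cached = null;

async function getClient(
	options = {},
	importer = () => import('ai-sdk-provider-chatgpt-oauth')
) {
	if (cached) return cached;
	try {
		const mod = await importer();
		cached = mod.createChatGPTOAuth(options);
		return cached;
	} catch (err) {
		// Point users at the exact install tag for the AI SDK v4 build.
		throw new Error(
			'ChatGPT OAuth SDK not available. Install with: ' +
				'npm i ai-sdk-provider-chatgpt-oauth@1.0.0-ai-sdk-v4',
			{ cause: err }
		);
	}
}
```

Deferring the import this way means users who never select chatgpt-oauth pay no startup or install cost, which is also the motivation for the optionalDependencies suggestion later in this review.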


102-115: System→user demotion aligns with ChatGPT backend requirements

The demotion logic is simple and safe. Good call to combine multiple system prompts and prepend as a user message.
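The demotion described above can be reduced to a few lines. A minimal sketch, assuming AI SDK-style `{ role, content }` messages:

```javascript
// Combine all system prompts and prepend them as one user message, for a
// backend that rejects the system role.
function demoteSystemMessages(messages) {
	const system = messages.filter((m) => m.role === 'system');
	const rest = messages.filter((m) => m.role !== 'system');
	if (system.length === 0) return rest;
	const combined = system.map((m) => m.content).join('\n\n');
	return [{ role: 'user', content: combined }, ...rest];
}

const out = demoteSystemMessages([
	{ role: 'system', content: 'You are terse.' },
	{ role: 'system', content: 'Answer in JSON.' },
	{ role: 'user', content: 'List two colors.' }
]);
console.log(out[0]);
// { role: 'user', content: 'You are terse.\n\nAnswer in JSON.' }
```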

---
'task-master-ai': minor
---

feat: add ChatGPT OAuth provider (gpt‑5) via subscription (no API key). Wired into unified services with model `chatgpt-oauth/gpt-5` (main, fallback only), JSON object generation/validation, and optional reasoning controls (effort/summary). Docs included.

🛠️ Refactor suggestion

⚠️ Potential issue

Rewrite changeset summary to be user-facing and concise (no “feat:” prefix).

Per the repo guidelines, the summary should be a single, imperative, user-facing line.

Apply:

-feat: add ChatGPT OAuth provider (gpt‑5) via subscription (no API key). Wired into unified services with model `chatgpt-oauth/gpt-5` (main, fallback only), JSON object generation/validation, and optional reasoning controls (effort/summary). Docs included.
+Add ChatGPT OAuth provider to use GPT‑5 via ChatGPT Plus/Pro/Teams subscriptions (no API key); integrate with unified services (main/fallback), enable JSON object generation with validation, and add optional reasoning controls (effort/summary); docs included.

Note: MD041 (“first line should be a heading”) can be ignored for changeset files.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
feat: add ChatGPT OAuth provider (gpt‑5) via subscription (no API key). Wired into unified services with model `chatgpt-oauth/gpt-5` (main, fallback only), JSON object generation/validation, and optional reasoning controls (effort/summary). Docs included.
Add ChatGPT OAuth provider to use GPT-5 via ChatGPT Plus/Pro/Teams subscriptions (no API key); integrate with unified services (main/fallback), enable JSON object generation with validation, and add optional reasoning controls (effort/summary); docs included.
🧰 Tools
🪛 markdownlint-cli2 (0.17.2)

5-5: First line in a file should be a top-level heading

(MD041, first-line-heading, first-line-h1)

🤖 Prompt for AI Agents
In .changeset/add-chatgpt-oauth-provider.md around line 5, the changeset summary
currently begins with "feat:" and is not a single, user-facing imperative
sentence; update that first line to a concise, imperative, user-facing summary
(e.g., "Add ChatGPT OAuth provider (gpt-5) via subscription with unified
services integration and optional reasoning controls") without the "feat:"
prefix, keeping it to one line and in imperative tone.

Comment on lines +21 to +27
Alternatively, set environment variables:

- `CHATGPT_OAUTH_ACCESS_TOKEN`
- `CHATGPT_OAUTH_ACCOUNT_ID`
- Optional: `CHATGPT_OAUTH_REFRESH_TOKEN`

## Configure Task Master

🧹 Nitpick (assertive)

List formatting and wording polish for env vars.

  • Add a blank line before the list.
  • Clarify that only access token and account ID are required; refresh token is optional.

Apply:

-Alternatively, set environment variables:
-
-- `CHATGPT_OAUTH_ACCESS_TOKEN`
-- `CHATGPT_OAUTH_ACCOUNT_ID`
-- Optional: `CHATGPT_OAUTH_REFRESH_TOKEN`
+Alternatively, set these environment variables:
+
+- `CHATGPT_OAUTH_ACCESS_TOKEN` (required)
+- `CHATGPT_OAUTH_ACCOUNT_ID` (required)
+- `CHATGPT_OAUTH_REFRESH_TOKEN` (optional)
📝 Committable suggestion


Suggested change
Alternatively, set environment variables:
- `CHATGPT_OAUTH_ACCESS_TOKEN`
- `CHATGPT_OAUTH_ACCOUNT_ID`
- Optional: `CHATGPT_OAUTH_REFRESH_TOKEN`
## Configure Task Master
Alternatively, set these environment variables:
- `CHATGPT_OAUTH_ACCESS_TOKEN` (required)
- `CHATGPT_OAUTH_ACCOUNT_ID` (required)
- `CHATGPT_OAUTH_REFRESH_TOKEN` (optional)
## Configure Task Master
🧰 Tools
🪛 LanguageTool

[grammar] ~21-~21: Use correct spacing
Context: ...lternatively, set environment variables: - CHATGPT_OAUTH_ACCESS_TOKEN - CHATGPT_OAUTH_ACCOUNT_ID - Optional: CHATGPT_OAUTH_REFRESH_TOKEN ...

(QB_NEW_EN_OTHER_ERROR_IDS_5)


[grammar] ~25-~25: Use correct spacing
Context: ... CHATGPT_OAUTH_ACCOUNT_ID - Optional: CHATGPT_OAUTH_REFRESH_TOKEN ## Configure Task Master Update `.taskmast...

(QB_NEW_EN_OTHER_ERROR_IDS_5)


[grammar] ~27-~27: Use correct spacing
Context: ...REFRESH_TOKEN ## Configure Task Master Update.taskmaster/config.json`: ```js...

(QB_NEW_EN_OTHER_ERROR_IDS_5)

🤖 Prompt for AI Agents
In docs/examples/chatgpt-oauth-usage.md around lines 21 to 27, add a blank line
before the environment-variable list and reword the list to clearly state that
CHATGPT_OAUTH_ACCESS_TOKEN and CHATGPT_OAUTH_ACCOUNT_ID are required while
CHATGPT_OAUTH_REFRESH_TOKEN is optional; update the bullet text for clarity and
brevity so it reads like: required: CHATGPT_OAUTH_ACCESS_TOKEN,
CHATGPT_OAUTH_ACCOUNT_ID; optional: CHATGPT_OAUTH_REFRESH_TOKEN.

Comment on lines +54 to +66
- `reasoningEffort`: `"low" | "medium" | "high" | null` (null disables reasoning)
- `reasoningSummary`: `"auto" | "none" | "concise" | "detailed" | null` (null omits summary)

These are automatically added with sensible defaults when you select the ChatGPT OAuth provider.

### Unsupported Parameters

The ChatGPT backend does **not** support these common parameters:
- `maxTokens` - The backend uses its own internal limits
- `temperature` - The backend uses its own internal settings

These parameters will be automatically excluded from your config when using ChatGPT OAuth.


🧹 Nitpick (assertive)

Minor wording/hyphenation in parameter notes.

  • Use en dashes or em dashes consistently.
  • Add a blank line before lists per MD032.

Apply:

-These are automatically added with sensible defaults when you select the ChatGPT OAuth provider.
+These are automatically added with sensible defaults when you select the ChatGPT OAuth provider.

-### Unsupported Parameters
+### Unsupported Parameters
@@
-- `maxTokens` - The backend uses its own internal limits
-- `temperature` - The backend uses its own internal settings
+- `maxTokens` — the backend uses its own internal limits
+- `temperature` — the backend uses its own internal settings
📝 Committable suggestion


Suggested change
- `reasoningEffort`: `"low" | "medium" | "high" | null` (null disables reasoning)
- `reasoningSummary`: `"auto" | "none" | "concise" | "detailed" | null` (null omits summary)
These are automatically added with sensible defaults when you select the ChatGPT OAuth provider.
### Unsupported Parameters
The ChatGPT backend does **not** support these common parameters:
- `maxTokens` - The backend uses its own internal limits
- `temperature` - The backend uses its own internal settings
These parameters will be automatically excluded from your config when using ChatGPT OAuth.
- `reasoningEffort`: `"low" | "medium" | "high" | null` (null disables reasoning)
- `reasoningSummary`: `"auto" | "none" | "concise" | "detailed" | null` (null omits summary)
These are automatically added with sensible defaults when you select the ChatGPT OAuth provider.
### Unsupported Parameters
The ChatGPT backend does **not** support these common parameters:
- `maxTokens` — the backend uses its own internal limits
- `temperature` — the backend uses its own internal settings
These parameters will be automatically excluded from your config when using ChatGPT OAuth.
🧰 Tools
🪛 LanguageTool

[grammar] ~54-~54: There might be a mistake here.
Context: ..."high" | null(null disables reasoning) -reasoningSummary: "auto" | "none" | "concise" | "detail...

(QB_NEW_EN)


[grammar] ~55-~55: Use correct spacing
Context: ... "detailed" | null` (null omits summary) These are automatically added with sensi...

(QB_NEW_EN_OTHER_ERROR_IDS_5)


[grammar] ~57-~57: Use correct spacing
Context: ...n you select the ChatGPT OAuth provider. ### Unsupported Parameters The ChatGPT back...

(QB_NEW_EN_OTHER_ERROR_IDS_5)


[grammar] ~59-~59: Use correct spacing
Context: ...th provider. ### Unsupported Parameters The ChatGPT backend does not support...

(QB_NEW_EN_OTHER_ERROR_IDS_5)


[grammar] ~61-~61: Use correct spacing
Context: ...not support these common parameters: - maxTokens - The backend uses its own internal limi...

(QB_NEW_EN_OTHER_ERROR_IDS_5)


[grammar] ~62-~62: Use hyphens correctly
Context: ...The backend uses its own internal limits - temperature - The backend uses its own internal sett...

(QB_NEW_EN_OTHER_ERROR_IDS_29)


[grammar] ~63-~63: There might be a mistake here.
Context: ...e backend uses its own internal settings These parameters will be automatically e...

(QB_NEW_EN_OTHER)


[grammar] ~65-~65: Use correct spacing
Context: ...om your config when using ChatGPT OAuth. ### Usage Restrictions GPT-5 via ChatGPT OA...

(QB_NEW_EN_OTHER_ERROR_IDS_5)

🪛 markdownlint-cli2 (0.17.2)

62-62: Lists should be surrounded by blank lines

(MD032, blanks-around-lists)

🤖 Prompt for AI Agents
In docs/examples/chatgpt-oauth-usage.md around lines 54 to 66, the parameter
notes need consistent dash usage and proper list spacing: replace inconsistent
hyphens with a consistent en‑dash or em‑dash style across the two parameter
lines and the “Unsupported Parameters” header items, and insert a blank line
before the bulleted lists (both the parameter list and the “Unsupported
Parameters” list) to satisfy MD032; keep wording the same aside from dash
character normalization and added blank lines.

Comment on lines +67 to +84
### Usage Restrictions

GPT-5 via ChatGPT OAuth does not support live browsing/retrieval. Do not use it as your "research" role. Configure it for main/fallback roles only.

Example with custom reasoning settings:

```jsonc
{
"models": {
"main": {
"provider": "chatgpt-oauth",
"modelId": "gpt-5",
"reasoningEffort": "high",
"reasoningSummary": "detailed"
}
}
}
```

🛠️ Refactor suggestion

Add a prominent known-limitation note and example clarity.

Document the model ID collision behavior to reduce setup confusion and keep the example narrowly focused.

Apply:

-### Usage Restrictions
+### Usage Restrictions
@@
-GPT-5 via ChatGPT OAuth does not support live browsing/retrieval. Do not use it as your "research" role. Configure it for main/fallback roles only.
+GPT‑5 via ChatGPT OAuth does not support live browsing/retrieval. Do not use it as your "research" role. Configure it for main/fallback roles only.

+> Note (model ID collision):
+> If both OpenAI and ChatGPT OAuth providers expose a model with the same ID (e.g., `"gpt-5"`), the CLI’s `task-master models --set-main gpt-5` may select the OpenAI provider by default. To use ChatGPT OAuth, explicitly set `"provider": "chatgpt-oauth"` in your config or edit the configuration manually.
@@
-Example with custom reasoning settings:
+Example with custom reasoning settings:
📝 Committable suggestion


Suggested change
### Usage Restrictions
GPT-5 via ChatGPT OAuth does not support live browsing/retrieval. Do not use it as your "research" role. Configure it for main/fallback roles only.
Example with custom reasoning settings:
```jsonc
{
"models": {
"main": {
"provider": "chatgpt-oauth",
"modelId": "gpt-5",
"reasoningEffort": "high",
"reasoningSummary": "detailed"
}
}
}
```
### Usage Restrictions
GPT-5 via ChatGPT OAuth does not support live browsing/retrieval. Do not use it as your "research" role. Configure it for main/fallback roles only.
> Note (model ID collision):
> If both OpenAI and ChatGPT OAuth providers expose a model with the same ID (e.g., `"gpt-5"`), the CLI’s `task-master models --set-main gpt-5` may select the OpenAI provider by default. To use ChatGPT OAuth, explicitly set `"provider": "chatgpt-oauth"` in your config or edit the configuration manually.
Example with custom reasoning settings:
🧰 Tools
🪛 LanguageTool

[grammar] ~67-~67: Use correct spacing
Context: ...g ChatGPT OAuth. ### Usage Restrictions GPT-5 via ChatGPT OAuth does not support...

(QB_NEW_EN_OTHER_ERROR_IDS_5)


[grammar] ~69-~69: Use correct spacing
Context: ...nfigure it for main/fallback roles only. Example with custom reasoning settings: ...

(QB_NEW_EN_OTHER_ERROR_IDS_5)


[grammar] ~71-~71: Use correct spacing
Context: ... Example with custom reasoning settings: jsonc { "models": { "main": { "provider": "chatgpt-oauth", "modelId": "gpt-5", "reasoningEffort": "high", "reasoningSummary": "detailed" } } } ## Notes - No API key is required; the pro...

(QB_NEW_EN_OTHER_ERROR_IDS_5)

🤖 Prompt for AI Agents
In docs/examples/chatgpt-oauth-usage.md around lines 67–84, add a prominent
"Known limitation" note above the "Usage Restrictions" section stating that
modelId values can collide across different providers (causing unexpected
overrides) and recommending unique namespacing (e.g., include provider or org
prefix) or using provider-scoped configuration; then clarify the example by
making it narrowly focused (keep only the "main" role, keep the reasoning fields
as-is) and add a short inline comment that GPT-5 via chatgpt-oauth does not
support live browsing and should be used only for main/fallback roles — ensure
the note explicitly mentions the modelId collision behavior and suggests
best-practice naming to avoid conflicts.

"lru-cache": "^10.2.0",
"ollama-ai-provider": "^1.2.0",
"openai": "^4.89.0",
"ai-sdk-provider-chatgpt-oauth": "1.0.0-ai-sdk-v4",

💡 Verification agent

🧩 Analysis chain

Pin compatibility between ai and ai-sdk-provider-chatgpt-oauth (guard against version skew).

You’re introducing ai-sdk-provider-chatgpt-oauth@1.0.0-ai-sdk-v4 alongside ai@^4.3.10. Good signal, but please verify the provider’s peer/engine expectations and whether it requires a matching ai minor. If the provider is lazily imported (like gemini-cli), consider moving it to optionalDependencies to avoid installs failing for users who don’t use ChatGPT OAuth.

Run:


🏁 Script executed:

#!/bin/bash
# Inspect peer deps and ensure compatibility
npm view ai-sdk-provider-chatgpt-oauth@1.0.0-ai-sdk-v4 peerDependencies optionalDependencies engines

# Check currently resolved versions in the repo
node -e "const p=require('./package.json');console.log({ai:p.dependencies.ai,'chatgpt-oauth':p.dependencies['ai-sdk-provider-chatgpt-oauth']})"

Length of output: 394


No version pinning required between “ai” and “ai-sdk-provider-chatgpt-oauth”
Verified that ai-sdk-provider-chatgpt-oauth@1.0.0-ai-sdk-v4 does not declare a peerDependency on ai (only on zod@^3.0.0 || ^4.0.0) and requires Node ≥18, which aligns with our setup. Your project already depends on ai@^4.3.10, so there’s no risk of mismatched minor versions or engine incompatibility.

• Location: package.json, line 83
• Current deps:
– ai: ^4.3.10
– ai-sdk-provider-chatgpt-oauth: 1.0.0-ai-sdk-v4

Optional refactor:
• If the ChatGPT OAuth provider is only used via lazy imports at runtime, consider moving it from dependencies into optionalDependencies so that users who don’t invoke ChatGPT OAuth won’t incur an install error if they lack required credentials or environment.

🤖 Prompt for AI Agents
package.json lines 83: The dependency
ai-sdk-provider-chatgpt-oauth@1.0.0-ai-sdk-v4 does not require version pinning
against our ai dependency and is compatible with Node ≥18, so no change is
required to resolve peer/engine concerns; if you prefer to avoid installing this
provider for users who never use ChatGPT OAuth, move the entry from
"dependencies" to "optionalDependencies" (update package.json accordingly and
run npm/yarn install to verify) so the package is only installed when available
at runtime.

Comment on lines +26 to 28
GEMINI_CLI: 'gemini-cli',
CHATGPT_OAUTH: 'chatgpt-oauth'
};

💡 Verification agent

🧩 Analysis chain

Add CUSTOM_PROVIDERS.CHATGPT_OAUTH — LGTM.

Naming aligns with downstream usage. Confirm CLI wiring picks this up where CUSTOM_PROVIDERS_ARRAY is iterated (commands/options help).

Run:


🏁 Script executed:

#!/bin/bash
rg -nP "CHATGPT_OAUTH" -C2
rg -nP "\bPROVIDERS\b.*chatgpt-oauth" scripts/modules/ai-services-unified.js -C2 || true

Length of output: 3546


Expose CHATGPT_OAUTH in the AI Services Unified CLI providers

The CHATGPT_OAUTH entry is defined in src/constants/providers.js and wired through the config manager, task-manager, docs, and tests, but it’s not yet included in the PROVIDERS array used by the unified CLI (scripts/modules/ai-services-unified.js). Without this, commands like --help or list‐providers won’t show chatgpt-oauth.

• File to update:
scripts/modules/ai-services-unified.js
Locate the PROVIDERS (or CUSTOM_PROVIDERS_ARRAY) definition and append CUSTOM_PROVIDERS.CHATGPT_OAUTH.

• Example diff:

 const PROVIDERS = [
   CUSTOM_PROVIDERS.GEMINI_CLI,
   CUSTOM_PROVIDERS.MCP,
+  CUSTOM_PROVIDERS.CHATGPT_OAUTH
 ];
📝 Committable suggestion


Suggested change
GEMINI_CLI: 'gemini-cli',
CHATGPT_OAUTH: 'chatgpt-oauth'
};
// scripts/modules/ai-services-unified.js
const PROVIDERS = [
CUSTOM_PROVIDERS.GEMINI_CLI,
CUSTOM_PROVIDERS.MCP,
CUSTOM_PROVIDERS.CHATGPT_OAUTH
];
🤖 Prompt for AI Agents
In scripts/modules/ai-services-unified.js (update the PROVIDERS /
CUSTOM_PROVIDERS_ARRAY definition) and note that CHATGPT_OAUTH is defined in
src/constants/providers.js lines 26-28, append CUSTOM_PROVIDERS.CHATGPT_OAUTH to
the PROVIDERS (or CUSTOM_PROVIDERS_ARRAY) so the unified CLI exposes the
chatgpt-oauth provider; ensure the value is added alongside the other provider
entries and update any related help/list output generation to include the new
array entry.

Comment on lines +4 to +6
import { parse as parseJsonc } from 'jsonc-parser';
import { jsonrepair } from 'jsonrepair';


🧹 Nitpick (assertive)

Promote the scan limit to a top-level constant for clarity and tuning.

Minor readability/maintainability improvement.

Apply:

 import { parse as parseJsonc } from 'jsonc-parser';
 import { jsonrepair } from 'jsonrepair';
 
+// Maximum characters to scan during boundary detection.
+// Keep conservative to balance performance and robustness.
+const DEFAULT_SCAN_LIMIT = 200_000;

And update the loop header accordingly:

-  const SCAN_LIMIT = Math.min(content.length, 200_000);
+  const SCAN_LIMIT = Math.min(content.length, DEFAULT_SCAN_LIMIT);
📝 Committable suggestion


Suggested change
import { parse as parseJsonc } from 'jsonc-parser';
import { jsonrepair } from 'jsonrepair';
import { parse as parseJsonc } from 'jsonc-parser';
import { jsonrepair } from 'jsonrepair';
// Maximum characters to scan during boundary detection.
// Keep conservative to balance performance and robustness.
const DEFAULT_SCAN_LIMIT = 200_000;
// …rest of the module…
function extractJson(content) {
// …other logic…
const SCAN_LIMIT = Math.min(content.length, DEFAULT_SCAN_LIMIT);
// …continue processing…
}
🤖 Prompt for AI Agents
In src/utils/json-extract.js around lines 4 to 6, the scan limit used inside the
loop should be promoted to a top-level constant for clarity and easier tuning;
declare a descriptive constant (e.g. const SCAN_LIMIT = <current numeric value
or a sensible default> ) near the imports and replace the hardcoded numeric
limit in the loop header with that constant so the loop reads something like for
(let i = 0; i < SCAN_LIMIT && ...); ensure you export or keep the constant
file-local depending on usage and update any nearby comments to reference
SCAN_LIMIT.

Comment on lines +21 to +29
// Strip common wrappers in a single pass
content = content
// Remove markdown fences
.replace(/^.*?```(?:json)?\s*([\s\S]*?)\s*```.*$/i, '$1')
// Remove variable assignments like: const x = {...};
.replace(/^\s*(?:const|let|var)\s+\w+\s*=\s*([\s\S]*?)(?:;|\s*)$/i, '$1')
// Remove common prefixes
.replace(/^(?:Here's|The)\s+(?:the\s+)?JSON.*?[:]\s*/i, '')
.trim();

🧹 Nitpick (assertive)

Broaden code fence unwrapping to common JSON-ish labels.

Support jsonc/json5/js/javascript/ts/typescript fences; this improves extraction on diverse outputs.

Apply:

-    // Remove markdown fences
-    .replace(/^.*?```(?:json)?\s*([\s\S]*?)\s*```.*$/i, '$1')
+    // Remove markdown fences (json/jsonc/json5/js/javascript/ts/typescript)
+    .replace(
+      /^.*?```(?:jsonc?|json5|js|javascript|ts|typescript)?\s*([\s\S]*?)\s*```.*$/i,
+      '$1'
+    )
📝 Committable suggestion


Suggested change
// Strip common wrappers in a single pass
content = content
// Remove markdown fences
.replace(/^.*?```(?:json)?\s*([\s\S]*?)\s*```.*$/i, '$1')
// Remove variable assignments like: const x = {...};
.replace(/^\s*(?:const|let|var)\s+\w+\s*=\s*([\s\S]*?)(?:;|\s*)$/i, '$1')
// Remove common prefixes
.replace(/^(?:Here's|The)\s+(?:the\s+)?JSON.*?[:]\s*/i, '')
.trim();
// Strip common wrappers in a single pass
content = content
// Remove markdown fences (json/jsonc/json5/js/javascript/ts/typescript)
.replace(
/^.*?```(?:jsonc?|json5|js|javascript|ts|typescript)?\s*([\s\S]*?)\s*```.*$/i,
'$1'
)
// Remove variable assignments like: const x = {...};
.replace(/^\s*(?:const|let|var)\s+\w+\s*=\s*([\s\S]*?)(?:;|\s*)$/i, '$1')
// Remove common prefixes
.replace(/^(?:Here's|The)\s+(?:the\s+)?JSON.*?[:]\s*/i, '')
.trim();

Comment on lines +65 to +99
// Single-pass boundary detection
const openChar = content[0];
const closeChar = openChar === '{' ? '}' : ']';
let depth = 0;
let inString = false;
let escapeNext = false;
let lastValidEnd = -1;

for (let i = 0; i < content.length && i < 10000; i++) {
const ch = content[i];
if (escapeNext) {
escapeNext = false;
continue;
}
if (ch === '\\') {
escapeNext = true;
continue;
}
if (ch === '"') {
inString = !inString;
continue;
}
if (inString) continue;
if (ch === openChar) depth++;
else if (ch === closeChar) {
depth--;
if (depth === 0) {
lastValidEnd = i + 1;
const candidate = content.slice(0, lastValidEnd);
const parsed = tryParse(candidate);
if (parsed !== undefined) return parsed;
}
}
}


🛠️ Refactor suggestion

Make boundary detection JSONC-aware (skip `//` and `/* */` comments) and lift the scan limit.

Current scan ignores string escapes (good) but not comments. Braces in comments can corrupt depth accounting. Also, 10k chars can be too tight for larger generations.

Apply:

-  // Single-pass boundary detection
-  const openChar = content[0];
-  const closeChar = openChar === '{' ? '}' : ']';
-  let depth = 0;
-  let inString = false;
-  let escapeNext = false;
-  let lastValidEnd = -1;
-
-  for (let i = 0; i < content.length && i < 10000; i++) {
-    const ch = content[i];
+  // Single-pass boundary detection
+  const openChar = content[0];
+  const closeChar = openChar === '{' ? '}' : ']';
+  let depth = 0;
+  let inString = false;
+  let escapeNext = false;
+  let inLineComment = false;
+  let inBlockComment = false;
+  let lastValidEnd = -1;
+
+  // Allow larger JSON payloads; still bounded for safety
+  const SCAN_LIMIT = Math.min(content.length, 200_000);
+
+  for (let i = 0; i < SCAN_LIMIT; i++) {
+    const ch = content[i];
+    const next = content[i + 1];
+
+    // Handle line/block comments when not inside strings
+    if (!inString) {
+      if (inLineComment) {
+        if (ch === '\n') inLineComment = false;
+        continue;
+      }
+      if (inBlockComment) {
+        if (ch === '*' && next === '/') {
+          inBlockComment = false;
+          i++; // skip '/'
+        }
+        continue;
+      }
+      if (ch === '/' && next === '/') {
+        inLineComment = true;
+        i++; // skip second '/'
+        continue;
+      }
+      if (ch === '/' && next === '*') {
+        inBlockComment = true;
+        i++; // skip '*'
+        continue;
+      }
+    }
+
     if (escapeNext) {
       escapeNext = false;
       continue;
     }
     if (ch === '\\') {
       escapeNext = true;
       continue;
     }
     if (ch === '"') {
       inString = !inString;
       continue;
     }
     if (inString) continue;
     if (ch === openChar) depth++;
     else if (ch === closeChar) {
       depth--;
       if (depth === 0) {
         lastValidEnd = i + 1;
         const candidate = content.slice(0, lastValidEnd);
         const parsed = tryParse(candidate);
         if (parsed !== undefined) return parsed;
       }
     }
   }
📝 Committable suggestion


Suggested change
// Single-pass boundary detection
const openChar = content[0];
const closeChar = openChar === '{' ? '}' : ']';
let depth = 0;
let inString = false;
let escapeNext = false;
let lastValidEnd = -1;
for (let i = 0; i < content.length && i < 10000; i++) {
const ch = content[i];
if (escapeNext) {
escapeNext = false;
continue;
}
if (ch === '\\') {
escapeNext = true;
continue;
}
if (ch === '"') {
inString = !inString;
continue;
}
if (inString) continue;
if (ch === openChar) depth++;
else if (ch === closeChar) {
depth--;
if (depth === 0) {
lastValidEnd = i + 1;
const candidate = content.slice(0, lastValidEnd);
const parsed = tryParse(candidate);
if (parsed !== undefined) return parsed;
}
}
}
// Single-pass boundary detection
const openChar = content[0];
const closeChar = openChar === '{' ? '}' : ']';
let depth = 0;
let inString = false;
let escapeNext = false;
let inLineComment = false;
let inBlockComment = false;
let lastValidEnd = -1;
// Allow larger JSON payloads; still bounded for safety
const SCAN_LIMIT = Math.min(content.length, 200_000);
for (let i = 0; i < SCAN_LIMIT; i++) {
const ch = content[i];
const next = content[i + 1];
// Handle line/block comments when not inside strings
if (!inString) {
if (inLineComment) {
if (ch === '\n') inLineComment = false;
continue;
}
if (inBlockComment) {
if (ch === '*' && next === '/') {
inBlockComment = false;
i++; // skip '/'
}
continue;
}
if (ch === '/' && next === '/') {
inLineComment = true;
i++; // skip second '/'
continue;
}
if (ch === '/' && next === '*') {
inBlockComment = true;
i++; // skip '*'
continue;
}
}
if (escapeNext) {
escapeNext = false;
continue;
}
if (ch === '\\') {
escapeNext = true;
continue;
}
if (ch === '"') {
inString = !inString;
continue;
}
if (inString) continue;
if (ch === openChar) depth++;
else if (ch === closeChar) {
depth--;
if (depth === 0) {
lastValidEnd = i + 1;
const candidate = content.slice(0, lastValidEnd);
const parsed = tryParse(candidate);
if (parsed !== undefined) return parsed;
}
}
}
🤖 Prompt for AI Agents
In src/utils/json-extract.js around lines 65 to 99, the single-pass JSON
boundary detection currently treats strings correctly but does not account for
JSONC comments and artificially limits the scan to 10000 chars; update the loop
to also detect and skip single-line (//...) and multi-line (/* ... */) comments
so braces inside comments do not affect depth counting, ensure the inString
toggle is not flipped while inside a comment, and remove or raise the
10000-character cap to scan the full content length (e.g., iterate to
content.length) so larger inputs are handled.

Comment on lines +230 to 236
ChatGPTOAuthProvider: jest.fn(() => ({
generateText: jest.fn(),
streamText: jest.fn(),
generateObject: jest.fn(),
getRequiredApiKeyName: jest.fn(() => 'CHATGPT_OAUTH_ACCESS_TOKEN'),
isRequiredApiKey: jest.fn(() => false)
}))

🧹 Nitpick (assertive)

Make the mock’s API-key API consistent with “no key required”.

Return null for the key name to match the convention used for other no-key providers (e.g., Ollama). This avoids accidental lookups by name in future assertions.

Apply:

 ChatGPTOAuthProvider: jest.fn(() => ({
   generateText: jest.fn(),
   streamText: jest.fn(),
   generateObject: jest.fn(),
-  getRequiredApiKeyName: jest.fn(() => 'CHATGPT_OAUTH_ACCESS_TOKEN'),
+  getRequiredApiKeyName: jest.fn(() => null),
   isRequiredApiKey: jest.fn(() => false)
 }))
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
 ChatGPTOAuthProvider: jest.fn(() => ({
   generateText: jest.fn(),
   streamText: jest.fn(),
   generateObject: jest.fn(),
-  getRequiredApiKeyName: jest.fn(() => 'CHATGPT_OAUTH_ACCESS_TOKEN'),
+  getRequiredApiKeyName: jest.fn(() => null),
   isRequiredApiKey: jest.fn(() => false)
 }))
🤖 Prompt for AI Agents
In tests/unit/ai-services-unified.test.js around lines 230 to 236, the
ChatGPTOAuthProvider mock currently returns a string key name while also
indicating no key is required; change the mock so getRequiredApiKeyName returns
null (to match the convention for no-key providers like Ollama) while keeping
isRequiredApiKey as false, ensuring future lookups/assertions won't accidentally
try to resolve a named API key.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 13

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (6)
package.json (1)

13-27: Add missing test scripts to align with repo guidelines.

Expose unit/integration/CI entry points for consistency.

   "scripts": {
     "test": "node --experimental-vm-modules node_modules/.bin/jest",
     "test:fails": "node --experimental-vm-modules node_modules/.bin/jest --onlyFailures",
     "test:watch": "node --experimental-vm-modules node_modules/.bin/jest --watch",
     "test:coverage": "node --experimental-vm-modules node_modules/.bin/jest --coverage",
+    "test:unit": "node --experimental-vm-modules node_modules/.bin/jest --testPathPattern=tests/unit",
+    "test:integration": "node --experimental-vm-modules node_modules/.bin/jest --testPathPattern=tests/integration",
     "test:e2e": "./tests/e2e/run_e2e.sh",
     "test:e2e-report": "./tests/e2e/run_e2e.sh --analyze-log",
+    "test:ci": "node --experimental-vm-modules node_modules/.bin/jest --ci --coverage",
     "prepare": "chmod +x bin/task-master.js mcp-server/server.js",
     "changeset": "changeset",
     "release": "changeset publish",
tests/unit/ai-services-unified.test.js (1)

124-126: Keep the “providersWithoutApiKeys” mock in sync (add chatgpt-oauth).

Your real config marks ChatGPT OAuth as no-key; the test mock must match to avoid brittle failures in future tests that consult this list.

Apply:

-  // Providers without API keys
-  providersWithoutApiKeys: ['ollama', 'bedrock', 'gemini-cli']
+  // Providers without API keys
+  providersWithoutApiKeys: ['ollama', 'bedrock', 'gemini-cli', 'chatgpt-oauth']
scripts/modules/commands.js (2)

3785-3799: Consolidate provider flag resolution to avoid 3x duplication.

The mapping appears three times (main/research/fallback). Extract a small helper to resolve the provider hint from flags once and reuse it; this reduces future drift when adding providers.

Example (minimal sketch; define near the action handler):

function resolveProviderHint(opts) {
  return opts.openrouter ? 'openrouter'
    : opts.ollama ? 'ollama'
    : opts.bedrock ? 'bedrock'
    : opts.claudeCode ? 'claude-code'
    : opts.geminiCli ? 'gemini-cli'
    : opts.chatgptOauth ? 'chatgpt-oauth'
    : undefined;
}

Then replace each ternary chain with providerHint: resolveProviderHint(options).


3830-3845: Verify setModel flows for chatgpt-oauth also inject reasoning defaults.

Given the new provider semantics, ensure downstream setModel logic applies parameter exclusions and reasoning defaults when invoked via the CLI for main/research/fallback. Tests cover this at the module level; consider adding a unit test for the models command handler that asserts setModel is called with providerHint 'chatgpt-oauth'.

I can add a focused unit test that stubs setModel and asserts providerHint resolution for --chatgpt-oauth.

Also applies to: 3858-3873, 3888-3903

scripts/modules/task-manager/models.js (1)

678-696: Use the canonical no-key list when reporting API key status.

Currently only ‘ollama’ is skipped. Align with config by excluding all providers in providersWithoutApiKeys (e.g., chatgpt-oauth, gemini-cli).

Apply:

-import {
+import {
   getMainModelId,
   getResearchModelId,
   getFallbackModelId,
   getAvailableModels,
   getMainProvider,
   getResearchProvider,
   getFallbackProvider,
   isApiKeySet,
   getMcpApiKeyStatus,
   getConfig,
   writeConfig,
   isConfigFilePresent,
   getAllProviders,
-  getBaseUrlForRole
+  getBaseUrlForRole,
+  providersWithoutApiKeys
 } from '../config-manager.js';
-    const providers = getAllProviders();
-    const providersToCheck = providers.filter(
-      (p) => p.toLowerCase() !== 'ollama'
-    ); // Ollama is not a provider, it's a service, doesn't need an api key usually
+    const providers = getAllProviders();
+    const noKey = new Set((providersWithoutApiKeys || []).map((p) => p.toLowerCase()));
+    const providersToCheck = providers.filter((p) => !noKey.has(String(p).toLowerCase()));
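A minimal runnable sketch of the Set-based filtering the diff proposes (provider names taken from this PR's no-key list; the sample `providers` array is illustrative):

```javascript
// Filter out providers that don't require API keys, case-insensitively.
const providersWithoutApiKeys = ['ollama', 'bedrock', 'gemini-cli', 'chatgpt-oauth'];
const providers = ['openai', 'Ollama', 'anthropic', 'chatgpt-oauth'];

const noKey = new Set(providersWithoutApiKeys.map((p) => p.toLowerCase()));
const providersToCheck = providers.filter(
	(p) => !noKey.has(String(p).toLowerCase())
);

console.log(providersToCheck); // ['openai', 'anthropic']
```

Lower-casing both sides guards against casing drift between config and the canonical list.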
src/ai-providers/chatgpt-oauth.js (1)

1-184: Module export surface does not match guidelines — add wrapper exports

Per repository guidelines for src/ai-providers/*.js, provider modules must export three functions: generate<ProviderName>Text, stream<ProviderName>Text, and generate<ProviderName>Object. This file exports only a class. Add thin wrappers to satisfy the contract without changing existing class usage.

Apply this diff at the end of the file:

+// Convenience wrappers to match src/ai-providers/*.js contract
+export async function generateChatGPTOAuthText(params) {
+  const p = new ChatGPTOAuthProvider();
+  return p.generateText(params);
+}
+
+export async function streamChatGPTOAuthText(params) {
+  const p = new ChatGPTOAuthProvider();
+  return p.streamText(params);
+}
+
+export async function generateChatGPTOAuthObject(params) {
+  const p = new ChatGPTOAuthProvider();
+  return p.generateObject(params);
+}

If you prefer, I can also add a matching unit test skeleton under tests/unit/ai-providers/chatgpt-oauth.test.js that mocks the SDK and asserts these wrappers call paths and error handling.

♻️ Duplicate comments (14)
README.md (2)

279-280: LGTM — list formatting nit fixed and links read cleanly.

The prior spacing/markdownlint concern looks resolved; bullets are properly delimited.


78-81: Clarify “no API key” note and include ChatGPT OAuth in the exception sentence.

Replace the hyphenated parenthetical with an em dash and update the “ONE API key” sentence to reflect ChatGPT OAuth as a no-key path.

- - ChatGPT OAuth (no API key required - uses ChatGPT Plus/Pro/Teams subscription)
+ - ChatGPT OAuth (no API key required — uses ChatGPT Plus/Pro/Teams subscription)

-Using the research model is optional but highly recommended. You will need at least ONE API key (unless using Claude Code). Adding all API keys enables you to seamlessly switch between model providers at will.
+Using the research model is optional but highly recommended. You will need at least ONE API key (unless using Claude Code or ChatGPT OAuth). Adding all API keys enables you to seamlessly switch between model providers at will.
.changeset/add-chatgpt-oauth-provider.md (1)

5-5: Make the changeset summary user-facing, imperative, and single-line (no “feat:” prefix).

Aligns with repo changeset rules; keeps release notes crisp.

-feat: add ChatGPT OAuth provider (gpt‑5) via subscription (no API key). Wired into unified services with model `chatgpt-oauth/gpt-5` (main, fallback only), JSON object generation/validation, and optional reasoning controls (effort/summary). Docs included.
+Add ChatGPT OAuth provider to use GPT‑5 via ChatGPT Plus/Pro/Teams subscriptions (no API key); integrate with unified services (main/fallback), enable JSON object generation with validation, and add optional reasoning controls (effort/summary); docs included.
docs/examples/chatgpt-oauth-usage.md (3)

21-26: Polish env var list: add blank line and mark required vs optional.

Improves clarity and satisfies MD032.

-Alternatively, set environment variables:
-
-- `CHATGPT_OAUTH_ACCESS_TOKEN`
-- `CHATGPT_OAUTH_ACCOUNT_ID`
-- Optional: `CHATGPT_OAUTH_REFRESH_TOKEN`
+Alternatively, set these environment variables:
+
+- `CHATGPT_OAUTH_ACCESS_TOKEN` (required)
+- `CHATGPT_OAUTH_ACCOUNT_ID` (required)
+- `CHATGPT_OAUTH_REFRESH_TOKEN` (optional)

54-66: Normalize dash style and surround lists with blank lines.

Consistent em dashes and proper blank lines improve readability and linting.

-These are automatically added with sensible defaults when you select the ChatGPT OAuth provider.
+These are automatically added with sensible defaults when you select the ChatGPT OAuth provider.

 ### Unsupported Parameters
-
-The ChatGPT backend does **not** support these common parameters:
-- `maxTokens` - The backend uses its own internal limits
-- `temperature` - The backend uses its own internal settings
+The ChatGPT backend does **not** support these common parameters:
+
+- `maxTokens` — the backend uses its own internal limits
+- `temperature` — the backend uses its own internal settings
+
-These parameters will be automatically excluded from your config when using ChatGPT OAuth.
+These parameters will be automatically excluded from your config when using ChatGPT OAuth.

67-84: Document modelId collision caveat and keep the example focused.

Adds an explicit “known limitation” note to reduce configuration confusion.

 ### Usage Restrictions

-GPT-5 via ChatGPT OAuth does not support live browsing/retrieval. Do not use it as your "research" role. Configure it for main/fallback roles only.
+GPT-5 via ChatGPT OAuth does not support live browsing/retrieval. Do not use it as your "research" role. Configure it for main/fallback roles only.
+
+> Note (modelId collision):
+> If both OpenAI and ChatGPT OAuth providers expose a model with the same ID (e.g., "gpt-5"), the CLI command `task-master models --set-main gpt-5` may select the OpenAI provider by default. To force ChatGPT OAuth, set `"provider": "chatgpt-oauth"` explicitly in your config or edit it manually.

-Example with custom reasoning settings:
+Example with custom reasoning settings:
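The caveat above can be sketched in config form. This is a hedged illustration only: the exact file shape is assumed from the fields `scripts/modules/task-manager/models.js` sets (`provider`, `reasoningEffort`, `reasoningSummary`), with `gpt-5` as the model named in this PR:

```json
{
	"models": {
		"main": {
			"provider": "chatgpt-oauth",
			"modelId": "gpt-5",
			"reasoningEffort": "medium",
			"reasoningSummary": "auto"
		}
	}
}
```

Setting `"provider": "chatgpt-oauth"` explicitly avoids the ambiguity when OpenAI exposes a model with the same ID.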
package.json (1)

83-83: Consider making the ChatGPT OAuth SDK an optionalDependency if it’s lazy-loaded.

Keeps installs lean for users who won’t use this provider; mirrors how gemini-cli is handled. If the module is required at runtime only when selected, this is safe.

   "dependencies": {
@@
-    "ai-sdk-provider-chatgpt-oauth": "1.0.0-ai-sdk-v4",
+    // moved to optionalDependencies
@@
   "optionalDependencies": {
     "@anthropic-ai/claude-code": "^1.0.88",
     "@biomejs/cli-linux-x64": "^1.9.4",
-    "ai-sdk-provider-gemini-cli": "^0.1.1"
+    "ai-sdk-provider-gemini-cli": "^0.1.1",
+    "ai-sdk-provider-chatgpt-oauth": "1.0.0-ai-sdk-v4"
   },

If the provider is not lazy-loaded, keep it in dependencies.

src/constants/providers.js (1)

26-27: Add CHATGPT_OAUTH to CUSTOM_PROVIDERS — LGTM

Provider constant and value naming are consistent with the codebase and CLI conventions. Object→array export via CUSTOM_PROVIDERS_ARRAY ensures the new provider is picked up wherever Object.values is used.

scripts/modules/ai-services-unified.js (1)

44-44: Provider registration for ChatGPT OAuth — LGTM

Import and static PROVIDERS map entry ("chatgpt-oauth": new ChatGPTOAuthProvider()) look correct and satisfy unified service integration.

Also applies to: 73-75

tests/unit/ai-services-unified.test.js (1)

230-236: Make ChatGPT OAuth mock reflect “no API key required”.

Return null for getRequiredApiKeyName to match the convention for no-key providers (e.g., Ollama), while keeping isRequiredApiKey as false.

Apply:

   ChatGPTOAuthProvider: jest.fn(() => ({
     generateText: jest.fn(),
     streamText: jest.fn(),
     generateObject: jest.fn(),
-    getRequiredApiKeyName: jest.fn(() => 'CHATGPT_OAUTH_ACCESS_TOKEN'),
+    getRequiredApiKeyName: jest.fn(() => null),
     isRequiredApiKey: jest.fn(() => false)
   }))
src/utils/json-extract.js (3)

4-6: Promote the scan limit to a top-level constant for clarity.

Apply:

 import { parse as parseJsonc } from 'jsonc-parser';
 import { jsonrepair } from 'jsonrepair';
 
+// Maximum characters to scan during boundary detection.
+// Keep conservative to balance performance and robustness.
+const DEFAULT_SCAN_LIMIT = 200_000;

21-29: Broaden code fence unwrapping to common JSON-ish labels.

Support jsonc/json5/js/javascript/ts/typescript to improve extraction across providers.

Apply:

-		// Remove markdown fences
-		.replace(/^.*?```(?:json)?\s*([\s\S]*?)\s*```.*$/i, '$1')
+		// Remove markdown fences (json/jsonc/json5/js/javascript/ts/typescript)
+		.replace(
+			/^.*?```(?:jsonc?|json5|js|javascript|ts|typescript)?\s*([\s\S]*?)\s*```.*$/i,
+			'$1'
+		)
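To sanity-check the broadened pattern, a quick standalone illustration (the regex is the one suggested above, rebuilt with `\x60` backtick escapes purely so this example stays readable):

```javascript
// Broadened fence-unwrapping pattern; \x60 is a literal backtick.
const FENCE_RE = new RegExp(
	'^.*?\\x60{3}(?:jsonc?|json5|js|javascript|ts|typescript)?\\s*([\\s\\S]*?)\\s*\\x60{3}.*$',
	'i'
);

const raw = '\x60\x60\x60jsonc\n{"a": 1}\n\x60\x60\x60';
console.log(raw.replace(FENCE_RE, '$1')); // {"a": 1}
```

A `jsonc`-labeled fence now unwraps cleanly, where the original `(?:json)?` variant would have left the label in place.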

65-99: Make boundary detection JSONC-aware and lift the 10k cap.

Braces inside comments can corrupt depth counting. Also, 10k can truncate valid payloads.

Apply:

-	// Single-pass boundary detection
-	const openChar = content[0];
-	const closeChar = openChar === '{' ? '}' : ']';
-	let depth = 0;
-	let inString = false;
-	let escapeNext = false;
-	let lastValidEnd = -1;
-
-	for (let i = 0; i < content.length && i < 10000; i++) {
-		const ch = content[i];
+	// Single-pass boundary detection
+	const openChar = content[0];
+	const closeChar = openChar === '{' ? '}' : ']';
+	let depth = 0;
+	let inString = false;
+	let escapeNext = false;
+	let inLineComment = false;
+	let inBlockComment = false;
+
+	// Allow larger JSON payloads; still bounded for safety
+	const SCAN_LIMIT = Math.min(content.length, DEFAULT_SCAN_LIMIT);
+
+	for (let i = 0; i < SCAN_LIMIT; i++) {
+		const ch = content[i];
+		const next = content[i + 1];
+
+		// Handle line/block comments when not inside strings
+		if (!inString) {
+			if (inLineComment) {
+				if (ch === '\n') inLineComment = false;
+				continue;
+			}
+			if (inBlockComment) {
+				if (ch === '*' && next === '/') {
+					inBlockComment = false;
+					i++; // skip '/'
+				}
+				continue;
+			}
+			if (ch === '/' && next === '/') {
+				inLineComment = true;
+				i++; // skip second '/'
+				continue;
+			}
+			if (ch === '/' && next === '*') {
+				inBlockComment = true;
+				i++; // skip '*'
+				continue;
+			}
+		}
 
 		if (escapeNext) {
 			escapeNext = false;
 			continue;
 		}
 		if (ch === '\\') {
 			escapeNext = true;
 			continue;
 		}
 		if (ch === '"') {
 			inString = !inString;
 			continue;
 		}
 		if (inString) continue;
 		if (ch === openChar) depth++;
 		else if (ch === closeChar) {
 			depth--;
 			if (depth === 0) {
-				lastValidEnd = i + 1;
-				const candidate = content.slice(0, lastValidEnd);
+				const candidate = content.slice(0, i + 1);
 				const parsed = tryParse(candidate);
 				if (parsed !== undefined) return parsed;
 			}
 		}
 	}
src/ai-providers/chatgpt-oauth.js (1)

134-157: Remove maxTokens/temperature from generateObject call (duplicate of prior review)

Forwarding maxTokens and temperature contradicts the provider contract and risks 4xxs. Also aligns with your own stated exclusions.

Apply this diff:

   const result = await generateText({
     model: client(params.modelId),
     messages,
-    maxTokens: params.maxTokens,
-    temperature: params.temperature
   });

Comment on lines +957 to 959
CUSTOM_PROVIDERS.MCP,
CUSTOM_PROVIDERS.CHATGPT_OAUTH
];

💡 Verification agent

🧩 Analysis chain

Also short-circuit getMcpApiKeyStatus for chatgpt-oauth (and gemini-cli)

You updated isApiKeySet and the exported providersWithoutApiKeys to include chatgpt-oauth, but getMcpApiKeyStatus still does not immediately return true for it (or gemini-cli). This can cause false warnings when checking .cursor/mcp.json for a provider that doesn’t use API keys.

Apply the following change outside the shown hunk in getMcpApiKeyStatus (within the switch over providerName) to keep behavior consistent:

// In getMcpApiKeyStatus() switch(providerName) { ... }
case 'gemini-cli':
  return true; // No API key needed
case 'chatgpt-oauth':
  return true; // OAuth via Codex; no API key in MCP env

Optional: to avoid list drift between isApiKeySet and getMcpApiKeyStatus, add a small helper (or move providersWithoutApiKeys above) and reuse it for an early return in getMcpApiKeyStatus.


🏁 Script executed:

#!/bin/bash
# Verify MCP short-circuit coverage for no-key providers
rg -nP "getMcpApiKeyStatus\\b" -n -C5 scripts/modules/config-manager.js
rg -nP "case 'gemini-cli'\\s*:\\s*return true" scripts/modules/config-manager.js || echo "MISSING: gemini-cli short-circuit"
rg -nP "case 'chatgpt-oauth'\\s*:\\s*return true" scripts/modules/config-manager.js || echo "MISSING: chatgpt-oauth short-circuit"

Length of output: 1335


Short-circuit getMcpApiKeyStatus for chatgpt-oauth and gemini-cli

File: scripts/modules/config-manager.js
Location: inside the getMcpApiKeyStatus(providerName, …) switch (around line 712)

The function currently lacks explicit cases for the no-key providers you added to providersWithoutApiKeys. Without these, users will see spurious “missing API key” warnings for chatgpt-oauth and gemini-cli.

Please add the following cases in the existing switch (providerName) { … } block:

   switch (providerName) {
     // … existing cases …
+    case 'gemini-cli':
+      return true;   // No API key needed
+    case 'chatgpt-oauth':
+      return true;   // OAuth via Codex; no API key in MCP env
     // … default or other cases …
   }

Optional refactor (to keep these lists in sync): extract your providersWithoutApiKeys array into a shared helper and at the top of getMcpApiKeyStatus, do:

if (providersWithoutApiKeys.includes(providerName)) {
  return true;
}

This avoids duplicating the list and drifting over time.

🤖 Prompt for AI Agents
scripts/modules/config-manager.js around lines 712 and the providers array at
957-959: the getMcpApiKeyStatus switch lacks cases for the no-key providers
(chatgpt-oauth and gemini-cli), causing false “missing API key” warnings; add
explicit cases in the existing switch for the chatgpt-oauth and gemini-cli
provider names that short-circuit and return true, or alternatively extract the
providersWithoutApiKeys array into a shared helper and at the top of
getMcpApiKeyStatus do a providersWithoutApiKeys.includes(providerName) check
that returns true to avoid duplicating the list.
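The optional refactor above can be sketched as follows. `mcpKeyStatus` and `lookupFn` are illustrative stand-ins, not the real `getMcpApiKeyStatus` signature; only the early-return shape is the point:

```javascript
// Illustrative early-return guard, assuming the shared no-key list
// suggested in the review.
const providersWithoutApiKeys = ['ollama', 'bedrock', 'gemini-cli', 'chatgpt-oauth'];

function mcpKeyStatus(providerName, lookupFn) {
	// No-key providers always report "configured" without touching mcp.json env.
	if (providersWithoutApiKeys.includes(providerName)) return true;
	return lookupFn(providerName);
}

console.log(mcpKeyStatus('chatgpt-oauth', () => false)); // true
console.log(mcpKeyStatus('openai', () => false)); // false
```

One shared list means isApiKeySet and getMcpApiKeyStatus cannot drift apart when the next no-key provider lands.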

Comment on lines +605 to 623
// Handle provider-specific parameter configuration
if (determinedProvider === CUSTOM_PROVIDERS.CHATGPT_OAUTH) {
// ChatGPT OAuth doesn't support maxTokens or temperature - remove them
delete currentConfig.models[role].maxTokens;
delete currentConfig.models[role].temperature;

// Add supported reasoning controls with defaults if not already present
if (typeof currentConfig.models[role].reasoningEffort === 'undefined') {
currentConfig.models[role].reasoningEffort = 'medium'; // provider default
}
if (typeof currentConfig.models[role].reasoningSummary === 'undefined') {
currentConfig.models[role].reasoningSummary = 'auto'; // provider default
}
} else {
// For other providers, handle maxTokens from supported-models.json
if (modelData && modelData.max_tokens) {
currentConfig.models[role].maxTokens = modelData.max_tokens;
}
}

🧹 Nitpick (assertive)

Enforce provider-specific params for ChatGPT OAuth.

Good: removing maxTokens/temperature and adding reasoning defaults aligns with provider behavior. Consider centralizing provider-specific param rules to avoid scattering if more providers need exclusions.

🤖 Prompt for AI Agents
In scripts/modules/task-manager/models.js around lines 605 to 623, the
provider-specific handling for ChatGPT OAuth (removing maxTokens/temperature and
injecting reasoning defaults) is implemented inline; extract and centralize
these rules into a provider-params helper (or config map) so future providers
can declare allowed, excluded and default parameters in one place. Refactor this
block to call the helper to: 1) remove blacklisted params for the determined
provider, 2) apply provider defaults only when undefined, and 3) fall back to
existing modelData logic for other providers; update any references/tests to use
the new helper.

Comment on lines +8 to +12
import { BaseAIProvider } from './base-provider.js';
import { log } from '../../scripts/modules/utils.js';
import { generateText } from 'ai';
import { extractJsonTolerant } from '../utils/json-extract.js';


🧹 Nitpick (assertive)

Optional: switch to ai.generateObject to leverage schema-guided prompting

Today you enforce JSON with a plain instruction and only validate with Zod after the fact; the model never sees the schema. Using ai.generateObject yields better adherence by embedding the Zod schema into the prompt internally (provider-agnostic), while still allowing your tolerant extractor as a fallback if desired.

Apply these diffs (imports + implementation). This also inherently avoids forwarding unsupported params:

- import { generateText } from 'ai';
+ import { generateText, generateObject as aiGenerateObject } from 'ai';
@@
-ChatGPTOAuthProvider.prototype.generateObject = async function (params) {
+ChatGPTOAuthProvider.prototype.generateObject = async function (params) {
   try {
     this.validateParams(params);
     this.validateMessages(params.messages);
     if (!params.schema)
       throw new Error('Schema is required for object generation');
     if (!params.objectName)
       throw new Error('Object name is required for object generation');

     // Demote system prompts to user to avoid contaminating ChatGPT OAuth instructions
     const demoted = this._demoteSystemToUser(params.messages);

-    // Prepend strict JSON enforcement as a user message
-    const jsonEnforcement =
-      'CRITICAL: You MUST respond with ONLY valid JSON. Do not include any explanatory text, markdown, code fences, or commentary. The first character must be { or [ and the last must be } or ]. Return exactly the object requested.';
-    const messages = [{ role: 'user', content: jsonEnforcement }, ...demoted];
+    const messages = demoted;

     const client = await this.getClient(params);
-    const result = await generateText({
-      model: client(params.modelId),
-      messages,
-      maxTokens: params.maxTokens,
-      temperature: params.temperature
-    });
-
-    const jsonText = extractJsonTolerant(result.text || '');
-    let parsed;
-    try {
-      parsed = JSON.parse(jsonText);
-    } catch (e) {
-      throw new Error(
-        `Failed to parse JSON from ChatGPT OAuth response: ${e.message}`
-      );
-    }
-
-    // Validate against provided Zod schema
-    const validated = params.schema.parse(parsed);
-
-    return {
-      object: validated,
-      usage: {
-        inputTokens: result.usage?.promptTokens,
-        outputTokens: result.usage?.completionTokens,
-        totalTokens: result.usage?.totalTokens
-      }
-    };
+    const result = await aiGenerateObject({
+      model: client(params.modelId),
+      messages,
+      schema: params.schema
+    });
+    return {
+      object: result.object,
+      usage: {
+        inputTokens: result.usage?.inputTokens,
+        outputTokens: result.usage?.outputTokens,
+        totalTokens: result.usage?.totalTokens
+      }
+    };
   } catch (error) {
     this.handleError('object generation', error);
   }
 };

If ai.generateObject proves incompatible with this provider during manual tests, keep the current approach but at least retain the earlier fixes (drop unsupported params and robust usage accounting). I can wire a tolerant-repair fallback path around aiGenerateObject if needed.

Also applies to: 134-183

🤖 Prompt for AI Agents
In src/ai-providers/chatgpt-oauth.js around lines 8-12 (and similarly apply to
134-183), replace the current generateText-based flow with ai.generateObject:
update imports to use generateObject from 'ai', remove/stop forwarding any
unsupported provider-specific params before calling the API, pass the Zod schema
to generateObject so the model sees the schema, and record usage/response cost
robustly as you do now; also keep your tolerant extractor as a fallback path to
parse or repair the model output if generateObject is incompatible during manual
testing (if incompatible, fall back to the existing generateText +
extractJsonTolerant approach but retain the unsupported-param pruning and
improved usage accounting).

Comment on lines +90 to +94
/**
* GPT-5 via ChatGPT backend does not honor maxTokens; keep default behavior.
* If needed in the future, we can override prepareTokenParam to suppress it.
*/
}

💡 Verification agent

🧩 Analysis chain

Do not forward unsupported params (maxTokens, temperature) for ChatGPT OAuth

Per design, ChatGPT OAuth ignores token/temperature controls and may 4xx on unknown fields. Your overrides still route through BaseAIProvider which forwards temperature, and your custom generateObject forwards both.

Apply these diffs to suppress them consistently:

@@
 export class ChatGPTOAuthProvider extends BaseAIProvider {
@@
   /**
    * GPT-5 via ChatGPT backend does not honor maxTokens; keep default behavior.
-   * If needed in the future, we can override prepareTokenParam to suppress it.
+   * Suppress token/temperature controls to avoid provider-side 4xxs.
    */
 }
+
+// Drop maxTokens universally
+ChatGPTOAuthProvider.prototype.prepareTokenParam = function _noTokens() {
+  return {};
+};
@@
 ChatGPTOAuthProvider.prototype.generateText = async function (params) {
-  const processed = {
-    ...params,
-    messages: this._demoteSystemToUser(params.messages)
-  };
+  const { maxTokens, temperature, ...rest } = params ?? {};
+  const processed = {
+    ...rest,
+    messages: this._demoteSystemToUser(rest.messages)
+  };
   return BaseAIProvider.prototype.generateText.call(this, processed);
 };
@@
 ChatGPTOAuthProvider.prototype.streamText = async function (params) {
-  const processed = {
-    ...params,
-    messages: this._demoteSystemToUser(params.messages)
-  };
+  const { maxTokens, temperature, ...rest } = params ?? {};
+  const processed = {
+    ...rest,
+    messages: this._demoteSystemToUser(rest.messages)
+  };
   return BaseAIProvider.prototype.streamText.call(this, processed);
 };

Run unit tests for this provider to ensure that the unified layer also filters these fields (defense in depth).

Also applies to: 117-125, 126-133


🏁 Script executed:

#!/bin/bash
set -e
# Locate the ChatGPTOAuthProvider class and its parameter-handling methods
rg -n "class ChatGPTOAuthProvider" -n src/ai-providers/chatgpt-oauth.js
rg -n "prepareTokenParam" -n src/ai-providers/chatgpt-oauth.js
rg -n "generateText" -n src/ai-providers/chatgpt-oauth.js
rg -n "streamText" -n src/ai-providers/chatgpt-oauth.js

Length of output: 777


🏁 Script executed:

#!/bin/bash
set -e

# Display header and imports of the ChatGPTOAuthProvider file
sed -n '1,60p' src/ai-providers/chatgpt-oauth.js

# Display the generateText and streamText overrides
sed -n '100,160p' src/ai-providers/chatgpt-oauth.js

# Search for any generateObject override in the repo
rg -n "generateObject" -n src

# Locate the BaseAIProvider definition
rg -n "class BaseAIProvider" -n src

# Find any prepareTokenParam implementations across the repo
rg -n "prepareTokenParam" -n src

Length of output: 6406


🏁 Script executed:

#!/bin/bash
set -e

# Show prepareTokenParam and its usage in generateText/streamText in BaseAIProvider
sed -n '130,220p' src/ai-providers/base-provider.js

Length of output: 2560


Suppress maxTokens and temperature for ChatGPT OAuth Provider
ChatGPT OAuth rejects unknown parameters and will 4xx on maxTokens or temperature. Currently your overrides merely pass these through to BaseAIProvider, which in turn forwards them via prepareTokenParam and the temperature field. You must strip them out at the ChatGPT-OAuth layer.

• Override prepareTokenParam to always return {}.
• In generateText (lines 118–123) and streamText (lines 126–131), destructure away maxTokens and temperature before calling the base methods.
• In generateObject (starting at line 134), remove the maxTokens and temperature fields from the call to generateText.

Example diff:

 export class ChatGPTOAuthProvider extends BaseAIProvider {
+  // ChatGPT OAuth ignores token controls and errors on unknown params
+  prepareTokenParam() {
+    return {};
+  }
 }

 ChatGPTOAuthProvider.prototype.generateText = async function (params) {
-  const processed = {
-    ...params,
-    messages: this._demoteSystemToUser(params.messages)
-  };
+  const { maxTokens, temperature, ...rest } = params ?? {};
+  const processed = {
+    ...rest,
+    messages: this._demoteSystemToUser(rest.messages)
+  };
   return BaseAIProvider.prototype.generateText.call(this, processed);
 };

 ChatGPTOAuthProvider.prototype.streamText = async function (params) {
-  const processed = {
-    ...params,
-    messages: this._demoteSystemToUser(params.messages)
-  };
+  const { maxTokens, temperature, ...rest } = params ?? {};
+  const processed = {
+    ...rest,
+    messages: this._demoteSystemToUser(rest.messages)
+  };
   return BaseAIProvider.prototype.streamText.call(this, processed);
 };

 ChatGPTOAuthProvider.prototype.generateObject = async function (params) {
   // …
-  const result = await generateText({
-    model: client(params.modelId),
-    messages,
-    maxTokens: params.maxTokens,
-    temperature: params.temperature
-  });
+  const result = await generateText({
+    model: client(params.modelId),
+    messages
+  });
   // …
 };

Additionally, add or update unit tests for ChatGPTOAuthProvider to assert that neither maxTokens nor temperature ever reach the underlying SDK call.

📝 Committable suggestion


Suggested change
 export class ChatGPTOAuthProvider extends BaseAIProvider {
   /**
    * GPT-5 via ChatGPT backend does not honor maxTokens; keep default behavior.
    * If needed in the future, we can override prepareTokenParam to suppress it.
    */
+  // ChatGPT OAuth ignores token controls and errors on unknown params
+  prepareTokenParam() {
+    return {};
+  }
 }
+
+ChatGPTOAuthProvider.prototype.generateText = async function (params) {
+  const { maxTokens, temperature, ...rest } = params ?? {};
+  const processed = {
+    ...rest,
+    messages: this._demoteSystemToUser(rest.messages)
+  };
+  return BaseAIProvider.prototype.generateText.call(this, processed);
+};
+
+ChatGPTOAuthProvider.prototype.streamText = async function (params) {
+  const { maxTokens, temperature, ...rest } = params ?? {};
+  const processed = {
+    ...rest,
+    messages: this._demoteSystemToUser(rest.messages)
+  };
+  return BaseAIProvider.prototype.streamText.call(this, processed);
+};
+
+ChatGPTOAuthProvider.prototype.generateObject = async function (params) {
+  // …
+  const result = await generateText({
+    model: client(params.modelId),
+    messages
+  });
+  // …
+  return result;
+};
🤖 Prompt for AI Agents
In src/ai-providers/chatgpt-oauth.js around lines 90–134, the ChatGPT OAuth
provider currently forwards maxTokens and temperature to BaseAIProvider which
causes 4xx errors; override prepareTokenParam to always return {} and in
generateText (lines ~118–123) and streamText (lines ~126–131) destructure and
remove maxTokens and temperature from the args before calling the base methods,
and in generateObject (starting ~134) remove those fields from the generateText
call; finally add/update unit tests for ChatGPTOAuthProvider to assert that
neither maxTokens nor temperature are passed to the underlying SDK calls.

Comment on lines +146 to +150
// Prepend strict JSON enforcement as a user message
const jsonEnforcement =
'CRITICAL: You MUST respond with ONLY valid JSON. Do not include any explanatory text, markdown, code fences, or commentary. The first character must be { or [ and the last must be } or ]. Return exactly the object requested.';
const messages = [{ role: 'user', content: jsonEnforcement }, ...demoted];

🧹 Nitpick (assertive)

Use objectName in the JSON enforcement prompt or drop it as a requirement

objectName is validated but never used. Either remove it from validation or incorporate it into the instruction to improve adherence.

Apply this diff to use it:

-  const jsonEnforcement =
-    'CRITICAL: You MUST respond with ONLY valid JSON. Do not include any explanatory text, markdown, code fences, or commentary. The first character must be { or [ and the last must be } or ]. Return exactly the object requested.';
+  const jsonEnforcement = `CRITICAL: You MUST respond with ONLY valid JSON for the "${params.objectName}" object. Do not include explanatory text, markdown, code fences, or commentary. The first character must be { or [ and the last must be } or ]. Return exactly the requested JSON object.`;
📝 Committable suggestion


Suggested change
// Prepend strict JSON enforcement as a user message
const jsonEnforcement =
'CRITICAL: You MUST respond with ONLY valid JSON. Do not include any explanatory text, markdown, code fences, or commentary. The first character must be { or [ and the last must be } or ]. Return exactly the object requested.';
const messages = [{ role: 'user', content: jsonEnforcement }, ...demoted];
// Prepend strict JSON enforcement as a user message
const jsonEnforcement = `CRITICAL: You MUST respond with ONLY valid JSON for the "${params.objectName}" object. Do not include explanatory text, markdown, code fences, or commentary. The first character must be { or [ and the last must be } or ]. Return exactly the requested JSON object.`;
const messages = [{ role: 'user', content: jsonEnforcement }, ...demoted];
🤖 Prompt for AI Agents
In src/ai-providers/chatgpt-oauth.js around lines 146 to 150, the JSON
enforcement prompt references a strict JSON requirement but ignores the
validated variable objectName; update the prompt to require the top-level object
to use the specific objectName (e.g., "Return exactly the object requested with
the top-level key '<objectName>' and nothing else") or alternatively remove the
objectName validation entirely; implement the chosen approach by interpolating
objectName into the jsonEnforcement string when present (or deleting the
objectName check), and ensure the messages array uses the updated prompt so the
model is explicitly instructed to produce a JSON object with that exact key.

export { VertexAIProvider } from './google-vertex.js';
export { ClaudeCodeProvider } from './claude-code.js';
export { GeminiCliProvider } from './gemini-cli.js';
export { ChatGPTOAuthProvider } from './chatgpt-oauth.js';
💡 Verification agent

🧩 Analysis chain

Verify provider module shape and lazy loading.

Ensure the new module adheres to the provider contract and uses dynamic import to avoid init-order issues.

Run:


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "1) Check file exists"
fd -a --strip-cwd-prefix -t f '^chatgpt-oauth\.js$' src/ai-providers

echo "2) Verify required exports"
rg -n -C1 -e 'export\s+(?:async\s+)?function\s+generateChatGPTOAuthText\(' \
            -e 'export\s+(?:async\s+)?function\s+streamChatGPTOAuthText\(' \
            -e 'export\s+(?:async\s+)?function\s+generateChatGPTOAuthObject\(' src/ai-providers/chatgpt-oauth.js

echo "3) Verify dynamic import of SDK provider"
rg -n -C2 -e 'import\([^\)]*ai-sdk-provider-chatgpt-oauth' src/ai-providers/chatgpt-oauth.js || {
  echo "WARN: dynamic import not found; checking for static import"
  rg -n -C2 -e "from\s+['\"]ai-sdk-provider-chatgpt-oauth['\"]" src/ai-providers/chatgpt-oauth.js
}

echo "4) Verify usage of core ai helpers and log utility"
rg -n -C2 -e "from\s+['\"]ai['\"]\s*;?" -e "generate(Text|Object)|streamText" src/ai-providers/chatgpt-oauth.js
rg -n -C2 -e "from\s+['\"]\.\./\.\./scripts/modules/utils\.js['\"]" -e "from\s+['\"]\.\./scripts/modules/utils\.js['\"]" src/ai-providers/chatgpt-oauth.js

Length of output: 333


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "1) Check file exists"
# Removed --strip-cwd-prefix to avoid conflicting fd options
fd -a -t f '^chatgpt-oauth\.js$' src/ai-providers

echo "2) Verify required exports"
rg -n -C1 -e 'export\s+(?:async\s+)?function\s+generateChatGPTOAuthText\(' \
            -e 'export\s+(?:async\s+)?function\s+streamChatGPTOAuthText\(' \
            -e 'export\s+(?:async\s+)?function\s+generateChatGPTOAuthObject\(' src/ai-providers/chatgpt-oauth.js

echo "3) Verify dynamic import of SDK provider"
rg -n -C2 -e 'import\([^\)]*ai-sdk-provider-chatgpt-oauth' src/ai-providers/chatgpt-oauth.js || {
  echo "WARN: dynamic import not found; checking for static import"
  rg -n -C2 -e "from\s+['\"]ai-sdk-provider-chatgpt-oauth['\"]" src/ai-providers/chatgpt-oauth.js
}

echo "4) Verify usage of core ai helpers and log utility"
rg -n -C2 -e "from\s+['\"]ai['\"]\s*;?" -e "generate(Text|Object)|streamText" src/ai-providers/chatgpt-oauth.js
rg -n -C2 -e "from\s+['\"]\.\./\.\./scripts/modules/utils\.js['\"]" -e "from\s+['\"]\.\./scripts/modules/utils\.js['\"]" src/ai-providers/chatgpt-oauth.js

Length of output: 467


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "### chatgpt-oauth.js content (first 200 lines) ###"
sed -n '1,200p' src/ai-providers/chatgpt-oauth.js

Length of output: 6189


Missing provider wrapper exports in chatgpt-oauth.js

The chatgpt-oauth module correctly uses a dynamic import('ai-sdk-provider-chatgpt-oauth') to lazily load the SDK (👍), and you’ve imported generateText and log per the core-helper requirements. However, it does not export the three top-level functions our provider contract mandates.

Please add these named exports so that callers can invoke the provider directly without instantiating the class:

export async function generateChatGPTOAuthText(params) { … }
export async function streamChatGPTOAuthText(params) { … }
export async function generateChatGPTOAuthObject(params) { … }

Each should internally instantiate new ChatGPTOAuthProvider() and delegate to its generateText, streamText, or generateObject method respectively.

File needing updates:

  • src/ai-providers/chatgpt-oauth.js

Suggested diff sketch:

 // at the bottom of chatgpt-oauth.js
+export async function generateChatGPTOAuthText(params) {
+  const provider = new ChatGPTOAuthProvider();
+  return provider.generateText(params);
+}
+
+export async function streamChatGPTOAuthText(params) {
+  const provider = new ChatGPTOAuthProvider();
+  return provider.streamText(params);
+}
+
+export async function generateChatGPTOAuthObject(params) {
+  const provider = new ChatGPTOAuthProvider();
+  return provider.generateObject(params);
+}

With these in place, the module shape fully conforms to our provider contract and can be dynamically loaded without init-order issues.

📝 Committable suggestion


Suggested change
export { ChatGPTOAuthProvider } from './chatgpt-oauth.js';

// ─────────────────────────────────────────────────────────────────────────────
// (At the bottom of src/ai-providers/chatgpt-oauth.js, after the class/provider)
// ─────────────────────────────────────────────────────────────────────────────
export async function generateChatGPTOAuthText(params) {
	const provider = new ChatGPTOAuthProvider();
	return provider.generateText(params);
}

export async function streamChatGPTOAuthText(params) {
	const provider = new ChatGPTOAuthProvider();
	return provider.streamText(params);
}

export async function generateChatGPTOAuthObject(params) {
	const provider = new ChatGPTOAuthProvider();
	return provider.generateObject(params);
}
🤖 Prompt for AI Agents
In src/ai-providers/chatgpt-oauth.js (around where ChatGPTOAuthProvider is
defined/exported), the module currently lacks the three top-level wrapper
exports required by the provider contract; add the following exported async
functions: generateChatGPTOAuthText(params), streamChatGPTOAuthText(params), and
generateChatGPTOAuthObject(params). Each function should instantiate a new
ChatGPTOAuthProvider() and delegate to the corresponding instance method
(generateText, streamText, generateObject), returning or yielding the result as
appropriate, so callers can import and call these functions directly without
creating the provider class themselves.

Comment on lines 813 to 833
test('should not require API key for ChatGPT OAuth provider', async () => {
	// This test verifies that ChatGPT OAuth is in the list of providers
	// that don't require API keys, similar to Ollama and Claude Code

	// The actual provider mock is already set up in the beforeEach
	// We just need to verify the behavior

	// ChatGPT OAuth should be treated like other no-API-key providers
	const chatgptOAuthProvider = new (await import('../../src/ai-providers/index.js')).ChatGPTOAuthProvider();

	// Verify it doesn't require an API key
	expect(chatgptOAuthProvider.isRequiredApiKey()).toBe(false);

	// Verify it returns a key name for display purposes
	expect(chatgptOAuthProvider.getRequiredApiKeyName()).toBe('CHATGPT_OAUTH_ACCESS_TOKEN');
});

⚠️ Potential issue

Fix test to assert null API key name for ChatGPT OAuth.

The provider is no-key; the test should not expect a token name.

Apply:

-  // Verify it returns a key name for display purposes
-  expect(chatgptOAuthProvider.getRequiredApiKeyName()).toBe('CHATGPT_OAUTH_ACCESS_TOKEN');
+  // No API key name for no-key providers
+  expect(chatgptOAuthProvider.getRequiredApiKeyName()).toBeNull();
📝 Committable suggestion


Suggested change
test('should not require API key for ChatGPT OAuth provider', async () => {
	// This test verifies that ChatGPT OAuth is in the list of providers
	// that don't require API keys, similar to Ollama and Claude Code
	// The actual provider mock is already set up in the beforeEach
	// We just need to verify the behavior
	// ChatGPT OAuth should be treated like other no-API-key providers
	const chatgptOAuthProvider = new (await import('../../src/ai-providers/index.js')).ChatGPTOAuthProvider();
	// Verify it doesn't require an API key
	expect(chatgptOAuthProvider.isRequiredApiKey()).toBe(false);
	// Verify it returns a key name for display purposes
	expect(chatgptOAuthProvider.getRequiredApiKeyName()).toBe('CHATGPT_OAUTH_ACCESS_TOKEN');
});
	// Verify it doesn't require an API key
	expect(chatgptOAuthProvider.isRequiredApiKey()).toBe(false);
	// No API key name for no-key providers
	expect(chatgptOAuthProvider.getRequiredApiKeyName()).toBeNull();
🤖 Prompt for AI Agents
In tests/unit/ai-services-unified.test.js around lines 813 to 829, the test
incorrectly asserts a non-null API key name for the ChatGPT OAuth provider;
since this provider is a no-key provider the test should assert null. Change the
expectation from getRequiredApiKeyName() === 'CHATGPT_OAUTH_ACCESS_TOKEN' to
expect(getRequiredApiKeyName()).toBeNull(), and adjust the surrounding comment
to state that no API key name is returned for display.

Comment on lines 830 to 850
test('should handle ChatGPT OAuth provider with reasoning parameters', async () => {
	// This test verifies that the ChatGPT OAuth provider mock is set up correctly
	// in the test environment and would forward reasoning parameters

	// The mock is already configured in beforeEach
	const chatgptOAuthProvider = new (await import('../../src/ai-providers/index.js')).ChatGPTOAuthProvider();

	// Verify the provider exists and has the expected methods
	expect(chatgptOAuthProvider.generateText).toBeDefined();
	expect(chatgptOAuthProvider.isRequiredApiKey).toBeDefined();
	expect(chatgptOAuthProvider.getRequiredApiKeyName).toBeDefined();

	// Verify it's configured as a no-API-key provider
	expect(chatgptOAuthProvider.isRequiredApiKey()).toBe(false);
});
🧹 Nitpick (assertive)

Cover reasoning parameter handling for ChatGPT OAuth end-to-end.

This test only checks method presence. Add an assertion that generateTextService strips maxTokens/temperature and forwards reasoningEffort/reasoningSummary when provider is chatgpt-oauth.

I can draft a focused unit test that sets main provider to chatgpt-oauth, stubs ChatGPTOAuthProvider.generateText, calls generateTextService({ role: 'main', … }), and asserts the params exclude maxTokens/temperature and include reasoningEffort/reasoningSummary defaults. Want me to add it?

🤖 Prompt for AI Agents
In tests/unit/ai-services-unified.test.js around lines 830 to 844, the test only
verifies method presence for ChatGPTOAuthProvider but must also assert
generateTextService behavior for chatgpt-oauth: add a focused unit test that (1)
sets the main provider to 'chatgpt-oauth' in the services config, (2) stubs
ChatGPTOAuthProvider.generateText to capture its input and return a resolved
value, (3) calls generateTextService with an input containing maxTokens and
temperature plus no reasoningEffort/reasoningSummary so defaults apply, and (4)
asserts the stub was called with parameters that do NOT include maxTokens or
temperature and that DO include reasoningEffort and reasoningSummary (with
expected defaults). Ensure you restore the stub and reset provider config after
the test.

Comment on lines +8 to +16
describe('extractJsonTolerant', () => {
	let extractJsonTolerant;

	beforeEach(async () => {
		// Reset modules before each test
		jest.resetModules();
		const module = await import('../../../src/utils/json-extract.js');
		extractJsonTolerant = module.extractJsonTolerant;
	});
⚠️ Potential issue

Remove async dynamic import; use a static import (tests must be synchronous).

No mocks here, so there’s no reason to re-import the module per test. This violates the “never use async in tests” rule and adds overhead.

Apply:

-import { jest } from '@jest/globals';
-
-describe('extractJsonTolerant', () => {
-	let extractJsonTolerant;
-
-	beforeEach(async () => {
-		// Reset modules before each test
-		jest.resetModules();
-		const module = await import('../../../src/utils/json-extract.js');
-		extractJsonTolerant = module.extractJsonTolerant;
-	});
+import { jest } from '@jest/globals';
+import { extractJsonTolerant } from '../../../src/utils/json-extract.js';
+
+describe('extractJsonTolerant', () => {
+	// No async module loading; tests remain synchronous
📝 Committable suggestion


Suggested change
describe('extractJsonTolerant', () => {
	let extractJsonTolerant;

	beforeEach(async () => {
		// Reset modules before each test
		jest.resetModules();
		const module = await import('../../../src/utils/json-extract.js');
		extractJsonTolerant = module.extractJsonTolerant;
	});
import { jest } from '@jest/globals';
import { extractJsonTolerant } from '../../../src/utils/json-extract.js';

describe('extractJsonTolerant', () => {
	// No async module loading; tests remain synchronous
🤖 Prompt for AI Agents
In tests/unit/utils/json-extract.test.js around lines 8-16, remove the async
dynamic import inside beforeEach and replace it with a synchronous static import
at the top of the test file; remove the unnecessary jest.resetModules() and the
await import call so the tests remain synchronous and avoid per-test re-import
overhead, and update beforeEach to no longer be async (or remove it entirely if
it only performed the dynamic import).

Comment on lines +89 to +111
describe('JSONC features (comments and trailing commas)', () => {
	it('should handle trailing commas', () => {
		const input = '{"a": 1, "b": 2,}';
		const result = extractJsonTolerant(input);
		const parsed = JSON.parse(result);
		expect(parsed).toEqual({ a: 1, b: 2 });
	});

	it('should handle trailing commas in arrays', () => {
		const input = '[1, 2, 3,]';
		const result = extractJsonTolerant(input);
		const parsed = JSON.parse(result);
		expect(parsed).toEqual([1, 2, 3]);
	});

	it('should handle nested trailing commas', () => {
		const input = '{"arr": [1, 2,], "obj": {"x": 1,},}';
		const result = extractJsonTolerant(input);
		const parsed = JSON.parse(result);
		expect(parsed).toEqual({ arr: [1, 2], obj: { x: 1 } });
	});
});
🧹 Nitpick (assertive)

Add tests for JSONC comments to lock in behavior.

You cover trailing commas, but not comments. Add single-line (//) and block (/* */) comment cases to prevent regressions when the extractor is updated to be JSONC-aware during boundary detection.

Apply:

  describe('JSONC features (comments and trailing commas)', () => {
+    it('should handle line comments (//)', () => {
+      const input = '{\n  "a": 1, // trailing info\n  "b": 2\n}';
+      const result = extractJsonTolerant(input);
+      const parsed = JSON.parse(result);
+      expect(parsed).toEqual({ a: 1, b: 2 });
+    });
+
+    it('should handle block comments (/* */)', () => {
+      const input = '/* header */\n{\n  "a": 1,\n  /* mid */ "b": 2\n} /* tail */';
+      const result = extractJsonTolerant(input);
+      const parsed = JSON.parse(result);
+      expect(parsed).toEqual({ a: 1, b: 2 });
+    });
📝 Committable suggestion


Suggested change
describe('JSONC features (comments and trailing commas)', () => {
	it('should handle trailing commas', () => {
		const input = '{"a": 1, "b": 2,}';
		const result = extractJsonTolerant(input);
		const parsed = JSON.parse(result);
		expect(parsed).toEqual({ a: 1, b: 2 });
	});

	it('should handle trailing commas in arrays', () => {
		const input = '[1, 2, 3,]';
		const result = extractJsonTolerant(input);
		const parsed = JSON.parse(result);
		expect(parsed).toEqual([1, 2, 3]);
	});

	it('should handle nested trailing commas', () => {
		const input = '{"arr": [1, 2,], "obj": {"x": 1,},}';
		const result = extractJsonTolerant(input);
		const parsed = JSON.parse(result);
		expect(parsed).toEqual({ arr: [1, 2], obj: { x: 1 } });
	});
});
describe('JSONC features (comments and trailing commas)', () => {
	it('should handle line comments (//)', () => {
		const input = '{\n  "a": 1, // trailing info\n  "b": 2\n}';
		const result = extractJsonTolerant(input);
		const parsed = JSON.parse(result);
		expect(parsed).toEqual({ a: 1, b: 2 });
	});

	it('should handle block comments (/* */)', () => {
		const input = '/* header */\n{\n  "a": 1,\n  /* mid */ "b": 2\n} /* tail */';
		const result = extractJsonTolerant(input);
		const parsed = JSON.parse(result);
		expect(parsed).toEqual({ a: 1, b: 2 });
	});

	it('should handle trailing commas', () => {
		const input = '{"a": 1, "b": 2,}';
		const result = extractJsonTolerant(input);
		const parsed = JSON.parse(result);
		expect(parsed).toEqual({ a: 1, b: 2 });
	});

	it('should handle trailing commas in arrays', () => {
		const input = '[1, 2, 3,]';
		const result = extractJsonTolerant(input);
		const parsed = JSON.parse(result);
		expect(parsed).toEqual([1, 2, 3]);
	});

	it('should handle nested trailing commas', () => {
		const input = '{"arr": [1, 2,], "obj": {"x": 1,},}';
		const result = extractJsonTolerant(input);
		const parsed = JSON.parse(result);
		expect(parsed).toEqual({ arr: [1, 2], obj: { x: 1 } });
	});
});
🤖 Prompt for AI Agents
In tests/unit/utils/json-extract.test.js around lines 89 to 110, the test suite
covers trailing commas but lacks cases for JSONC-style comments; add unit tests
that exercise single-line (// ...) and block (/* ... */) comments in objects and
arrays and mixed with trailing commas and nested structures so
extractJsonTolerant preserves valid JSON after comment removal; ensure each test
feeds an input string containing comments, calls extractJsonTolerant(input),
parses the result with JSON.parse, and asserts equality with the expected JS
value for object, array, and nested cases.

…I SDK v4

- New provider using ai-sdk-provider-chatgpt-oauth (no API key; OAuth tokens via Codex login)
- Wire into unified services and provider registry; export in ai-providers index
- Add model gpt-5 with cost=0 and roles [main, fallback]; no research role (no live retrieval)
- Preserve required ChatGPT backend instructions: demote Task Master system prompts to user
- Implement robust object generation: enforce JSON-only output, extract/repair JSON, validate with Zod
- Centralize tolerant JSON extraction using jsonc-parser and reuse across providers
- Expose optional reasoning controls (reasoningEffort, reasoningSummary) via config
- Mark provider as no-API-key-required in key checks
- Exclude unsupported maxTokens and temperature parameters from config
- Include reasoning defaults when ChatGPT OAuth is selected in setup
- Docs: add ChatGPT OAuth usage guide with reasoning settings and role guidance
- Tests: update provider mocks; all tests pass
@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 10

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
package.json (1)

14-27: Add missing test script aliases to meet guidelines.

Include test:unit, test:integration, and test:ci entries.

Apply:

   "scripts": {
     "test": "node --experimental-vm-modules node_modules/.bin/jest",
     "test:fails": "node --experimental-vm-modules node_modules/.bin/jest --onlyFailures",
     "test:watch": "node --experimental-vm-modules node_modules/.bin/jest --watch",
     "test:coverage": "node --experimental-vm-modules node_modules/.bin/jest --coverage",
+    "test:unit": "node --experimental-vm-modules node_modules/.bin/jest --coverage --testPathPattern=tests/unit",
+    "test:integration": "node --experimental-vm-modules node_modules/.bin/jest --coverage --testPathPattern=tests/integration",
+    "test:ci": "node --experimental-vm-modules node_modules/.bin/jest --runInBand",
     "test:e2e": "./tests/e2e/run_e2e.sh",
     "test:e2e-report": "./tests/e2e/run_e2e.sh --analyze-log",
     "prepare": "chmod +x bin/task-master.js mcp-server/server.js",
     "changeset": "changeset",
     "release": "changeset publish",
     "inspector": "npx @modelcontextprotocol/inspector node mcp-server/server.js",
     "mcp-server": "node mcp-server/server.js",
     "format-check": "biome format .",
     "format": "biome format . --write"
   },
scripts/modules/config-manager.js (1)

961-1013: Export getModelConfigForRole so unified services can forward provider-specific options (reasoningEffort/summary).

Without exporting this, ai-services-unified cannot read role-level extras and will silently drop reasoning options.

 export {
   // Core config access
   getConfig,
   writeConfig,
   ConfigurationError,
   isConfigFilePresent,
   // Claude Code settings
   getClaudeCodeSettings,
   getClaudeCodeSettingsForCommand,
   // Validation
   validateProvider,
   validateProviderModelCombination,
   validateClaudeCodeSettings,
   VALIDATED_PROVIDERS,
   CUSTOM_PROVIDERS,
   ALL_PROVIDERS,
   MODEL_MAP,
   getAvailableModels,
   // Role-specific getters (No env var overrides)
+  getModelConfigForRole,
   getMainProvider,
   getMainModelId,
   getMainMaxTokens,
   getMainTemperature,
   getResearchProvider,
   getResearchModelId,
   getResearchMaxTokens,
   getResearchTemperature,
   getFallbackProvider,
   getFallbackModelId,
   getFallbackMaxTokens,
   getFallbackTemperature,
   getBaseUrlForRole,
   // Global setting getters (No env var overrides)
   getLogLevel,
   getDebugFlag,
   getDefaultNumTasks,
   getDefaultSubtasks,
   getDefaultPriority,
   getProjectName,
   getOllamaBaseURL,
   getAzureBaseURL,
   getBedrockBaseURL,
   getResponseLanguage,
   getParametersForRole,
   getUserId,
   // API Key Checkers (still relevant)
   isApiKeySet,
   getMcpApiKeyStatus,
   // ADD: Function to get all provider names
   getAllProviders,
   getVertexProjectId,
   getVertexLocation
 };
♻️ Duplicate comments (27)
src/constants/providers.js (1)

26-27: Expose CHATGPT_OAUTH in the unified CLI providers (verify wiring).

Ensure the unified CLI lists and enables the new provider (e.g., in scripts/modules/ai-services-unified.js PROVIDERS/CUSTOM_PROVIDERS_ARRAY). Similar request was raised previously; confirming it’s wired avoids discoverability gaps in --help/list-providers.

Run to verify CLI wiring and config propagation:

#!/bin/bash
set -euo pipefail

echo "1) Provider constant usage"
rg -nP "\bCHATGPT_OAUTH\b" -C2

echo "2) Unified CLI providers list contains chatgpt-oauth"
rg -nP "(PROVIDERS|CUSTOM_PROVIDERS_ARRAY)[^\n]*\[" scripts/modules/ai-services-unified.js -n -C2 || true
rg -nP "CHATGPT_OAUTH" scripts/modules/ai-services-unified.js -n -C2 || true

echo "3) Config excludes API-key validation for chatgpt-oauth"
rg -nP "providersWithout(Api|API)Keys|providersWithoutApiKeys" scripts/modules/config-manager.js -n -C2
src/utils/json-extract.js (3)

23-29: Broaden code-fence unwrapping to common JSON-ish labels.

Covers jsonc/json5/js/javascript/ts/typescript fences to improve extraction robustness.

Apply:

-    // Remove markdown fences
-    .replace(/^.*?```(?:json)?\s*([\s\S]*?)\s*```.*$/i, '$1')
+    // Remove markdown fences (json/jsonc/json5/js/javascript/ts/typescript)
+    .replace(
+      /^.*?```(?:jsonc?|json5|js|javascript|ts|typescript)?\s*([\s\S]*?)\s*```.*$/i,
+      '$1'
+    )

4-7: Promote scan limit to a top-level constant for clarity and tuning.

Avoid scattering magic numbers and simplify later adjustments.

Apply:

 import { parse as parseJsonc } from 'jsonc-parser';
 import { jsonrepair } from 'jsonrepair';
 
+// Maximum characters to scan during boundary detection.
+// Keep conservative to balance performance and robustness.
+const DEFAULT_SCAN_LIMIT = 200_000;
@@
-	for (let i = 0; i < content.length && i < 10000; i++) {
+	const SCAN_LIMIT = Math.min(content.length, DEFAULT_SCAN_LIMIT);
+	for (let i = 0; i < SCAN_LIMIT; i++) {
 		const ch = content[i];

Also applies to: 73-79


65-99: Make boundary detection JSONC-aware (skip // and / / comments) and lift the scan limit.

Braces inside comments currently affect depth accounting; also, a 10k cap is tight for larger generations. Incorporate comment-state handling and use the promoted scan limit.

Apply:

 	// Single-pass boundary detection
 	const openChar = content[0];
 	const closeChar = openChar === '{' ? '}' : ']';
 	let depth = 0;
 	let inString = false;
 	let escapeNext = false;
-	let lastValidEnd = -1;
+	let inLineComment = false;
+	let inBlockComment = false;
 
-	for (let i = 0; i < content.length && i < 10000; i++) {
+	const SCAN_LIMIT = Math.min(content.length, DEFAULT_SCAN_LIMIT);
+	for (let i = 0; i < SCAN_LIMIT; i++) {
 		const ch = content[i];
+		const next = content[i + 1];
+
+		// Handle line/block comments when not inside strings
+		if (!inString) {
+			if (inLineComment) {
+				if (ch === '\n') inLineComment = false;
+				continue;
+			}
+			if (inBlockComment) {
+				if (ch === '*' && next === '/') {
+					inBlockComment = false;
+					i++; // skip '/'
+				}
+				continue;
+			}
+			if (ch === '/' && next === '/') {
+				inLineComment = true;
+				i++; // skip second '/'
+				continue;
+			}
+			if (ch === '/' && next === '*') {
+				inBlockComment = true;
+				i++; // skip '*'
+				continue;
+			}
+		}
 
 		if (escapeNext) {
 			escapeNext = false;
 			continue;
 		}
@@
 		if (ch === openChar) depth++;
 		else if (ch === closeChar) {
 			depth--;
 			if (depth === 0) {
-				lastValidEnd = i + 1;
-				const candidate = content.slice(0, lastValidEnd);
+				const candidate = content.slice(0, i + 1);
 				const parsed = tryParse(candidate);
 				if (parsed !== undefined) return parsed;
 			}
 		}
 	}
src/ai-providers/gemini-cli.js (1)

255-275: Preserve legacy behavior for incomplete JSON — verify tests lock this.

The guard that returns original text when no closing brace/bracket exists after the first opening maintains compatibility. Ensure unit tests assert this, along with successful repair paths.

Add/confirm tests similar to:

  • Input: "Here: {"a": 1" → extractJson returns original text.
  • Input: "Oops {a:1,}" (repairable) → extractJson returns valid JSON.
src/ai-providers/index.js (1)

19-19: Confirm wrapper exports exist in chatgpt-oauth.js (per provider contract).

Our conventions require top-level wrappers: generateChatGPTOAuthText, streamChatGPTOAuthText, generateChatGPTOAuthObject. If present, consider re-exporting them here for symmetry; otherwise, add them in chatgpt-oauth.js.

Run:

#!/bin/bash
set -euo pipefail

echo "Verify required wrapper exports"
rg -n -C1 -e 'export\s+(?:async\s+)?function\s+generateChatGPTOAuthText\(' \
            -e 'export\s+(?:async\s+)?function\s+streamChatGPTOAuthText\(' \
            -e 'export\s+(?:async\s+)?function\s+generateChatGPTOAuthObject\(' src/ai-providers/chatgpt-oauth.js
package.json (2)

83-93: Move ai-sdk-provider-chatgpt-oauth to optionalDependencies (lazy-loaded).

Since the SDK is dynamically imported at runtime by the provider, making it optional avoids install failures for users who don’t use ChatGPT OAuth, mirroring gemini-cli.

Apply:

   "dependencies": {
@@
-    "openai": "^4.89.0",
-    "ai-sdk-provider-chatgpt-oauth": "1.0.0-ai-sdk-v4",
+    "openai": "^4.89.0",
@@
   "optionalDependencies": {
     "@anthropic-ai/claude-code": "^1.0.88",
     "@biomejs/cli-linux-x64": "^1.9.4",
-    "ai-sdk-provider-gemini-cli": "^0.1.1"
+    "ai-sdk-provider-gemini-cli": "^0.1.1",
+    "ai-sdk-provider-chatgpt-oauth": "1.0.0-ai-sdk-v4"
   },

Also applies to: 89-93
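Moving the package to optionalDependencies pairs naturally with a guarded dynamic import. A sketch of the pattern; `loadOptional` and its error message are hypothetical, not code from this PR, though the package name is:

```javascript
// Lazy-load an optional dependency, failing with an actionable message
// only when the feature that needs it is actually used.
async function loadOptional(pkgName) {
	try {
		return await import(pkgName);
	} catch (err) {
		if (err.code === 'ERR_MODULE_NOT_FOUND') {
			throw new Error(
				`Optional dependency "${pkgName}" is not installed. ` +
					`Run: npm install ${pkgName}`
			);
		}
		throw err;
	}
}

// Usage: only users who select the ChatGPT OAuth provider pay the cost.
// const sdk = await loadOptional('ai-sdk-provider-chatgpt-oauth');
```

This mirrors how gemini-cli is handled: installs never fail for users who don't opt in, and the error surfaces at first use instead of at `npm install`.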


83-83: Version/engine compatibility already looks fine.

Provider doesn’t declare a peer on ai and targets Node ≥18, which matches our engines. No changes required.

.changeset/add-chatgpt-oauth-provider.md (1)

5-5: Rewrite changeset summary to a single, user-facing imperative line (drop “feat:” prefix).

Keep it concise and in imperative mood for CHANGELOG. MD041 can be ignored for changeset files.

-feat: add ChatGPT OAuth provider (gpt‑5) via subscription (no API key). Wired into unified services with model `chatgpt-oauth/gpt-5` (main, fallback only), JSON object generation/validation, and optional reasoning controls (effort/summary). Docs included.
+Add ChatGPT OAuth provider (GPT‑5) via subscription (no API key); integrate with unified services (main/fallback), enable JSON object generation with validation, add optional reasoning controls, and include setup docs.
README.md (1)

78-81: Clarify “no API key” note and include ChatGPT OAuth in the exception; fix punctuation.

Use an em dash and explicitly list ChatGPT OAuth as a no-key path; remove extra spacing.

- - ChatGPT OAuth (no API key required - uses ChatGPT Plus/Pro/Teams subscription)
+ - ChatGPT OAuth (no API key required — uses ChatGPT Plus/Pro/Teams subscription)
@@
-Using the research model is optional but highly recommended. You will need at least ONE API key (unless using Claude Code). Adding all API keys enables you to seamlessly switch between model providers at will.
+Using the research model is optional but highly recommended. You will need at least ONE API key (unless using Claude Code or ChatGPT OAuth). Adding all API keys enables you to seamlessly switch between model providers at will.
scripts/modules/config-manager.js (1)

733-784: Short-circuit getMcpApiKeyStatus for chatgpt-oauth and gemini-cli to avoid false “missing key” warnings.

The switch lacks cases for these no-key providers, causing spurious warnings when checking .cursor/mcp.json.

       switch (providerName) {
         case 'anthropic':
           apiKeyToCheck = mcpEnv.ANTHROPIC_API_KEY;
           placeholderValue = 'YOUR_ANTHROPIC_API_KEY_HERE';
           break;
+        case 'gemini-cli':
+          return true; // No API key needed
+        case 'chatgpt-oauth':
+          return true; // OAuth via Codex; no API key in MCP env
         case 'openai':
           apiKeyToCheck = mcpEnv.OPENAI_API_KEY;
           placeholderValue = 'YOUR_OPENAI_API_KEY_HERE'; // Assuming placeholder matches OPENAI
           break;

Optional (to prevent drift with providersWithoutApiKeys): early-return at the top of this function after mcpEnv is loaded:

+    // Short-circuit for providers that don’t use API keys in MCP
+    if (providersWithoutApiKeys.includes(providerName?.toLowerCase())) {
+      return true;
+    }
docs/examples/chatgpt-oauth-usage.md (3)

21-25: Polish env-var list and clarify required vs optional.

Add a blank line before the list and mark required/optional explicitly.

-Alternatively, set environment variables:
-
-- `CHATGPT_OAUTH_ACCESS_TOKEN`
-- `CHATGPT_OAUTH_ACCOUNT_ID`
-- Optional: `CHATGPT_OAUTH_REFRESH_TOKEN`
+Alternatively, set these environment variables:
+
+- `CHATGPT_OAUTH_ACCESS_TOKEN` (required)
+- `CHATGPT_OAUTH_ACCOUNT_ID` (required)
+- `CHATGPT_OAUTH_REFRESH_TOKEN` (optional)

54-66: Normalize dash usage and surround lists with blank lines.

Use an em dash for inline asides and ensure MD032 compliance.

-The ChatGPT OAuth provider supports reasoning controls:
-
-- `reasoningEffort`: `"low" | "medium" | "high" | null` (null disables reasoning)
-- `reasoningSummary`: `"auto" | "none" | "concise" | "detailed" | null` (null omits summary)
+The ChatGPT OAuth provider supports reasoning controls:
+
+- `reasoningEffort`: `"low" | "medium" | "high" | null` (null disables reasoning)
+- `reasoningSummary`: `"auto" | "none" | "concise" | "detailed" | null` (null omits summary)
@@
-### Unsupported Parameters
-
-The ChatGPT backend does **not** support these common parameters:
-- `maxTokens` - The backend uses its own internal limits
-- `temperature` - The backend uses its own internal settings
+### Unsupported Parameters
+
+The ChatGPT backend does **not** support these common parameters:
+
+- `maxTokens` — the backend uses its own internal limits
+- `temperature` — the backend uses its own internal settings
@@
-These parameters will be automatically excluded from your config when using ChatGPT OAuth.
+These parameters will be automatically excluded from your config when using ChatGPT OAuth.

67-84: Add a prominent note about model ID collisions and tighten the example.

Helps users avoid picking OpenAI’s provider when they intend ChatGPT OAuth for “gpt-5”.

 ### Usage Restrictions
 
 GPT-5 via ChatGPT OAuth does not support live browsing/retrieval. Do not use it as your "research" role. Configure it for main/fallback roles only.
 
+> Note (model ID collision):
+> If both OpenAI and ChatGPT OAuth expose the same `modelId` (e.g., `"gpt-5"`), `task-master models --set-main gpt-5` may select the OpenAI provider by default. To force ChatGPT OAuth, either run with `--chatgpt-oauth` or set `"provider": "chatgpt-oauth"` explicitly in your config.
+
 Example with custom reasoning settings:
 ```jsonc
 {
   "models": {
     "main": {
       "provider": "chatgpt-oauth",
       "modelId": "gpt-5",
       "reasoningEffort": "high",
       "reasoningSummary": "detailed"
     }
   }
}
```

</blockquote></details>
<details>
<summary>tests/unit/ai-services-unified.test.js (3)</summary><blockquote>

`229-236`: **Return null API key name for a no-key provider (ChatGPT OAuth).**

For no-key providers we standardize on getRequiredApiKeyName() → null. This also avoids future code trying to resolve a phantom env var.




Apply:

```diff
- ChatGPTOAuthProvider: jest.fn(() => ({
-   generateText: jest.fn(),
-   streamText: jest.fn(),
-   generateObject: jest.fn(),
-   getRequiredApiKeyName: jest.fn(() => 'CHATGPT_OAUTH_ACCESS_TOKEN'),
-   isRequiredApiKey: jest.fn(() => false)
- }))
+ ChatGPTOAuthProvider: jest.fn(() => ({
+   generateText: jest.fn(),
+   streamText: jest.fn(),
+   generateObject: jest.fn(),
+   getRequiredApiKeyName: jest.fn(() => null),
+   isRequiredApiKey: jest.fn(() => false)
+ }))
```

813-832: Fix test expectation: no API key name for ChatGPT OAuth provider.

The provider does not require an API key and should return null for the key name.

Apply:

   // Verify it doesn't require an API key
   expect(chatgptOAuthProvider.isRequiredApiKey()).toBe(false);

-  // Verify it returns a key name for display purposes
-  expect(chatgptOAuthProvider.getRequiredApiKeyName()).toBe(
-    'CHATGPT_OAUTH_ACCESS_TOKEN'
-  );
+  // No API key name for no-key providers
+  expect(chatgptOAuthProvider.getRequiredApiKeyName()).toBeNull();

834-850: Actually assert provider-specific param handling for ChatGPT OAuth.

Current test only checks method existence. Add a focused test that:

  • Sets main provider/model to chatgpt-oauth/gpt-5,
  • Calls generateTextService with parameters that normally include maxTokens and temperature,
  • Asserts the provider’s generateText was called without those unsupported params and with reasoning defaults.

Apply:

 test('should handle ChatGPT OAuth provider with reasoning parameters', async () => {
-  // This test verifies that the ChatGPT OAuth provider mock is set up correctly
-  // in the test environment and would forward reasoning parameters
-
-  // The mock is already configured in beforeEach
-  const chatgptOAuthProvider = new (
-    await import('../../src/ai-providers/index.js')
-  ).ChatGPTOAuthProvider();
-
-  // Verify the provider exists and has the expected methods
-  expect(chatgptOAuthProvider.generateText).toBeDefined();
-  expect(chatgptOAuthProvider.isRequiredApiKey).toBeDefined();
-  expect(chatgptOAuthProvider.getRequiredApiKeyName).toBeDefined();
-
-  // Verify it's configured as a no-API-key provider
-  expect(chatgptOAuthProvider.isRequiredApiKey()).toBe(false);
+  // Route main role to ChatGPT OAuth
+  mockGetMainProvider.mockReturnValue('chatgpt-oauth');
+  mockGetMainModelId.mockReturnValue('gpt-5');
+  // Ensure role params include knobs that must be stripped
+  mockGetParametersForRole.mockReturnValue({ maxTokens: 222, temperature: 0.7 });
+  // Stub provider call
+  const { ChatGPTOAuthProvider } = await import('../../src/ai-providers/index.js');
+  const chatgptOAuthProvider = new ChatGPTOAuthProvider();
+  chatgptOAuthProvider.generateText.mockResolvedValue({
+    text: 'ok',
+    usage: { inputTokens: 1, outputTokens: 1, totalTokens: 2 }
+  });
+
+  const res = await generateTextService({ role: 'main', prompt: 'Hello', session: { env: {} } });
+  expect(res.mainResult).toBe('ok');
+  expect(chatgptOAuthProvider.generateText).toHaveBeenCalledTimes(1);
+  const callArgs = chatgptOAuthProvider.generateText.mock.calls[0][0];
+  // Unsupported params must be absent
+  expect(callArgs).not.toHaveProperty('maxTokens');
+  expect(callArgs).not.toHaveProperty('temperature');
+  // Reasoning defaults should be present (either top-level or nested providerOptions)
+  const opts = callArgs.providerOptions || callArgs;
+  expect(opts.reasoningEffort).toBe('medium');
+  expect(opts.reasoningSummary).toBe('auto');
 });

Note: If ai-services-unified nests provider options differently, adjust opts selection accordingly.

tests/unit/utils/json-extract.test.js (2)

11-16: Remove async dynamic import; use a static import so tests remain synchronous.

No mocking is needed here; static import keeps tests deterministic and aligns with our “no async in tests” guideline.

Apply:

-import { jest } from '@jest/globals';
-
-describe('extractJsonTolerant', () => {
-  let extractJsonTolerant;
-
-  beforeEach(async () => {
-    // Reset modules before each test
-    jest.resetModules();
-    const module = await import('../../../src/utils/json-extract.js');
-    extractJsonTolerant = module.extractJsonTolerant;
-  });
+import { extractJsonTolerant } from '../../../src/utils/json-extract.js';
+
+describe('extractJsonTolerant', () => {

90-111: Add coverage for JSONC comments (`//` and `/* */`) to lock behavior.

You already test trailing commas; add comment-removal cases to guard against regressions.

Apply:

 describe('JSONC features (comments and trailing commas)', () => {
   it('should handle trailing commas', () => {
@@
   });
+
+  it('should handle line comments (//)', () => {
+    const input = '{\n  "a": 1, // trailing info\n  "b": 2\n}';
+    const result = extractJsonTolerant(input);
+    const parsed = JSON.parse(result);
+    expect(parsed).toEqual({ a: 1, b: 2 });
+  });
+
+  it('should handle block comments (/* */)', () => {
+    const input = '/* header */\n{\n  "a": 1,\n  /* mid */ "b": 2\n} /* tail */';
+    const result = extractJsonTolerant(input);
+    const parsed = JSON.parse(result);
+    expect(parsed).toEqual({ a: 1, b: 2 });
+  });
scripts/modules/task-manager/models.js (1)

605-623: Centralize provider-specific parameter rules (strip unsupported, apply defaults).

Good behavior for ChatGPT OAuth (remove maxTokens/temperature; add reasoning defaults). To avoid scattering as more providers require special handling, move this into a helper like applyProviderParamRules(modelCfg, provider, modelData).

Apply:

-    // Handle provider-specific parameter configuration
-    if (determinedProvider === CUSTOM_PROVIDERS.CHATGPT_OAUTH) {
-      // ChatGPT OAuth doesn't support maxTokens or temperature - remove them
-      delete currentConfig.models[role].maxTokens;
-      delete currentConfig.models[role].temperature;
-
-      // Add supported reasoning controls with defaults if not already present
-      if (typeof currentConfig.models[role].reasoningEffort === 'undefined') {
-        currentConfig.models[role].reasoningEffort = 'medium'; // provider default
-      }
-      if (typeof currentConfig.models[role].reasoningSummary === 'undefined') {
-        currentConfig.models[role].reasoningSummary = 'auto'; // provider default
-      }
-    } else {
-      // For other providers, handle maxTokens from supported-models.json
-      if (modelData && modelData.max_tokens) {
-        currentConfig.models[role].maxTokens = modelData.max_tokens;
-      }
-    }
+    // Handle provider-specific parameter configuration
+    applyProviderParamRules(currentConfig.models[role] || (currentConfig.models[role] = {}), determinedProvider, modelData);

Outside this hunk (helper suggestion):

// Place near the top of this module
function applyProviderParamRules(target, provider, modelData) {
  switch (provider) {
    case CUSTOM_PROVIDERS.CHATGPT_OAUTH: {
      delete target.maxTokens;
      delete target.temperature;
      if (typeof target.reasoningEffort === 'undefined') target.reasoningEffort = 'medium';
      if (typeof target.reasoningSummary === 'undefined') target.reasoningSummary = 'auto';
      break;
    }
    default: {
      if (modelData && modelData.max_tokens) target.maxTokens = modelData.max_tokens;
    }
  }
}
src/ai-providers/chatgpt-oauth.js (7)

90-94: Suppress token/temperature controls for ChatGPT OAuth at the source

Comment says “keep default behavior,” but BaseAIProvider forwards maxTokens and temperature. ChatGPT OAuth can 4xx on unknown fields. Override prepareTokenParam to drop them universally.

Apply this diff:

-/**
- * GPT-5 via ChatGPT backend does not honor maxTokens; keep default behavior.
- * If needed in the future, we can override prepareTokenParam to suppress it.
- */
+/**
+ * ChatGPT OAuth rejects/ignores token & temperature controls; suppress them.
+ */
+ChatGPTOAuthProvider.prototype.prepareTokenParam = function () {
+  return {};
+};

117-124: Strip unsupported params before delegating to BaseAIProvider.generateText

Without stripping, BaseAIProvider will forward temperature and token params. Remove them here and keep demotion.

Apply this diff:

 ChatGPTOAuthProvider.prototype.generateText = async function (params) {
-  const processed = {
-    ...params,
-    messages: this._demoteSystemToUser(params.messages)
-  };
+  const { maxTokens, temperature, ...rest } = params ?? {};
+  const processed = {
+    ...rest,
+    messages: this._demoteSystemToUser(rest.messages)
+  };
   return BaseAIProvider.prototype.generateText.call(this, processed);
 };

126-132: Strip unsupported params before delegating to BaseAIProvider.streamText

Same issue as generateText; remove temperature/maxTokens prior to delegation.

Apply this diff:

 ChatGPTOAuthProvider.prototype.streamText = async function (params) {
-  const processed = {
-    ...params,
-    messages: this._demoteSystemToUser(params.messages)
-  };
+  const { maxTokens, temperature, ...rest } = params ?? {};
+  const processed = {
+    ...rest,
+    messages: this._demoteSystemToUser(rest.messages)
+  };
   return BaseAIProvider.prototype.streamText.call(this, processed);
 };

146-150: Use objectName in JSON enforcement or drop it from validation

objectName is validated but unused. Interpolating it improves adherence.

Apply this diff:

-    const jsonEnforcement =
-      'CRITICAL: You MUST respond with ONLY valid JSON. Do not include any explanatory text, markdown, code fences, or commentary. The first character must be { or [ and the last must be } or ]. Return exactly the object requested.';
+    const jsonEnforcement = `CRITICAL: Respond with ONLY valid JSON for the "${params.objectName}" object. No prose, markdown, or code fences. The first char must be { or [ and the last must be } or ]. Return exactly the requested JSON object.`;

151-157: Do not forward maxTokens/temperature in ChatGPT OAuth object generation

Forwarding these increases risk of provider 4xxs and is inconsistent with your own filtering goals.

Apply this diff:

     const result = await generateText({
       model: client(params.modelId),
-      messages,
-      maxTokens: params.maxTokens,
-      temperature: params.temperature
+      messages
     });

172-179: Normalize usage keys and compute total when absent

AI SDK usage keys vary (input/output vs. prompt/completion). Normalize to avoid undefineds and keep totals accurate.

Apply this diff:

-    return {
-      object: validated,
-      usage: {
-        inputTokens: result.usage?.promptTokens,
-        outputTokens: result.usage?.completionTokens,
-        totalTokens: result.usage?.totalTokens
-      }
-    };
+    const inputTokens =
+      result.usage?.inputTokens ?? result.usage?.promptTokens ?? 0;
+    const outputTokens =
+      result.usage?.outputTokens ?? result.usage?.completionTokens ?? 0;
+    const totalTokens =
+      result.usage?.totalTokens ??
+      (Number.isFinite(inputTokens) && Number.isFinite(outputTokens)
+        ? inputTokens + outputTokens
+        : undefined);
+    return {
+      object: validated,
+      usage: { inputTokens, outputTokens, totalTokens }
+    };

134-183: Optional: switch to ai.generateObject for schema-guided prompting (with fallback)

Using ai.generateObject lets the model see the Zod schema, improving adherence. If manual tests show incompatibility with ChatGPT OAuth, keep your current tolerant-extraction path but retain the unsupported-param suppression above.

If adopting, update imports and replace the generateText call with ai.generateObject, similar to BaseAIProvider.generateObject, and keep your JSON-repair fallback only if needed.

Comment on lines +647 to +654
// Pass through optional provider-specific options for chatgpt-oauth
// If present in roleConfig, they will be forwarded to the provider's getClient.
...(roleConfig?.reasoningEffort !== undefined && {
reasoningEffort: roleConfig.reasoningEffort
}),
...(roleConfig?.reasoningSummary !== undefined && {
reasoningSummary: roleConfig.reasoningSummary
}),
⚠️ Potential issue

Reasoning options are never forwarded; roleConfig lacks these fields. Also exclude unsupported params for ChatGPT OAuth.

current issue

  • roleConfig from _getRoleConfiguration contains only { provider, modelId }, so the spreads for reasoningEffort/reasoningSummary are always skipped.
  • ChatGPT OAuth doesn’t support maxTokens/temperature; we should omit them to match docs and PR objectives.

fix

  • Import getModelConfigForRole and read it once into a local before building callParams (e.g., `const roleModelCfg = getModelConfigForRole(currentRole, effectiveProjectRoot);`), then use that local for the reasoning options.
  • After building callParams, delete maxTokens/temperature when providerName === 'chatgpt-oauth'.
-        ...(roleConfig?.reasoningEffort !== undefined && {
-          reasoningEffort: roleConfig.reasoningEffort
-        }),
-        ...(roleConfig?.reasoningSummary !== undefined && {
-          reasoningSummary: roleConfig.reasoningSummary
-        }),
+        ...(roleModelCfg?.reasoningEffort !== undefined && {
+          reasoningEffort: roleModelCfg.reasoningEffort
+        }),
+        ...(roleModelCfg?.reasoningSummary !== undefined && {
+          reasoningSummary: roleModelCfg.reasoningSummary
+        }),

Add this small post-construction guard (outside the selected range, immediately after const callParams = { ... }):

// Exclude unsupported parameters for ChatGPT OAuth
if (providerName?.toLowerCase() === 'chatgpt-oauth') {
  delete callParams.maxTokens;
  delete callParams.temperature;
}

And import the missing getter at the top (outside this hunk):

 import {
   MODEL_MAP,
   getAzureBaseURL,
   getBaseUrlForRole,
   getBedrockBaseURL,
   getDebugFlag,
   getFallbackModelId,
   getFallbackProvider,
   getMainModelId,
   getMainProvider,
   getOllamaBaseURL,
   getParametersForRole,
+  getModelConfigForRole,
   getResearchModelId,
   getResearchProvider,
   getResponseLanguage,
   getUserId,
   getVertexLocation,
   getVertexProjectId,
   isApiKeySet,
   providersWithoutApiKeys
 } from './config-manager.js';

Note: ensure getModelConfigForRole is exported from config-manager.js (see companion comment there).

Comment on lines +109 to +122
"chatgpt-oauth": [
{
"id": "gpt-5",
"swe_score": 0.749,
"cost_per_1m_tokens": {
"input": 0,
"output": 0,
"currency": "USD"
},
"allowed_roles": ["main", "fallback"],
"max_tokens": 100000,
"supported": true
}
],
🧹 Nitpick (assertive)

Add a human-readable name and check cost format consistency.

  • The provider block is correct, but per our guideline for supported-models.json, each model object should include a name field. Add a readable name (e.g., "GPT-5") to align with other entries and downstream UIs.
  • Minor consistency nit: only this entry includes currency in cost_per_1m_tokens; most other entries omit it. Either remove currency here or standardize across the file to avoid mixed shapes.

Apply:

   "chatgpt-oauth": [
     {
       "id": "gpt-5",
+      "name": "GPT-5",
       "swe_score": 0.749,
       "cost_per_1m_tokens": {
         "input": 0,
-        "output": 0,
-        "currency": "USD"
+        "output": 0
       },
       "allowed_roles": ["main", "fallback"],
       "max_tokens": 100000,
       "supported": true
     }
   ],
🤖 Prompt for AI Agents
In scripts/modules/supported-models.json around lines 109 to 122, add a
human-readable name field to the model object (e.g., "name": "GPT-5") and make
the cost_per_1m_tokens shape consistent with the rest of the file by removing
the "currency" key (or alternatively add "currency" to all other entries if you
prefer global standardization); ensure the JSON object remains valid (commas and
formatting) after the change.

💡 Verification agent

🧩 Analysis chain

Flag potential ambiguity: duplicate model id "gpt-5" across providers.

You now have gpt-5 under both chatgpt-oauth and openai. Code paths like setModel() first do availableModels.find((m) => m.id === modelId), which will match whichever provider appears first in this JSON. That can silently select the wrong provider if a caller passes only the model id with no provider hint.

Recommendation:

  • In setModel(), detect duplicate model ids across providers and require a providerHint (fail fast with an actionable error), or choose a deterministic default (e.g., prefer currently active provider family). I’ve added a concrete refactor suggestion in models.js to enforce this.

🏁 Script executed:

#!/bin/bash
# Check for duplicate model ids across providers
echo "Duplicate model ids across providers:"
python3 - << 'EOF'
import json
from collections import defaultdict

data = json.load(open('scripts/modules/supported-models.json'))
id_map = defaultdict(set)
for provider, models in data.items():
    for m in models:
        id_map[m['id']].add(provider)
for model_id, providers in id_map.items():
    if len(providers) > 1:
        print(model_id, "->", ", ".join(sorted(providers)))
EOF

# Search for model selection code in JavaScript
echo "Search for model selection code (find by id):"
rg -nP --type js 'availableModels\.find\(\s*\(m\)\s*=>\s*m\.id\s*===\s*modelId' -n

Length of output: 584


Enforce unique model selection by ID or require explicit provider hint

There are duplicate model IDs defined in scripts/modules/supported-models.json (e.g. gpt-5, gpt-4o, gpt-4o-mini), and the current lookup in scripts/modules/task-manager/models.js simply does:

  • Line 426:
    let modelData = availableModels.find((m) => m.id === modelId);

This will silently pick the first provider’s entry if only modelId is passed, risking the wrong provider being selected.

Necessary changes:

  • In scripts/modules/task-manager/models.js around line 420, replace the single‐match lookup with logic that:
    1. Gathers all models matching the given modelId.
    2. If more than one match is found and no providerHint is supplied, throw an error indicating ambiguity.
    3. Otherwise, select the correct model (by providerHint or the sole match).
  • Update the CLI integration tests in tests/integration/cli/commands.test.js (line 92) to include a providerHint when invoking setModel(), or to expect an error when it’s omitted for duplicate IDs.

Example refactor in models.js:

-  let modelData = availableModels.find((m) => m.id === modelId);
+  const matches = availableModels.filter((m) => m.id === modelId);
+  if (matches.length > 1 && !providerHint) {
+    throw new Error(
+      `Ambiguous model id “${modelId}” found in providers [${matches
+        .map(m => m.provider)
+        .join(', ')}]. Please specify a providerHint.`
+    );
+  }
+  const modelData = providerHint
+    ? matches.find((m) => m.provider === providerHint)
+    : matches[0];

This ensures deterministic behavior and fails fast on ambiguity.
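The lookup rules above can be condensed into one pure helper. The names mirror the diff, but the function itself is illustrative, not code from this PR:

```javascript
// Deterministic model resolution: gather all matches first, and fail fast
// on an ambiguous id without a provider hint instead of silently picking
// whichever provider appears first in supported-models.json.
function resolveModel(availableModels, modelId, providerHint) {
	const matches = availableModels.filter((m) => m.id === modelId);
	if (providerHint) {
		const hit = matches.find((m) => m.provider === providerHint);
		return hit ? { model: hit } : { error: 'MODEL_NOT_FOUND_FOR_PROVIDER' };
	}
	if (matches.length > 1) {
		return {
			error: 'AMBIGUOUS_MODEL_ID',
			providers: matches.map((m) => m.provider)
		};
	}
	return matches.length === 1
		? { model: matches[0] }
		: { error: 'MODEL_NOT_FOUND_NO_HINT' };
}
```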

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In scripts/modules/task-manager/models.js around lines 420–430 and
tests/integration/cli/commands.test.js around line 92, the current lookup uses
availableModels.find(m => m.id === modelId) which silently picks the first match
when duplicate model IDs exist; change the lookup to gather all matches for
modelId, if matches.length > 1 and providerHint is not provided throw a clear
ambiguity error, otherwise select the sole match or the one matching
providerHint; then update the CLI integration test at
tests/integration/cli/commands.test.js (line 92) to either pass a providerHint
when calling setModel() for a duplicate ID or assert that an ambiguity error is
thrown when providerHint is omitted.

Comment on lines +542 to +557
} else if (providerHint === CUSTOM_PROVIDERS.CHATGPT_OAUTH) {
// ChatGPT OAuth provider - check if model exists in our list
determinedProvider = CUSTOM_PROVIDERS.CHATGPT_OAUTH;
const chatgptOAuthModels = availableModels.filter(
(m) => m.provider === 'chatgpt-oauth'
);
const chatgptOAuthModelData = chatgptOAuthModels.find(
(m) => m.id === modelId
);
if (chatgptOAuthModelData) {
modelData = chatgptOAuthModelData;
report('info', `Setting ChatGPT OAuth model '${modelId}'.`);
} else {
warningMessage = `Warning: ChatGPT OAuth model '${modelId}' not found in supported models. Setting without validation.`;
report('warn', warningMessage);
}
💡 Verification agent

🧩 Analysis chain

Handle duplicate model ids deterministically when no providerHint is supplied.

Right now we do availableModels.find((m) => m.id === modelId) when no hint is provided. With gpt-5 present under both openai and chatgpt-oauth, which one “wins” depends on JSON order. Make ambiguity explicit and actionable.

Apply:

-      } else {
-        // Model not found and no provider hint was given
-        return {
-          success: false,
-          error: {
-            code: 'MODEL_NOT_FOUND_NO_HINT',
-            message: `Model ID "${modelId}" not found in Taskmaster's supported models. If this is a custom model, please specify the provider using --openrouter, --ollama, --bedrock, --azure, or --vertex.`
-          }
-        };
-      }
+      } else {
+        // Model not found internally. Before erroring, check for duplicates to guide the user.
+        const sameIdAcross = availableModels.filter((m) => m.id === modelId);
+        if (sameIdAcross.length > 1) {
+          return {
+            success: false,
+            error: {
+              code: 'AMBIGUOUS_MODEL_ID',
+              message: `Model ID "${modelId}" exists under multiple providers: ${sameIdAcross
+                .map((m) => m.provider)
+                .join(', ')}. Please pass a provider flag (e.g., --chatgpt-oauth, --openrouter, --ollama, --bedrock, --azure, --vertex).`
+            }
+          };
+        }
+        return {
+          success: false,
+          error: {
+            code: 'MODEL_NOT_FOUND_NO_HINT',
+            message: `Model ID "${modelId}" not found in Taskmaster's supported models. If this is a custom model, specify a provider flag (e.g., --chatgpt-oauth, --openrouter, --ollama, --bedrock, --azure, --vertex).`
+          }
+        };
+      }

🏁 Script executed:

#!/bin/bash
# Locate occurrences where modelId is matched without providerHint
rg -nP --context 3 "m\.id === modelId" scripts/modules/task-manager/models.js

# Display the surrounding logic to inspect the no-hint branch handling
sed -n '530,600p' scripts/modules/task-manager/models.js

Length of output: 3905


Explicitly handle ambiguous model IDs when no provider hint is supplied

In the no-hint branch of scripts/modules/task-manager/models.js, the current use of
availableModels.find((m) => m.id === modelId) can silently pick one provider when multiple exist. We should first detect duplicates and return a clear AMBIGUOUS_MODEL_ID error before falling back to the not-found case.

Pinpoint location:

  • File: scripts/modules/task-manager/models.js
  • In the else block under “// No hint provided (flags not used)”, replace the existing return.

Apply this diff:

       } else {
-        // Model not found and no provider hint was given
-        return {
-          success: false,
-          error: {
-            code: 'MODEL_NOT_FOUND_NO_HINT',
-            message: `Model ID "${modelId}" not found in Taskmaster's supported models. If this is a custom model, please specify the provider using --openrouter, --ollama, --bedrock, --azure, or --vertex.`
-          }
-        };
+        // No provider hint: detect ambiguous IDs or missing model
+        const candidates = availableModels.filter((m) => m.id === modelId);
+        if (candidates.length > 1) {
+          const providers = candidates.map((m) => m.provider).join(', ');
+          return {
+            success: false,
+            error: {
+              code: 'AMBIGUOUS_MODEL_ID',
+              message: `Model ID "${modelId}" exists under multiple providers: ${providers}. Please specify a provider flag (e.g., --chatgpt-oauth, --openrouter, --ollama, --bedrock, --azure, --vertex).`
+            }
+          };
+        }
+        // Single candidate or none: fallback to not-found error
+        return {
+          success: false,
+          error: {
+            code: 'MODEL_NOT_FOUND_NO_HINT',
+            message: `Model ID "${modelId}" not found in Taskmaster's supported models. If this is a custom model, specify a provider flag (e.g., --chatgpt-oauth, --openrouter, --ollama, --bedrock, --azure, --vertex).`
+          }
+        };
       }

This ensures determinism and provides actionable feedback when the same model ID appears across providers.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
} else if (providerHint === CUSTOM_PROVIDERS.CHATGPT_OAUTH) {
	// ChatGPT OAuth provider - check if model exists in our list
	determinedProvider = CUSTOM_PROVIDERS.CHATGPT_OAUTH;
	const chatgptOAuthModels = availableModels.filter(
		(m) => m.provider === 'chatgpt-oauth'
	);
	const chatgptOAuthModelData = chatgptOAuthModels.find(
		(m) => m.id === modelId
	);
	if (chatgptOAuthModelData) {
		modelData = chatgptOAuthModelData;
		report('info', `Setting ChatGPT OAuth model '${modelId}'.`);
	} else {
		warningMessage = `Warning: ChatGPT OAuth model '${modelId}' not found in supported models. Setting without validation.`;
		report('warn', warningMessage);
	}
} else {
	// No provider hint: detect ambiguous IDs or missing model
	const candidates = availableModels.filter((m) => m.id === modelId);
	if (candidates.length > 1) {
		const providers = candidates.map((m) => m.provider).join(', ');
		return {
			success: false,
			error: {
				code: 'AMBIGUOUS_MODEL_ID',
				message: `Model ID "${modelId}" exists under multiple providers: ${providers}. Please specify a provider flag (e.g., --chatgpt-oauth, --openrouter, --ollama, --bedrock, --azure, --vertex).`
			}
		};
	}
	// Single candidate or none: fallback to not-found error
	return {
		success: false,
		error: {
			code: 'MODEL_NOT_FOUND_NO_HINT',
			message: `Model ID "${modelId}" not found in Taskmaster's supported models. If this is a custom model, specify a provider flag (e.g., --chatgpt-oauth, --openrouter, --ollama, --bedrock, --azure, --vertex).`
		}
	};
}

Comment on lines +1 to +183
/**
 * src/ai-providers/chatgpt-oauth.js
 *
 * Implementation for interacting with OpenAI GPT-5 via ChatGPT OAuth
 * using the ai-sdk-provider-chatgpt-oauth package (AI SDK v4 build/tag).
 */

import { BaseAIProvider } from './base-provider.js';
import { log } from '../../scripts/modules/utils.js';
import { generateText } from 'ai';
import { extractJsonTolerant } from '../utils/json-extract.js';

let createChatGPTOAuth;

async function loadChatGptOAuthModule() {
	if (!createChatGPTOAuth) {
		try {
			const mod = await import('ai-sdk-provider-chatgpt-oauth');
			createChatGPTOAuth = mod.createChatGPTOAuth || mod.chatgptOAuth;
			if (!createChatGPTOAuth) {
				throw new Error('createChatGPTOAuth export not found');
			}
		} catch (err) {
			throw new Error(
				"ChatGPT OAuth SDK is not installed. Please install 'ai-sdk-provider-chatgpt-oauth@ai-sdk-v4' to use the chatgpt-oauth provider."
			);
		}
	}
}

export class ChatGPTOAuthProvider extends BaseAIProvider {
	constructor() {
		super();
		this.name = 'ChatGPT OAuth';
	}

	/**
	 * ChatGPT OAuth does not require a traditional API key; it uses OAuth tokens.
	 * We return a descriptive env var name for visibility but do not require it.
	 */
	getRequiredApiKeyName() {
		return 'CHATGPT_OAUTH_ACCESS_TOKEN';
	}

	isRequiredApiKey() {
		return false;
	}

	/**
	 * Override validateAuth to skip API key validation.
	 */
	validateAuth(_params) {
		// Auth handled internally by the SDK (reads ~/.codex/auth.json or env vars)
	}

	/**
	 * Creates and returns a ChatGPT OAuth client instance.
	 * @param {object} params - Parameters for client initialization
	 * @param {string} [params.baseURL] - Optional custom API endpoint
	 * @returns {Promise<Function>} ChatGPT OAuth client function
	 */
	async getClient(params) {
		try {
			await loadChatGptOAuthModule();

			const options = {};
			if (params?.baseURL) {
				options.baseURL = params.baseURL;
			}

			// Optional reasoning controls (matches provider API):
			// - reasoningEffort: 'low' | 'medium' | 'high' | null (disable)
			// - reasoningSummary: 'auto' | 'none' | 'concise' | 'detailed' | null (omit)
			if (typeof params?.reasoningEffort !== 'undefined') {
				options.reasoningEffort = params.reasoningEffort;
			}
			if (typeof params?.reasoningSummary !== 'undefined') {
				options.reasoningSummary = params.reasoningSummary;
			}

			// The provider will source credentials automatically from ~/.codex/auth.json
			// or environment variables (CHATGPT_OAUTH_ACCESS_TOKEN, CHATGPT_OAUTH_ACCOUNT_ID, etc.).
			const provider = createChatGPTOAuth(options);
			return provider;
		} catch (error) {
			this.handleError('client initialization', error);
		}
	}

	/**
	 * GPT-5 via ChatGPT backend does not honor maxTokens; keep default behavior.
	 * If needed in the future, we can override prepareTokenParam to suppress it.
	 */
}

// --- Provider-specific message handling ---
// The ChatGPT backend expects a very specific `instructions` string (from codex-instructions.txt).
// Appending arbitrary system prompts to instructions can cause 400 "Instructions are not valid".
// To stay compatible, we demote system messages to a regular user message and keep
// the special instructions intact.

ChatGPTOAuthProvider.prototype._demoteSystemToUser = function (messages) {
	if (!Array.isArray(messages) || messages.length === 0) return messages || [];
	const systemParts = messages.filter(
		(m) =>
			m.role === 'system' &&
			typeof m.content === 'string' &&
			m.content.trim() !== ''
	);
	const nonSystem = messages.filter((m) => m.role !== 'system');
	if (systemParts.length === 0) return nonSystem;
	const combined = systemParts.map((m) => m.content).join('\n\n');
	// Prepend a user message carrying the previous system content
	return [{ role: 'user', content: combined }, ...nonSystem];
};

// Override base calls to pre-process messages
ChatGPTOAuthProvider.prototype.generateText = async function (params) {
	const processed = {
		...params,
		messages: this._demoteSystemToUser(params.messages)
	};
	return BaseAIProvider.prototype.generateText.call(this, processed);
};

ChatGPTOAuthProvider.prototype.streamText = async function (params) {
	const processed = {
		...params,
		messages: this._demoteSystemToUser(params.messages)
	};
	return BaseAIProvider.prototype.streamText.call(this, processed);
};

ChatGPTOAuthProvider.prototype.generateObject = async function (params) {
	try {
		this.validateParams(params);
		this.validateMessages(params.messages);
		if (!params.schema)
			throw new Error('Schema is required for object generation');
		if (!params.objectName)
			throw new Error('Object name is required for object generation');

		// Demote system prompts to user to avoid contaminating ChatGPT OAuth instructions
		const demoted = this._demoteSystemToUser(params.messages);

		// Prepend strict JSON enforcement as a user message
		const jsonEnforcement =
			'CRITICAL: You MUST respond with ONLY valid JSON. Do not include any explanatory text, markdown, code fences, or commentary. The first character must be { or [ and the last must be } or ]. Return exactly the object requested.';
		const messages = [{ role: 'user', content: jsonEnforcement }, ...demoted];

		const client = await this.getClient(params);
		const result = await generateText({
			model: client(params.modelId),
			messages,
			maxTokens: params.maxTokens,
			temperature: params.temperature
		});

		const jsonText = extractJsonTolerant(result.text || '');
		let parsed;
		try {
			parsed = JSON.parse(jsonText);
		} catch (e) {
			throw new Error(
				`Failed to parse JSON from ChatGPT OAuth response: ${e.message}`
			);
		}

		// Validate against provided Zod schema
		const validated = params.schema.parse(parsed);

		return {
			object: validated,
			usage: {
				inputTokens: result.usage?.promptTokens,
				outputTokens: result.usage?.completionTokens,
				totalTokens: result.usage?.totalTokens
			}
		};
	} catch (error) {
		this.handleError('object generation', error);
	}
};

💡 Verification agent

🧩 Analysis chain

Unit tests: add coverage for param stripping and system demotion

Ensure we never pass maxTokens/temperature to the SDK and that system demotion works as intended. Mock the SDK factory and assert call arguments.

Example test checklist:

  • generateText strips unsupported params and demotes system→user.
  • streamText strips unsupported params and demotes.
  • generateObject strips unsupported params and validates schema.
  • getClient forwards reasoning options and baseURL.

Shell to scaffold/find tests:

🏁 Script executed:

#!/bin/bash
set -e
# Expect a new test file for this provider
fd -t f 'chatgpt-oauth.*test.js' tests || true

# Grep for assertions on param stripping
rg -nP "maxTokens|temperature" tests/unit/ai-providers -C3 || true

Length of output: 12077


Missing unit tests for ChatGPTOAuthProvider

The chatgpt-oauth provider currently lacks any dedicated test suite. We need to add a new file under tests/unit/ai-providers/ to cover its core behaviors, including parameter stripping, system→user demotion, object generation, and getClient option forwarding.

Please add tests/unit/ai-providers/chatgpt-oauth.test.js with at least the following checks:

  • generateText
    • System messages are demoted to a single user message.
    • No unsupported params (maxTokens, temperature) are passed to the SDK.
  • streamText
    • Same demotion and param-stripping behavior as generateText.
  • generateObject
    • Requires schema and objectName; throws when missing.
    • Prepends strict-JSON enforcement message.
    • Parses and validates JSON via the provided Zod schema.
  • getClient
    • Forwards optional baseURL, reasoningEffort, and reasoningSummary fields to createChatGPTOAuth.
    • Returns a client function that invokes the SDK factory correctly.

Example scaffolding commands:

# Create the test file
mkdir -p tests/unit/ai-providers
touch tests/unit/ai-providers/chatgpt-oauth.test.js

# Verify no existing tests reference chatgpt-oauth
rg -l 'chatgpt-oauth' tests/unit/ai-providers || echo "No tests found for chatgpt-oauth"

Let me know if you’d like help bootstrapping the basic test structure.

🤖 Prompt for AI Agents
In src/ai-providers/chatgpt-oauth.js lines 1-183: add a new unit test file
tests/unit/ai-providers/chatgpt-oauth.test.js that covers core behaviors: ensure
generateText and streamText demote system messages into one user message and do
not forward unsupported params like maxTokens/temperature to the SDK (mock the
SDK factory and assert call args); test generateObject throws when schema or
objectName are missing, prepends the strict JSON enforcement message, and
parses+validates returned JSON using a simple Zod schema (mock generateText to
return JSON text); and test getClient forwards baseURL, reasoningEffort,
reasoningSummary to createChatGPTOAuth and returns a client function that calls
the SDK factory correctly (mock import/createChatGPTOAuth). Use Jest with clear
mocks and assertions for call arguments and thrown errors.

💡 Verification agent

🧩 Analysis chain

Exports required by local guidelines: provide thin wrapper functions around the class

The repo appears to use class-based providers, but the coding guidelines also expect generateText/streamText/generateObject function exports. Add thin wrappers to satisfy both. If the provider registry expects classes only, confirm that and skip this.

Proposed additions (outside the shown ranges):

// Singleton + functional wrappers to satisfy src/ai-providers/*.js export expectations
const __chatgptOAuthProvider = new ChatGPTOAuthProvider();

export const generateChatGPTOAuthText = (params) =>
  __chatgptOAuthProvider.generateText(params);

export const streamChatGPTOAuthText = (params) =>
  __chatgptOAuthProvider.streamText(params);

export const generateChatGPTOAuthObject = (params) =>
  __chatgptOAuthProvider.generateObject(params);

Run this verification script to confirm the repo expects functions or classes for provider registration:


🏁 Script executed:

#!/bin/bash
set -e
# Inspect provider registry/usage
rg -nP "new\s+ChatGPTOAuthProvider|generateChatGPTOAuth(Text|Object)|streamChatGPTOAuthText" -C3 --type=js
rg -nP "registerProvider|providers\s*=\s*{|\bchatgpt-oauth\b" -C3 --type=js

Length of output: 25556


Add wrapper exports for ChatGPTOAuthProvider
The chatgpt-oauth.js module currently defines only the ChatGPTOAuthProvider class, but per our provider guidelines it must also export the three thin-wrapper functions: generateChatGPTOAuthText, streamChatGPTOAuthText, and generateChatGPTOAuthObject. Please add the following at the end of src/ai-providers/chatgpt-oauth.js:

+ // Provider wrapper exports
+ const __chatgptOAuthProvider = new ChatGPTOAuthProvider();
+
+ export const generateChatGPTOAuthText = (params) =>
+   __chatgptOAuthProvider.generateText(params);
+
+ export const streamChatGPTOAuthText = (params) =>
+   __chatgptOAuthProvider.streamText(params);
+
+ export const generateChatGPTOAuthObject = (params) =>
+   __chatgptOAuthProvider.generateObject(params);

This aligns with the requirement that each provider module exports:

  • generate<ProviderName>Text
  • stream<ProviderName>Text
  • generate<ProviderName>Object
🤖 Prompt for AI Agents
In src/ai-providers/chatgpt-oauth.js around lines 1-183, the module defines
ChatGPTOAuthProvider but lacks the required thin wrapper exports; add three
exported functions named generateChatGPTOAuthText, streamChatGPTOAuthText, and
generateChatGPTOAuthObject that each create an instance of ChatGPTOAuthProvider
(or reuse a single instance), call the corresponding instance method
(generateText, streamText, generateObject) with the incoming params, and
return/propagate the result (or thrown errors) so other code can import these
plain functions; ensure they forward all params unchanged and are exported from
the module.

Comment on lines +18 to +26
			const mod = await import('ai-sdk-provider-chatgpt-oauth');
			createChatGPTOAuth = mod.createChatGPTOAuth || mod.chatgptOAuth;
			if (!createChatGPTOAuth) {
				throw new Error('createChatGPTOAuth export not found');
			}
		} catch (err) {
			throw new Error(
				"ChatGPT OAuth SDK is not installed. Please install 'ai-sdk-provider-chatgpt-oauth@ai-sdk-v4' to use the chatgpt-oauth provider."
			);

⚠️ Potential issue

Harden dynamic import: support default export fallback and validate factory shape

Current detection may miss default-exported factories. Add robust fallback and a clear install hint. This prevents false "export not found" errors.

Apply this diff:

-      const mod = await import('ai-sdk-provider-chatgpt-oauth');
-      createChatGPTOAuth = mod.createChatGPTOAuth || mod.chatgptOAuth;
-      if (!createChatGPTOAuth) {
-        throw new Error('createChatGPTOAuth export not found');
-      }
+      const mod = await import('ai-sdk-provider-chatgpt-oauth');
+      createChatGPTOAuth =
+        mod.createChatGPTOAuth ||
+        mod.chatgptOAuth ||
+        mod.default?.createChatGPTOAuth ||
+        mod.default?.chatgptOAuth ||
+        (typeof mod.default === 'function' ? mod.default : undefined);
+      if (typeof createChatGPTOAuth !== 'function') {
+        throw new Error(
+          'ChatGPT OAuth provider factory not found (createChatGPTOAuth). Ensure ai-sdk-provider-chatgpt-oauth@ai-sdk-v4 is installed.'
+        );
+      }
🤖 Prompt for AI Agents
In src/ai-providers/chatgpt-oauth.js around lines 18 to 26, the dynamic import
currently only checks named exports and throws if createChatGPTOAuth isn't
found; update it to also check for a default export (e.g., mod.default) and
accept factory under either createChatGPTOAuth or chatgptOAuth keys or as the
default export, then validate that the resolved value is a function (or object
with expected factory method) before using it; if validation fails, throw a
clearer error that includes an install hint and the actual shape found (e.g.,
typeof resolved) to aid debugging.

Comment on lines +62 to +85
	async getClient(params) {
		try {
			await loadChatGptOAuthModule();

			const options = {};
			if (params?.baseURL) {
				options.baseURL = params.baseURL;
			}

			// Optional reasoning controls (matches provider API):
			// - reasoningEffort: 'low' | 'medium' | 'high' | null (disable)
			// - reasoningSummary: 'auto' | 'none' | 'concise' | 'detailed' | null (omit)
			if (typeof params?.reasoningEffort !== 'undefined') {
				options.reasoningEffort = params.reasoningEffort;
			}
			if (typeof params?.reasoningSummary !== 'undefined') {
				options.reasoningSummary = params.reasoningSummary;
			}

			// The provider will source credentials automatically from ~/.codex/auth.json
			// or environment variables (CHATGPT_OAUTH_ACCESS_TOKEN, CHATGPT_OAUTH_ACCOUNT_ID, etc.).
			const provider = createChatGPTOAuth(options);
			return provider;
		} catch (error) {

🧹 Nitpick (assertive)

Reasoning options passthrough: add light validation and document defaults

You forward reasoningEffort/reasoningSummary as-is. Add minimal validation to catch typos early, and optionally apply reasoning defaults when ChatGPT OAuth is selected via the CLI.

Apply this diff:

       const options = {};
       if (params?.baseURL) {
         options.baseURL = params.baseURL;
       }
@@
-      if (typeof params?.reasoningEffort !== 'undefined') {
+      if (typeof params?.reasoningEffort !== 'undefined') {
+        const allowedEffort = new Set(['low', 'medium', 'high', null]);
+        if (!allowedEffort.has(params.reasoningEffort)) {
+          throw new Error(
+            `Invalid reasoningEffort: ${params.reasoningEffort}. Expected one of 'low' | 'medium' | 'high' | null.`
+          );
+        }
         options.reasoningEffort = params.reasoningEffort;
       }
-      if (typeof params?.reasoningSummary !== 'undefined') {
+      if (typeof params?.reasoningSummary !== 'undefined') {
+        const allowedSummary = new Set(['auto', 'none', 'concise', 'detailed', null]);
+        if (!allowedSummary.has(params.reasoningSummary)) {
+          throw new Error(
+            `Invalid reasoningSummary: ${params.reasoningSummary}. Expected 'auto' | 'none' | 'concise' | 'detailed' | null.`
+          );
+        }
         options.reasoningSummary = params.reasoningSummary;
       }

Comment on lines +15 to +19
export function extractJsonTolerant(text) {
	if (!text || typeof text !== 'string') return text;

	let content = text.trim();
	if (content.length < 2) return text;

💡 Verification agent

🧩 Analysis chain

Request: Add/confirm unit tests for new extractor behaviors and size limits.

Ensure tests cover:

  • Code-fence variants (json/jsonc/json5/js/javascript/ts/typescript).
  • Comment-skipping in boundary detection (// and /* */ with stray braces).
  • Large JSON > 10k chars is handled (uses raised scan limit).
  • Legacy pass-through on clearly incomplete JSON (opening brace with no closing).

Run to verify presence and coverage of tests:

Also applies to: 100-109


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "1) Extractor tests exist"
fd -a -t f 'json-extract.test.js' tests || true

echo "2) Grep for fence and comment cases"
rg -nP '```jsonc|```json5|```js|//|/\*' tests/unit/utils/json-extract.test.js -n -C2 || true

echo "3) Quick coverage run for the extractor file"
node --experimental-vm-modules node_modules/.bin/jest --coverage -- src/utils/json-extract.js

Length of output: 2493


Add missing unit tests for extractJsonTolerant behaviors

The existing tests in tests/unit/utils/json-extract.test.js cover basic markdown JSON fences and incomplete‐JSON fallbacks, but we still need to add or extend tests to ensure full coverage of the new extractor behaviors:

• Code-fence variants
– Test extraction from jsonc, json5, js, javascript, ts, and typescript code blocks.

• Comment-skipping in boundary detection
– Inline (// …) and block (/* … */) comments interspersed with stray { or } within fences.

• Large JSON payloads (>10 000 characters)
– Generate or load a JSON string >10 kB to confirm the raised scan limit is applied and no trimming or errors occur.

• Legacy pass-through on clearly incomplete JSON
– Confirm that inputs like { "foo": 1 or [1,2,3 still return the original text when repair fails (as partially covered around lines 177–187).

• Coverage verification
– The jest --coverage command didn’t run in CI (module not found). Please verify coverage manually or update the script (e.g. npx jest --coverage src/utils/json-extract.js) to ensure these new tests push the extractor file above our 80% threshold.

Affected file:

  • tests/unit/utils/json-extract.test.js (add new it blocks)
🤖 Prompt for AI Agents
In tests/unit/utils/json-extract.test.js (add new it blocks covering behaviors
referenced in src/utils/json-extract.js around lines 15–19), add unit tests
that: 1) verify extraction from code fences labeled jsonc, json5, js,
javascript, ts, and typescript; 2) verify fence-boundary detection correctly
skips inline (//…) and block (/*…*/) comments that contain stray { or } inside
fences; 3) validate handling of a large JSON payload (>10 KB) to ensure the
scanner limit is applied without trimming or errors; 4) confirm legacy
pass-through returns the original input for clearly incomplete JSON like '{
"foo": 1' and '[1,2,3' when repair fails; and 5) ensure coverage is measured
(run jest --coverage for that file or update package.json test script to run npx
jest --coverage src/utils/json-extract.js) so overall coverage for
src/utils/json-extract.js exceeds the 80% threshold.

Comment on lines +48 to +55
			const result = parseJsonc(value, errors, {
				allowTrailingComma: true,
				allowEmptyContent: false
			});
			if (errors.length === 0 && result !== undefined) {
				return JSON.stringify(result, null, 2);
			}
		} catch {

🧹 Nitpick (assertive)

Explicitly allow comments in parse options.

Be explicit that comments are allowed in JSONC parsing to match the extractor’s intent.

Apply:

-			const result = parseJsonc(value, errors, {
-				allowTrailingComma: true,
-				allowEmptyContent: false
-			});
+			const result = parseJsonc(value, errors, {
+				allowTrailingComma: true,
+				allowEmptyContent: false,
+				disallowComments: false
+			});
🤖 Prompt for AI Agents
In src/utils/json-extract.js around lines 48 to 55, the parseJsonc call doesn't
explicitly enable comment parsing; update the options object passed to
parseJsonc to include disallowComments: false so the JSONC extractor's intent to
accept comments is enforced (i.e., add disallowComments: false alongside
allowTrailingComma and allowEmptyContent). Note that jsonc-parser exposes this
option as disallowComments (default false); there is no allowComments option.

@ben-vargas ben-vargas closed this Sep 6, 2025
@ben-vargas ben-vargas deleted the feat-codex-cli branch September 6, 2025 07:54