
Fix: Improve API Key Configuration Documentation for MCP Usage#1344

Closed
ankitpro wants to merge 5 commits into eyaltoledano:main from ankitpro:fix/api-key-configuration-docs

Conversation


@ankitpro ankitpro commented Oct 24, 2025

Summary

This PR addresses critical documentation gaps that led to users experiencing network connection errors when using Task Master AI through MCP integrations (Cursor, Claude Code, Windsurf, VS Code).

Problem

Users were experiencing connection failures like:

  • `Cannot connect to API: getaddrinfo ENOTFOUND api.anthropic.com`
  • `Failed after 3 attempts. Last error: Cannot connect to API`
  • Tool timeouts and network errors

Root Cause: Users added API keys to .env file, but MCP operations require keys in the MCP configuration file's env section.
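For reference, a minimal MCP configuration with keys in the `env` section looks roughly like this (the server name follows the project's examples; key values are placeholders):

```json
{
  "mcpServers": {
    "task-master-ai": {
      "command": "npx",
      "args": ["-y", "task-master-ai"],
      "env": {
        "ANTHROPIC_API_KEY": "sk-ant-your-key-here",
        "PERPLEXITY_API_KEY": "pplx-your-key-here"
      }
    }
  }
}
```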

Changes

1. Enhanced assets/env.example

  • Added comprehensive header explaining CLI vs MCP usage
  • Included provider URLs for obtaining API keys
  • Added key format examples for each provider
  • Critical warning about MCP requiring keys in config file

2. Updated docs/configuration.md

  • New "API Key Configuration Issues" section
  • Step-by-step solution for MCP connection errors
  • Configuration examples for different IDEs
  • Verification and testing instructions
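As an illustration of the verification step, a quick check that a key actually landed in the MCP config's `env` block can be sketched like this (run against a sample file here; the real path varies per IDE, e.g. `~/.cursor/mcp.json`):

```shell
# Write a sample MCP config (illustrative only; your real file lives
# somewhere like ~/.cursor/mcp.json depending on the IDE).
cat > /tmp/sample-mcp.json <<'EOF'
{
  "mcpServers": {
    "task-master-ai": {
      "command": "npx",
      "args": ["-y", "task-master-ai"],
      "env": { "ANTHROPIC_API_KEY": "sk-ant-your-key-here" }
    }
  }
}
EOF

# The key must appear inside the env block of the MCP config,
# not only in .env, for MCP tool calls to reach the provider.
if grep -q '"ANTHROPIC_API_KEY"' /tmp/sample-mcp.json; then
  echo "key present in MCP config"
else
  echo "key missing: add it to the env section and restart the IDE"
fi
```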

3. New docs/MCP_API_KEY_SETUP.md

  • Comprehensive troubleshooting guide
  • Quick diagnosis section
  • Step-by-step setup instructions for each IDE
  • Common issues and solutions
  • Provider-specific notes and best practices

Impact

✅ Reduces user confusion about API key configuration
✅ Provides clear guidance for MCP-specific setup
✅ Improves first-time user experience
✅ Reduces support burden for API key issues

Related Issues

Related to user issue in cpic-procurement project where API keys were not properly configured for MCP usage.

Summary by CodeRabbit

  • New Features

    • Cursor AI provider is now available and supported with multiple models.
  • Documentation

    • Added an MCP API key setup guide with troubleshooting and verification steps.
    • Updated getting-started and configuration guides to include Cursor API key usage and examples.
    • Expanded API key troubleshooting guidance for MCP configurations.
  • Chores

    • Updated environment/configuration templates and examples.
    • Added Cursor models to the supported models list.

Ankit-Agarwal_myqops added 4 commits October 24, 2025 15:14
- Created CursorProvider class that uses OpenAI-compatible SDK with Cursor's API endpoint
- Added CURSOR_API_KEY to environment configuration files
- Updated config-manager.js to include cursor provider in keyMap
- Registered Cursor provider in ai-services-unified.js and provider index
- Added Cursor models (Claude, GPT-4, GPT-4o, etc.) to supported-models.json
- Updated documentation (api-keys.mdx, installation.mdx, configuration-quick.mdx, configuration-advanced.mdx)
- Updated MCP configuration files (.cursor/mcp.json, .kiro/settings/mcp.json)
- Similar to OpenRouter, allows access to multiple AI models through a single API key
This merge adds comprehensive support for Cursor API key as a new provider, enabling users to access multiple AI models through a single API key.
This commit addresses issues where users experience network connection errors
(ENOTFOUND, getaddrinfo failures) when using Task Master AI through MCP
integrations (Cursor, Claude Code, Windsurf, VS Code).

Changes:
- Enhanced assets/env.example with comprehensive API key setup instructions
  - Added clear distinction between CLI vs MCP usage
  - Included provider URLs and key format examples
  - Added critical warning about MCP requiring keys in config file

- Updated docs/configuration.md troubleshooting section
  - Added dedicated section for MCP connection errors
  - Provided step-by-step solution with examples
  - Included verification steps and testing instructions

- Created new docs/MCP_API_KEY_SETUP.md guide
  - Comprehensive troubleshooting guide for MCP API key setup
  - Step-by-step instructions for each IDE
  - Common issues and solutions
  - Provider-specific notes and best practices

Root Cause:
Users were adding API keys to .env file but MCP operations require keys
in the MCP configuration file's env section. This led to connection failures
despite having valid API keys.

Impact:
- Reduces user confusion about API key configuration
- Provides clear guidance for MCP-specific setup
- Improves first-time setup experience
- Reduces support burden for API key issues

Related: User experiencing ENOTFOUND errors in cpic-procurement project

changeset-bot bot commented Oct 24, 2025

⚠️ No Changeset found

Latest commit: b05c2ca

Merging this PR will not cause a version bump for any packages. If these changes should not result in a new version, you're good to go. If these changes should result in a version bump, you need to add a changeset.



coderabbitai bot commented Oct 24, 2025

Caution

Review failed

The pull request is closed.

Note

Other AI code review bot(s) detected

CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough

Adds Cursor as a new AI provider: new CursorProvider implementation and export, registers it in the provider registry, adds CURSOR_API_KEY to MCP and docs, updates supported-models with a cursor section, and augments config tooling to recognize the new key.

Changes

Cohort / File(s) Summary
MCP configuration
.cursor/mcp.json, .kiro/settings/mcp.json
Added CURSOR_API_KEY to the task-master-ai MCP server env blocks.
Provider implementation & export
src/ai-providers/cursor.js, src/ai-providers/index.js
New CursorProvider class (extends BaseAIProvider) with getRequiredApiKeyName()'CURSOR_API_KEY' and getClient() creating an OpenAI-compatible Cursor client; exported from provider index.
Provider registration & config checks
scripts/modules/ai-services-unified.js, scripts/modules/config-manager.js
Registered cursor: new CursorProvider() in PROVIDERS; added CURSOR_API_KEY to API key presence checks (isApiKeySet keyMap).
Supported models metadata
scripts/modules/supported-models.json
Added a cursor array of model metadata entries (ids, costs, max_tokens, allowed_roles, supported flags) — duplicated in two sections as in the diff.
Documentation (API keys & setup)
apps/docs/*/api-keys.mdx, apps/docs/*/configuration-advanced.mdx, apps/docs/*/quick-start/*, docs/MCP_API_KEY_SETUP.md, docs/configuration.md
Added CURSOR_API_KEY docs, examples, MCP config snippets, and a new MCP API key setup guide and troubleshooting content describing MCP vs CLI key locations.
Environment template
assets/env.example
Rewrote env.example to a comprehensive Task Master AI API key template including Cursor under optional providers and guidance for MCP/CLI usage.
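The provider wiring summarized above can be sketched as follows. This is a hypothetical, self-contained sketch, not the actual implementation: `BaseAIProvider` is stubbed here, and the real `getClient()` would call `createOpenAI({ apiKey, baseURL })` from `@ai-sdk/openai` rather than return a plain object.

```javascript
// Hypothetical sketch of the CursorProvider wiring described in the walkthrough.
// BaseAIProvider is stubbed; the real class lives in src/ai-providers/.
class BaseAIProvider {
	constructor() {
		this.name = this.constructor.name;
	}
}

class CursorProvider extends BaseAIProvider {
	// Name of the env var the provider expects (per the walkthrough).
	getRequiredApiKeyName() {
		return 'CURSOR_API_KEY';
	}

	// Resolve the key from an env object (MCP session env or process.env)
	// and fail loudly when it is missing, as the sequence diagram shows.
	getClient(env = process.env) {
		const apiKey = env[this.getRequiredApiKeyName()];
		if (!apiKey) {
			throw new Error(
				'CURSOR_API_KEY is not set. For MCP usage, add it to the env ' +
					'section of your mcp.json, then restart the IDE.'
			);
		}
		// The real provider would call createOpenAI({ apiKey, baseURL });
		// a placeholder object stands in here.
		return { apiKey, baseURL: 'https://api.cursor.sh' };
	}
}

// Registry entry mirroring the PROVIDERS / isApiKeySet keyMap additions.
const PROVIDERS = { cursor: new CursorProvider() };

module.exports = { CursorProvider, PROVIDERS };
```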

Sequence Diagram(s)

```mermaid
sequenceDiagram
  autonumber
  participant Dev as Developer / IDE
  participant MCP as MCP (mcp.json env)
  participant Registry as Provider Registry (PROVIDERS)
  participant CursorProv as CursorProvider
  participant CursorAPI as cursor API (https://api.cursor.sh)

  Dev->>MCP: Add CURSOR_API_KEY in env
  Dev->>Registry: Start service / request provider "cursor"
  Registry->>CursorProv: instantiate / getClient()
  CursorProv->>MCP: read CURSOR_API_KEY
  alt apiKey present
    CursorProv->>CursorAPI: createOpenAI(client with apiKey, baseURL)
    CursorAPI-->>CursorProv: return client
    CursorProv-->>Registry: client ready
    Registry-->>Dev: provider available
  else missing key
    CursorProv-->>Registry: throw missing-key error
    Registry-->>Dev: error (instructions to configure MCP)
  end
```

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Changes span code, configuration, model metadata, and multiple docs. Provider implementation mirrors existing patterns (low logic density) but requires review across several areas (registration, config, model entries, docs).

Possibly related PRs

  • Release 0.22 #1038 — Modifies AI provider/config surface and supported-models.json; overlaps with adding provider and model entries.

Suggested reviewers

  • eyaltoledano
  • Crunchyman-ralph

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
| Check name | Status | Explanation |
|---|---|---|
| Description Check | ✅ Passed | Check skipped: CodeRabbit's high-level summary is enabled. |
| Title Check | ✅ Passed | The PR title "Fix: Improve API Key Configuration Documentation for MCP Usage" directly aligns with the stated primary objective of the pull request: addressing connection failures and user confusion regarding API key placement for MCP integrations by improving documentation. The title clearly describes the main purpose across multiple documentation files (docs/configuration.md, docs/MCP_API_KEY_SETUP.md, assets/env.example, and others). While the changeset also includes implementation changes such as a new Cursor provider and model registration, these function as supporting infrastructure for the documented MCP configuration workflow. The title is specific, concise, and accurately captures the core objective from the developer's perspective. |
| Docstring Coverage | ✅ Passed | Docstring coverage is 100.00%, which is sufficient; the required threshold is 80.00%. |

📜 Recent review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 63fb209 and b05c2ca.

📒 Files selected for processing (1)
  • scripts/modules/supported-models.json (1 hunks)

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

❤️ Share

Comment @coderabbitai help to get the list of available commands and usage tips.

cursor[bot]

This comment was marked as outdated.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🧹 Nitpick comments (4)
docs/MCP_API_KEY_SETUP.md (2)

70-70: Format bare URLs as markdown links.

Several provider URLs are written as bare text instead of markdown links. This reduces readability and clickability.

Apply this diff to format the URLs properly:

```diff
 #### Anthropic Claude (Recommended)
-1. Visit https://console.anthropic.com/
+1. Visit [https://console.anthropic.com/](https://console.anthropic.com/)
 2. Create an account or sign in

 #### OpenAI (Alternative/Fallback)
-1. Visit https://platform.openai.com/
+1. Visit [https://platform.openai.com/](https://platform.openai.com/)
 2. Sign in or create an account

 #### Perplexity (For Research Features)
-1. Visit https://www.perplexity.ai/
+1. Visit [https://www.perplexity.ai/](https://www.perplexity.ai/)
 2. Sign up for API access
```

Also applies to: 79-79, 86-86


151-156: Add language specifier to code fence.

The fenced code block showing example command output should include a language specifier for proper syntax highlighting.

Apply this diff:

 Run `task-master models` to see your configuration:
````diff
-```
+```text
 Active Models Configuration:
 ├─ Main: anthropic / claude-3-5-sonnet-20241022
 ├─ Research: perplexity / sonar-pro
 └─ Fallback: openai / gpt-4o-mini
````

docs/configuration.md (2)

`223-303`: **Consider consolidating MCP troubleshooting documentation.**

The new "API Key Configuration Issues" section provides valuable MCP-specific troubleshooting, but it largely duplicates content from `docs/MCP_API_KEY_SETUP.md`. This creates a maintenance burden where updates must be synchronized across two locations.

Consider replacing this detailed section with a brief summary and reference to the dedicated guide:

```markdown
### API Key Configuration Issues

#### MCP Connection Errors

If Task Master AI tools fail with network errors when using MCP integrations (Cursor, Claude Code, Windsurf, VS Code):

**Common symptoms:**
- `Cannot connect to API: getaddrinfo ENOTFOUND api.anthropic.com`
- Tool timeouts or repeated connection failures

**Root cause:** API keys must be in the MCP configuration file's `env` section, not in `.env`.

**Solution:** See the comprehensive [MCP API Key Setup Guide](./MCP_API_KEY_SETUP.md) for:
- Step-by-step configuration for each IDE
- How to obtain and configure provider API keys  
- Verification steps and troubleshooting

**Quick fix:**
1. Locate your MCP config file (e.g., `~/.cursor/mcp.json`)
2. Add API keys to the `env` section under `task-master-ai`
3. Restart your IDE
```

This approach:

  • Reduces duplication and maintenance burden
  • Provides quick context without overwhelming the configuration doc
  • Directs users to the authoritative, comprehensive guide
  • Keeps both documents valuable and maintainable

262-264: Format bare URLs as markdown links.

Provider URLs should be formatted as markdown links for better readability.

Apply this diff:

```diff
 3. **Get API keys** from the provider websites:
-   - **Anthropic (Recommended)**: https://console.anthropic.com/
-   - **OpenAI**: https://platform.openai.com/
-   - **Perplexity (For Research)**: https://www.perplexity.ai/
+   - **Anthropic (Recommended)**: [https://console.anthropic.com/](https://console.anthropic.com/)
+   - **OpenAI**: [https://platform.openai.com/](https://platform.openai.com/)
+   - **Perplexity (For Research)**: [https://www.perplexity.ai/](https://www.perplexity.ai/)
```
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5838184 and 63fb209.

📒 Files selected for processing (14)
  • .cursor/mcp.json (1 hunks)
  • .kiro/settings/mcp.json (1 hunks)
  • apps/docs/best-practices/configuration-advanced.mdx (1 hunks)
  • apps/docs/getting-started/api-keys.mdx (2 hunks)
  • apps/docs/getting-started/quick-start/configuration-quick.mdx (1 hunks)
  • apps/docs/getting-started/quick-start/installation.mdx (4 hunks)
  • assets/env.example (1 hunks)
  • docs/MCP_API_KEY_SETUP.md (1 hunks)
  • docs/configuration.md (1 hunks)
  • scripts/modules/ai-services-unified.js (2 hunks)
  • scripts/modules/config-manager.js (1 hunks)
  • scripts/modules/supported-models.json (1 hunks)
  • src/ai-providers/cursor.js (1 hunks)
  • src/ai-providers/index.js (1 hunks)
🧰 Additional context used
📓 Path-based instructions (14)
src/ai-providers/*.js

📄 CodeRabbit inference engine (.cursor/rules/ai_providers.mdc)

src/ai-providers/*.js: Create a new provider module file in src/ai-providers/ named &lt;provider-name&gt;.js when adding a new AI provider.
Provider modules must export three functions: generateText, streamText, and generateObject.
Provider modules must import the provider's create&lt;ProviderName&gt; function from @ai-sdk/&lt;provider-name&gt;, and import generateText, streamText, generateObject from the core ai package, as well as the log utility from ../../scripts/modules/utils.js.
Implement generateText, streamText, and generateObject functions in provider modules with basic validation and try/catch error handling.

Provider-specific wrappers for Vercel AI SDK functions must be implemented in src/ai-providers/*.js, each file corresponding to a provider.

Files:

  • src/ai-providers/index.js
  • src/ai-providers/cursor.js
**/*.js

📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)

**/*.js: Declare and initialize global variables at the top of modules to avoid hoisting issues.
Use proper function declarations to avoid hoisting issues and initialize variables before they are referenced.
Do not reference variables before their declaration in module scope.
Use dynamic imports (import()) to avoid initialization order issues in modules.

Files:

  • src/ai-providers/index.js
  • scripts/modules/config-manager.js
  • scripts/modules/ai-services-unified.js
  • src/ai-providers/cursor.js
**/*.{md,mdx}

📄 CodeRabbit inference engine (CLAUDE.md)

Reference documentation at https://docs.task-master.dev rather than local file paths in content

Files:

  • apps/docs/getting-started/quick-start/configuration-quick.mdx
  • docs/MCP_API_KEY_SETUP.md
  • apps/docs/getting-started/api-keys.mdx
  • docs/configuration.md
  • apps/docs/best-practices/configuration-advanced.mdx
  • apps/docs/getting-started/quick-start/installation.mdx
docs/**/*

📄 CodeRabbit inference engine (.cursor/rules/new_features.mdc)

Add feature documentation to '/docs' folder, include tagged system usage examples, update command reference documentation, and provide migration notes if relevant.

Files:

  • docs/MCP_API_KEY_SETUP.md
  • docs/configuration.md
docs/**

📄 CodeRabbit inference engine (CLAUDE.md)

Write documentation in apps/docs/ (Mintlify site source), not in a top-level docs/ directory

Files:

  • docs/MCP_API_KEY_SETUP.md
  • docs/configuration.md
scripts/modules/config-manager.js

📄 CodeRabbit inference engine (.cursor/rules/ai_providers.mdc)

scripts/modules/config-manager.js: Update scripts/modules/config-manager.js to add the new provider to MODEL_MAP, ensure it is included in VALID_PROVIDERS, and update API key handling logic.
If adding Ollama or another provider not requiring an API key, add a specific check at the beginning of isApiKeySet and getMcpApiKeyStatus in scripts/modules/config-manager.js to return true immediately for that provider.

scripts/modules/config-manager.js: Import and use specific getters from scripts/modules/config-manager.js to access configuration values needed for application logic; pass the explicitRoot parameter to getters if calling from MCP direct functions.
Use isApiKeySet(providerName, session) from config-manager.js to check if a provider's key is available before attempting an AI call.
Handle potential ConfigurationError if the .taskmasterconfig file is missing or invalid when accessed via getConfig.

Files:

  • scripts/modules/config-manager.js
scripts/modules/*.js

📄 CodeRabbit inference engine (.cursor/rules/architecture.mdc)

Each module in scripts/modules/ should be focused on a single responsibility, following the modular architecture (e.g., commands.js for CLI command handling, task-manager.js for task data and core logic, dependency-manager.js for dependency management, ui.js for CLI output formatting, ai-services-unified.js for AI service integration, config-manager.js for configuration management, utils.js for utility functions).

scripts/modules/*.js: Export all core functions, helper functions, and utility methods needed by your new function or command from their respective modules. Explicitly review the module's export block to ensure every required dependency is included.
Pass all required parameters to functions you call within your implementation and verify that direct function parameters match their core function counterparts.
Use consistent file naming conventions: 'task_${id.toString().padStart(3, '0')}.txt', use path.join for composing file paths, and use appropriate file extensions (.txt for tasks, .json for data).
Use structured error objects with code and message properties, include clear error messages, and handle both function-specific and file system errors.
Import all silent mode utilities together from 'scripts/modules/utils.js' and always use isSilentMode() to check global silent mode status. Wrap core function calls within direct functions using enableSilentMode() and disableSilentMode() in a try/finally block if the core function might produce console output.
Core functions should check outputFormat === 'text' before displaying UI elements and use internal logging that respects silent mode.
Design functions to accept dependencies as parameters (dependency injection) and avoid hard-coded dependencies that are difficult to mock.
Keep pure logic separate from I/O operations or UI rendering to allow testing the logic without mocking complex dependencies.
When implementing core logic for new features, do so in 'scripts/modules/' before CLI or MCP interfaces, and d...

Files:

  • scripts/modules/config-manager.js
  • scripts/modules/ai-services-unified.js
scripts/modules/**

📄 CodeRabbit inference engine (.cursor/rules/dev_workflow.mdc)

When using the MCP server, restart it if core logic in scripts/modules or MCP tool/direct function definitions change.

Files:

  • scripts/modules/config-manager.js
  • scripts/modules/ai-services-unified.js
  • scripts/modules/supported-models.json
scripts/modules/*

📄 CodeRabbit inference engine (.cursor/rules/tags.mdc)

scripts/modules/*: Every command that reads or writes tasks.json must be tag-aware
All command files must import getCurrentTag from utils.js
Every CLI command that operates on tasks must include the --tag CLI option
All commands must resolve the tag using the pattern: options.tag || getCurrentTag(projectRoot) || 'master'
All commands must find projectRoot with error handling before proceeding
All commands must pass { projectRoot, tag } as context to core functions
MCP direct functions must accept and use a context object containing projectRoot and tag, and pass them to core functions
Do not hard-code tag resolution (e.g., const tag = options.tag || 'master';); always use getCurrentTag
Do not omit the --tag CLI option in commands that operate on tasks
Do not omit the context parameter when calling core functions from commands
Do not call readJSON or writeJSON without passing projectRoot and tag

Files:

  • scripts/modules/config-manager.js
  • scripts/modules/ai-services-unified.js
  • scripts/modules/supported-models.json
scripts/modules/ai-services-unified.js

📄 CodeRabbit inference engine (.cursor/rules/ai_providers.mdc)

Integrate the new provider module with scripts/modules/ai-services-unified.js by importing it and adding an entry to the PROVIDER_FUNCTIONS map.

scripts/modules/ai-services-unified.js: Centralize all LLM calls through generateTextService or generateObjectService.
Do not import or call anything from the old ai-services.js, ai-client-factory.js, or ai-client-utils.js files.
Do not fetch AI-specific parameters (model ID, max tokens, temp) using config-manager.js getters for the AI call. Pass the role instead.
Do not implement fallback or retry logic outside ai-services-unified.js.
Do not handle API key resolution outside the service layer (it uses utils.js internally).

The telemetryData object returned by ai-services-unified.js must include the fields: timestamp, userId, commandName, modelUsed, providerName, inputTokens, outputTokens, totalTokens, totalCost, and currency.

Files:

  • scripts/modules/ai-services-unified.js
scripts/modules/ai-services*.js

📄 CodeRabbit inference engine (.cursor/rules/new_features.mdc)

Ensure AI calls correctly handle and propagate telemetryData as described in 'telemetry.mdc'.

Files:

  • scripts/modules/ai-services-unified.js
{.env,.cursor/mcp.json}

📄 CodeRabbit inference engine (.cursor/rules/dev_workflow.mdc)

Store sensitive API keys and specific endpoint URLs in the .env file in the project root for CLI usage, or in the env section of .cursor/mcp.json for MCP/Cursor integration. Do not store non-API key settings in these files.

Store API keys only in .env (for CLI, loaded by dotenv in scripts/dev.js) or .cursor/mcp.json (for MCP, accessed via session.env).

Set API keys for AI providers (e.g., ANTHROPIC_API_KEY, OPENAI_API_KEY, etc.) in your .env file in the project root (for CLI use) or within the env section of your .cursor/mcp.json file (for MCP/Cursor integration).

Files:

  • .cursor/mcp.json
.cursor/mcp.json

📄 CodeRabbit inference engine (.cursor/rules/taskmaster.mdc)

When using MCP/Cursor integration, ensure that the required API keys are present in the env section of .cursor/mcp.json.

Files:

  • .cursor/mcp.json
scripts/modules/supported-models.json

📄 CodeRabbit inference engine (.cursor/rules/ai_providers.mdc)

Add a new key for the provider and an array of model objects under it in scripts/modules/supported-models.json, including id, name, allowed_roles, and optionally swe_score, cost_per_1m_tokens, and max_tokens.

Files:

  • scripts/modules/supported-models.json
🧠 Learnings (27)
📓 Common learnings
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to .cursor/mcp.json.example : Add the new PROVIDER_API_KEY with its placeholder to the env section for taskmaster-ai in .cursor/mcp.json.example.
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to src/ai-providers/*.js : Provider modules must import the provider's create<ProviderName> function from ai-sdk/<provider-name>, and import generateText, streamText, generateObject from the core ai package, as well as the log utility from ../../scripts/modules/utils.js.

Applied to files:

  • src/ai-providers/index.js
  • scripts/modules/ai-services-unified.js
📚 Learning: 2025-07-18T17:07:39.336Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/architecture.mdc:0-0
Timestamp: 2025-07-18T17:07:39.336Z
Learning: Applies to src/ai-providers/*.js : Provider-specific wrappers for Vercel AI SDK functions must be implemented in src/ai-providers/*.js, each file corresponding to a provider.

Applied to files:

  • src/ai-providers/index.js
  • src/ai-providers/cursor.js
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to src/ai-providers/*.js : Provider modules must export three functions: generate<ProviderName>Text, stream<ProviderName>Text, and generate<ProviderName>Object.

Applied to files:

  • src/ai-providers/index.js
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to src/ai-providers/*.js : Create a new provider module file in src/ai-providers/ named <provider-name>.js when adding a new AI provider.

Applied to files:

  • src/ai-providers/index.js
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to .cursor/mcp.json.example : Add the new PROVIDER_API_KEY with its placeholder to the env section for taskmaster-ai in .cursor/mcp.json.example.

Applied to files:

  • apps/docs/getting-started/quick-start/configuration-quick.mdx
  • docs/MCP_API_KEY_SETUP.md
  • apps/docs/getting-started/api-keys.mdx
  • scripts/modules/config-manager.js
  • docs/configuration.md
  • .cursor/mcp.json
  • apps/docs/best-practices/configuration-advanced.mdx
  • .kiro/settings/mcp.json
  • apps/docs/getting-started/quick-start/installation.mdx
  • assets/env.example
📚 Learning: 2025-07-31T22:08:16.039Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/taskmaster.mdc:0-0
Timestamp: 2025-07-31T22:08:16.039Z
Learning: Applies to .cursor/mcp.json : When using MCP/Cursor integration, ensure that the required API keys are present in the env section of .cursor/mcp.json.

Applied to files:

  • apps/docs/getting-started/quick-start/configuration-quick.mdx
  • docs/MCP_API_KEY_SETUP.md
  • apps/docs/getting-started/api-keys.mdx
  • docs/configuration.md
  • .cursor/mcp.json
  • apps/docs/best-practices/configuration-advanced.mdx
  • .kiro/settings/mcp.json
  • apps/docs/getting-started/quick-start/installation.mdx
📚 Learning: 2025-07-31T22:08:16.039Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/taskmaster.mdc:0-0
Timestamp: 2025-07-31T22:08:16.039Z
Learning: Applies to {.env,.cursor/mcp.json} : Set API keys for AI providers (e.g., ANTHROPIC_API_KEY, OPENAI_API_KEY, etc.) in your .env file in the project root (for CLI use) or within the env section of your .cursor/mcp.json file (for MCP/Cursor integration).

Applied to files:

  • apps/docs/getting-started/quick-start/configuration-quick.mdx
  • docs/MCP_API_KEY_SETUP.md
  • apps/docs/getting-started/api-keys.mdx
  • scripts/modules/config-manager.js
  • docs/configuration.md
  • .cursor/mcp.json
  • apps/docs/best-practices/configuration-advanced.mdx
  • .kiro/settings/mcp.json
  • apps/docs/getting-started/quick-start/installation.mdx
  • assets/env.example
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: API keys for providers must be set in the correct location: in the env section of .cursor/mcp.json for MCP/Cursor, or in a .env file in the project root for CLI.

Applied to files:

  • apps/docs/getting-started/quick-start/configuration-quick.mdx
  • docs/MCP_API_KEY_SETUP.md
  • apps/docs/getting-started/api-keys.mdx
  • scripts/modules/config-manager.js
  • docs/configuration.md
  • .cursor/mcp.json
  • apps/docs/best-practices/configuration-advanced.mdx
  • .kiro/settings/mcp.json
  • apps/docs/getting-started/quick-start/installation.mdx
  • assets/env.example
📚 Learning: 2025-07-18T17:06:57.833Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_services.mdc:0-0
Timestamp: 2025-07-18T17:06:57.833Z
Learning: Ensure API keys are correctly configured in `.env` (for CLI) or `.cursor/mcp.json` (for MCP).

Applied to files:

  • apps/docs/getting-started/quick-start/configuration-quick.mdx
  • docs/MCP_API_KEY_SETUP.md
  • docs/configuration.md
  • .cursor/mcp.json
  • apps/docs/best-practices/configuration-advanced.mdx
  • .kiro/settings/mcp.json
  • apps/docs/getting-started/quick-start/installation.mdx
  • assets/env.example
📚 Learning: 2025-07-18T17:18:17.759Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/utilities.mdc:0-0
Timestamp: 2025-07-18T17:18:17.759Z
Learning: Applies to {.env,.cursor/mcp.json} : Store API keys only in `.env` (for CLI, loaded by `dotenv` in `scripts/dev.js`) or `.cursor/mcp.json` (for MCP, accessed via `session.env`).

Applied to files:

  • apps/docs/getting-started/quick-start/configuration-quick.mdx
  • .cursor/mcp.json
  • apps/docs/best-practices/configuration-advanced.mdx
  • .kiro/settings/mcp.json
  • apps/docs/getting-started/quick-start/installation.mdx
📚 Learning: 2025-07-18T17:10:12.881Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/dev_workflow.mdc:0-0
Timestamp: 2025-07-18T17:10:12.881Z
Learning: Applies to {.env,.cursor/mcp.json} : Store sensitive API keys and specific endpoint URLs in the `.env` file in the project root for CLI usage, or in the `env` section of `.cursor/mcp.json` for MCP/Cursor integration. Do not store non-API key settings in these files.

Applied to files:

  • apps/docs/getting-started/quick-start/configuration-quick.mdx
  • .cursor/mcp.json
  • apps/docs/getting-started/quick-start/installation.mdx
  • assets/env.example
📚 Learning: 2025-07-18T17:10:02.683Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/dev_workflow.mdc:0-0
Timestamp: 2025-07-18T17:10:02.683Z
Learning: Applies to {.env,.cursor/mcp.json} : Store sensitive API keys and endpoint URLs for Taskmaster in a `.env` file (for CLI usage) or in the `env` section of `.cursor/mcp.json` (for MCP/Cursor integration). Do not store non-API key settings in these files.

Applied to files:

  • apps/docs/getting-started/quick-start/configuration-quick.mdx
  • docs/MCP_API_KEY_SETUP.md
  • docs/configuration.md
  • .cursor/mcp.json
  • .kiro/settings/mcp.json
  • assets/env.example
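Concretely, the split these learnings describe amounts to an MCP config sketch like the following (placeholder values, not real keys; the `command`/`args` shape matches the `npx task-master-ai` setup described in the learnings):

```json
{
	"mcpServers": {
		"task-master-ai": {
			"command": "npx",
			"args": ["-y", "--package=task-master-ai", "task-master-ai"],
			"env": {
				"ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE",
				"PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE"
			}
		}
	}
}
```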
📚 Learning: 2025-09-24T15:12:58.855Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: assets/AGENTS.md:0-0
Timestamp: 2025-09-24T15:12:58.855Z
Learning: Applies to assets/**/.mcp.json : Configure the Task Master MCP server in .mcp.json under mcpServers.task-master-ai using npx task-master-ai and provide required API key env vars

Applied to files:

  • docs/MCP_API_KEY_SETUP.md
  • docs/configuration.md
  • .cursor/mcp.json
  • .kiro/settings/mcp.json
  • apps/docs/getting-started/quick-start/installation.mdx
  • assets/env.example
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: If AI commands fail, verify the API key for the selected provider, check the model ID, provider status, and restart MCP if necessary.

Applied to files:

  • docs/MCP_API_KEY_SETUP.md
  • docs/configuration.md
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to scripts/modules/config-manager.js : Update scripts/modules/config-manager.js to add the new provider to MODEL_MAP, ensure it is included in VALID_PROVIDERS, and update API key handling logic.

Applied to files:

  • scripts/modules/config-manager.js
📚 Learning: 2025-07-18T17:18:17.759Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/utilities.mdc:0-0
Timestamp: 2025-07-18T17:18:17.759Z
Learning: Applies to scripts/modules/config-manager.js : Use `isApiKeySet(providerName, session)` from `config-manager.js` to check if a provider's key is available before attempting an AI call.

Applied to files:

  • scripts/modules/config-manager.js
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to scripts/modules/config-manager.js : If adding Ollama or another provider not requiring an API key, add a specific check at the beginning of isApiKeySet and getMcpApiKeyStatus in scripts/modules/config-manager.js to return true immediately for that provider.

Applied to files:

  • scripts/modules/config-manager.js
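Taken together, these three learnings describe a single guard pattern. A minimal self-contained sketch, assuming a trimmed-down `keyMap` and a placeholder convention; the real implementation in `scripts/modules/config-manager.js` may differ:

```javascript
// Minimal sketch of the key-check pattern described above; KEY_MAP and
// KEYLESS_PROVIDERS are illustrative stand-ins for the real config-manager.js logic.
const KEY_MAP = {
	anthropic: 'ANTHROPIC_API_KEY',
	openai: 'OPENAI_API_KEY',
	cursor: 'CURSOR_API_KEY'
};

// Providers that need no API key (e.g., Ollama) pass the check immediately.
const KEYLESS_PROVIDERS = new Set(['ollama']);

function isApiKeySet(providerName, session = null) {
	if (KEYLESS_PROVIDERS.has(providerName)) return true;
	const envVar = KEY_MAP[providerName];
	if (!envVar) return false;
	// MCP passes keys via session.env; the CLI falls back to process.env.
	const value = session?.env?.[envVar] ?? process.env[envVar];
	return Boolean(value) && !value.startsWith('YOUR_');
}

console.log(isApiKeySet('ollama')); // true
console.log(isApiKeySet('cursor', { env: { CURSOR_API_KEY: 'key_abc' } })); // true
console.log(isApiKeySet('cursor', { env: { CURSOR_API_KEY: 'YOUR_CURSOR_API_KEY_HERE' } })); // false
```

Guarding AI calls with a check like this surfaces a clear "missing key" message instead of an opaque downstream network failure.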
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to scripts/modules/ai-services-unified.js : Integrate the new provider module with scripts/modules/ai-services-unified.js by importing it and adding an entry to the PROVIDER_FUNCTIONS map.

Applied to files:

  • scripts/modules/ai-services-unified.js
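The registration pattern this learning refers to can be sketched as follows. Names such as `BaseAIProvider`, `CursorProvider`, and `PROVIDERS` mirror the review comments elsewhere on this PR, but the exact shape in `scripts/modules/ai-services-unified.js` may differ:

```javascript
// Sketch of the provider-registration pattern: each provider module extends a
// common base class and is added once to a central map keyed by provider name.
class BaseAIProvider {
	constructor() {
		this.name = 'base';
	}
}

class CursorProvider extends BaseAIProvider {
	constructor() {
		super();
		this.name = 'cursor';
	}
	getRequiredApiKeyName() {
		return 'CURSOR_API_KEY';
	}
}

// In the real module, CursorProvider would be imported and registered here.
const PROVIDERS = {
	cursor: new CursorProvider()
};

console.log(PROVIDERS.cursor.getRequiredApiKeyName()); // CURSOR_API_KEY
```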
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to scripts/modules/supported-models.json : Add a new key for the provider and an array of model objects under it in scripts/modules/supported-models.json, including id, name, allowed_roles, and optionally swe_score, cost_per_1m_tokens, and max_tokens.

Applied to files:

  • scripts/modules/supported-models.json
📚 Learning: 2025-07-21T14:14:48.694Z
Learnt from: rtmcrc
PR: eyaltoledano/claude-task-master#933
File: scripts/modules/supported-models.json:238-238
Timestamp: 2025-07-21T14:14:48.694Z
Learning: Model version updates in scripts/modules/supported-models.json may be included in feature PRs if they provide practical improvements like reduced error rates, even if not directly related to the main feature being implemented.

Applied to files:

  • scripts/modules/supported-models.json
📚 Learning: 2025-08-08T11:34:45.482Z
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1105
File: .changeset/vast-weeks-fetch.md:5-5
Timestamp: 2025-08-08T11:34:45.482Z
Learning: In this repo, the supported models list is auto-generated by CI into docs/models.md from scripts/modules/supported-models.json via .github/workflows/update-models-md.yml and docs/scripts/models-json-to-markdown.js. Don’t request manual edits to the Markdown; ensure the JSON is correct instead.

Applied to files:

  • scripts/modules/supported-models.json
📚 Learning: 2025-08-08T11:34:45.482Z
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1105
File: .changeset/vast-weeks-fetch.md:5-5
Timestamp: 2025-08-08T11:34:45.482Z
Learning: This repo’s supported models list is CI-generated into docs/models.md from scripts/modules/supported-models.json (workflow: .github/workflows/update-models-md.yml using docs/scripts/models-json-to-markdown.js). Don’t request manual edits to the Markdown; ensure the JSON is correct instead. README links to docs/models.md.

Applied to files:

  • scripts/modules/supported-models.json
📚 Learning: 2025-08-08T11:34:45.482Z
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1105
File: .changeset/vast-weeks-fetch.md:5-5
Timestamp: 2025-08-08T11:34:45.482Z
Learning: In this repo, supported-models.md is auto-generated by CI from supported-models.json; do not request manual edits to that file—ensure JSON entries are correct instead.

Applied to files:

  • scripts/modules/supported-models.json
📚 Learning: 2025-09-24T15:12:58.855Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: assets/AGENTS.md:0-0
Timestamp: 2025-09-24T15:12:58.855Z
Learning: Applies to assets/**/.env : Store provider API keys in .env for CLI usage; ensure at least one provider key (e.g., ANTHROPIC_API_KEY, PERPLEXITY_API_KEY, OPENAI_API_KEY, etc.) is set; research mode requires PERPLEXITY_API_KEY

Applied to files:

  • assets/env.example
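For CLI usage, this learning amounts to a `.env` file along these lines (placeholder values only):

```bash
# .env — read by the CLI via dotenv; MCP reads keys from the MCP config's env section instead
ANTHROPIC_API_KEY="YOUR_ANTHROPIC_API_KEY_HERE"
PERPLEXITY_API_KEY="YOUR_PERPLEXITY_API_KEY_HERE"   # required for research mode
OPENAI_API_KEY="YOUR_OPENAI_API_KEY_HERE"
```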
📚 Learning: 2025-07-18T17:10:12.881Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/dev_workflow.mdc:0-0
Timestamp: 2025-07-18T17:10:12.881Z
Learning: Applies to .taskmaster/config.json : Store Taskmaster configuration settings (AI model selections, parameters, logging level, default subtasks/priority, project name, etc.) in the `.taskmaster/config.json` file located in the project root directory. Do not configure non-API key settings via environment variables.

Applied to files:

  • assets/env.example
📚 Learning: 2025-09-24T15:12:12.658Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: assets/.windsurfrules:0-0
Timestamp: 2025-09-24T15:12:12.658Z
Learning: Applies to assets/**/.env* : Configure required environment variables (e.g., ANTHROPIC_API_KEY) and recommended defaults (MODEL, MAX_TOKENS, TEMPERATURE, etc.)

Applied to files:

  • assets/env.example
🧬 Code graph analysis (2)
scripts/modules/ai-services-unified.js (1)
src/ai-providers/cursor.js (1)
  • CursorProvider (10-54)
src/ai-providers/cursor.js (2)
src/ai-providers/base-provider.js (1)
  • BaseAIProvider (16-377)
scripts/modules/ai-services-unified.js (3)
  • apiKey (385-385)
  • apiKey (532-532)
  • baseURL (535-535)
🪛 markdownlint-cli2 (0.18.1)
docs/MCP_API_KEY_SETUP.md

70-70: Bare URL used

(MD034, no-bare-urls)


79-79: Bare URL used

(MD034, no-bare-urls)


86-86: Bare URL used

(MD034, no-bare-urls)


151-151: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

docs/configuration.md

262-262: Bare URL used

(MD034, no-bare-urls)


263-263: Bare URL used

(MD034, no-bare-urls)


264-264: Bare URL used

(MD034, no-bare-urls)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Cursor Bugbot
🔇 Additional comments (14)
apps/docs/getting-started/quick-start/configuration-quick.mdx (1)

30-30: LGTM! Cursor API key placeholder is consistent.

The CURSOR_API_KEY addition follows the established placeholder format used in this configuration file (without the "YOUR_" prefix).

apps/docs/best-practices/configuration-advanced.mdx (1)

77-77: Documentation addition is clear and well-formatted.

The CURSOR_API_KEY entry provides appropriate context about its purpose and follows the established documentation pattern.

.kiro/settings/mcp.json (1)

13-13: LGTM! MCP configuration updated correctly.

The CURSOR_API_KEY entry follows the established placeholder format for this IDE's configuration file.

apps/docs/getting-started/quick-start/installation.mdx (2)

80-80: Manual configuration additions look correct.

The CURSOR_API_KEY entries in both the Cursor/Windsurf and VS Code manual configuration sections use the appropriate placeholder format and are properly positioned.

Also applies to: 109-109


11-11: The base64-encoded configuration string from the deeplink can be decoded to verify its contents. The string eyJjb21tYW5kIjoibnB4IiwiYXJncyI6WyIteSIsIi0tcGFja2FnZT10YXNrLW1hc3Rlci1haSIsInRhc2stbWFzdGVyLWFpIl0sImVudiI6eyJBTlRIUk9QSUNfQVBJX0tFWSI6IllPVVJfQU5USFJPUElDX0FQSV9LRVlfSEVSRSIsIlBFUlBMRVhJVFlfQVBJX0tFWSI6IllPVVJfUEVSUExFWElUWV9BUElfS0VZX0hFUkUiLCJPUEVOQUlfQVBJX0tFWSI6IllPVVJfT1BFTkFJX0tFWV9IRVJFIiwiR09PR0xFX0FQSV9LRVkiOiJZT1VSX0dPT0dMRV9LRVlfSEVSRSIsIk1JU1RSQUxfQVBJX0tFWSI6IllPVVJfTUlTVFJBTF9LRVlfSEVSRSIsIk9QRU5ST1VURVJfQVBJX0tFWSI6IllPVVJfT1BFTlJPVVRFUl9LRVlfSEVSRSIsIkNVUlNPUl9BUElfS0VZIjoiWU9VUl9DVVJTT1JfS0VZX0hFUkUiLCJYQUlfQVBJX0tFWSI6IllPVVJfWEFJX0tFWV9IRVJFIiwiQVpVUkVfT1BFTkFJX0FQSV9LRVkiOiJZT1VSX0FaVVJFX0tFWV9IRVJFIiwiT0xMQU1BX0FQSV9LRVkiOiJZT1VSX09MTEFNQV9BUElfS0VZX0hFUkUifX0K decodes to a JSON MCP configuration whose env section includes CURSOR_API_KEY.

Deeplink configuration is correct.

The encoded MCP install configuration properly includes CURSOR_API_KEY set to YOUR_CURSOR_KEY_HERE in the env section, which aligns with the learnings that MCP/Cursor integration requires API keys configured in this manner.
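That verification can be done mechanically rather than by eyeballing the base64. A sketch using a shortened stand-in payload (not the full deeplink string):

```javascript
// Round-trip a deeplink-style base64 MCP payload and check its env section.
// The payload here is a small stand-in built inline, not the real deeplink value.
const encoded = Buffer.from(
	JSON.stringify({
		command: 'npx',
		env: { CURSOR_API_KEY: 'YOUR_CURSOR_KEY_HERE' }
	})
).toString('base64');

const config = JSON.parse(Buffer.from(encoded, 'base64').toString('utf8'));
console.log('CURSOR_API_KEY' in config.env); // true
```

The same `base64 -d`-style decode applied to the real deeplink string yields the full MCP server configuration.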

.cursor/mcp.json (1)

14-14: MCP configuration correctly updated.

The CURSOR_API_KEY addition is consistent with the existing placeholder format in the project's Cursor MCP configuration.

apps/docs/getting-started/api-keys.mdx (1)

214-215: MCP configuration example updated correctly.

The CURSOR_API_KEY has been properly added to the MCP server configuration example with appropriate formatting.

src/ai-providers/index.js (1)

13-13: CursorProvider implementation verified and properly integrated.

The export at line 13 is valid. The cursor.js module exists, implements the required provider interface (constructor, getRequiredApiKeyName, getClient), extends BaseAIProvider, and is correctly integrated into ai-services-unified.js.

scripts/modules/ai-services-unified.js (2)

45-45: LGTM! Cursor provider integration follows established patterns.

The CursorProvider import and registration in the PROVIDERS object correctly follows the integration pattern used for other providers. This aligns with the coding guidelines for integrating new provider modules.

Based on learnings.

Also applies to: 70-70


895-973: File path mismatch in review comment—verify the correct location.

The review comment references scripts/modules/ai-services-unified.js (lines 895-973), which contains telemetry code unrelated to Cursor pricing. The Cursor provider configuration is in scripts/modules/supported-models.json (around line 895).

Based on web search, Cursor charges model usage at the underlying model-provider API rates (OpenAI for GPT models, Anthropic for Claude) in normal mode, but applies a 20% markup in Max Mode.

The current supported-models.json correctly uses base provider pricing (matching Anthropic and OpenAI costs exactly). However, the codebase does not implement Max Mode pricing with the 20% markup.

Please clarify: Should the Cursor provider configuration support Max Mode pricing with the 20% markup applied to these costs, or does the codebase only need to support Cursor's normal mode?
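If Max Mode pricing were implemented, the markup could be a simple transform over the base costs. A hypothetical sketch using the 20% figure cited above; this is not part of the current codebase:

```javascript
// Hypothetical: apply the reported 20% Max Mode markup to a base
// cost_per_1m_tokens entry from supported-models.json.
const MAX_MODE_MARKUP = 1.2;

function maxModeCost(cost) {
	return {
		input: cost.input * MAX_MODE_MARKUP,
		output: cost.output * MAX_MODE_MARKUP
	};
}

// Base pricing as listed for claude-3-5-sonnet in the cursor section
console.log(maxModeCost({ input: 3.0, output: 15.0 })); // ≈ { input: 3.6, output: 18 }
```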

docs/MCP_API_KEY_SETUP.md (1)

1-287: Excellent MCP troubleshooting guide addresses the core issue.

This comprehensive guide directly addresses the root cause identified in the PR: users placing API keys in .env files when MCP requires them in the MCP configuration file's env section. The document provides:

  • Clear diagnosis of common error symptoms
  • Step-by-step setup instructions for multiple IDEs
  • Provider-specific guidance for obtaining API keys
  • Security best practices
  • Detailed troubleshooting for common issues

The structure and content effectively reduce the support burden by providing users with a single, authoritative reference for MCP API key configuration.

assets/env.example (1)

1-84: Excellent documentation improvements directly address user confusion.

The rewritten environment template provides critical clarity that was missing:

  1. Clear Usage Context: Lines 7-21 explicitly distinguish between CLI and MCP usage, directly addressing the root cause of connection failures mentioned in the PR
  2. Critical MCP Warning: Lines 19-21 prominently warn users that MCP requires keys in the MCP config file's env section, not the .env file
  3. Provider Guidance: Each API key entry includes the provider URL and key format, reducing friction during setup
  4. Organized Structure: Required vs Optional sections help users prioritize which keys they need
  5. Cursor Integration: Lines 68-71 properly document the new CURSOR_API_KEY

This transformation from a simple key list to a comprehensive configuration guide should significantly reduce the support burden for MCP API key issues.

src/ai-providers/cursor.js (1)

10-22: LGTM! Provider initialization follows established patterns.

The constructor and getRequiredApiKeyName method correctly follow the BaseAIProvider pattern:

  • Constructor properly calls super() and sets the provider name
  • getRequiredApiKeyName returns the appropriate environment variable name 'CURSOR_API_KEY'
scripts/modules/supported-models.json (1)

895-895: Confirm Cursor provider integration is complete across codebase.

The new "cursor" provider key is added to this JSON configuration file. Based on the learnings, Cursor should also be:

  • Registered in the provider registry (scripts/modules/ai-services-unified.js)
  • Configured with CURSOR_API_KEY in MCP settings (.cursor/mcp.json.example)
  • Exported from src/ai-providers/index.js

The PR objectives state "Improve API Key Configuration Documentation," but this file is primarily a provider registry update. Ensure that corresponding documentation updates (assets/env.example, docs/configuration.md, docs/MCP_API_KEY_SETUP.md as mentioned in PR objectives) are present in the PR and aligned with this provider addition.

Please confirm:

  1. Is this a documentation PR or a feature PR adding Cursor support?
  2. Have all related files been updated in parallel (MCP config, env templates, documentation)?

Comment on lines +77 to +86
### CURSOR_API_KEY
- **Provider**: Cursor (Multiple model access)
- **Required**: ✅ **Yes**
- **Models**: Access to Claude, GPT-4, GPT-4o, and other models through single API
- **Purpose**: Single API key to access multiple AI models via Cursor
- **Get Key**: [Cursor Settings](https://cursor.sh/settings)

```bash
CURSOR_API_KEY="your-cursor-key-here"
```

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

Verify the Cursor API key retrieval URL.

The CURSOR_API_KEY documentation section is well-structured and consistent with other provider entries. However, verify that the link to obtain the API key is accurate.


🌐 Web query:

Is cursor.sh/settings the correct URL to obtain a Cursor API key?

💡 Result:

No. Cursor API keys are generated in the Cursor dashboard (Dashboard → Integrations / Settings → Cursor Admin API Keys or User API Keys) on cursor.com, not at cursor.sh/settings. [1][2][3]

Sources:

  • Cursor Admin API / creating API keys in dashboard. [1]
  • Cursor CLI auth: “Generate an API key in your Cursor dashboard under Integrations > User API Keys.” [2]
  • Dashboard / Settings description. [3]

Update Cursor API key retrieval URL to the correct dashboard location.

The Cursor API key should be accessed from the Cursor dashboard at cursor.com (Dashboard → Integrations/Settings → Cursor Admin API Keys or User API Keys), not at cursor.sh/settings. Update the "Get Key" link in lines 77-86 to direct users to the correct dashboard location.

🤖 Prompt for AI Agents
In apps/docs/getting-started/api-keys.mdx around lines 77 to 86, the "Get Key"
URL for CURSOR_API_KEY points to https://cursor.sh/settings but should link to
the Cursor dashboard where API keys live; update the "Get Key" link to the
correct dashboard path (e.g., https://cursor.com/dashboard or the dashboard
Integrations/Settings → Cursor Admin API Keys / User API Keys page) so users are
directed to the Cursor dashboard to retrieve their API key.

```js
	mistral: 'MISTRAL_API_KEY',
	azure: 'AZURE_OPENAI_API_KEY',
	openrouter: 'OPENROUTER_API_KEY',
	cursor: 'CURSOR_API_KEY',
```

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

Verify cursor is registered in VALID_PROVIDERS and MODEL_MAP.

The CURSOR_API_KEY entry has been added to the keyMap, which is correct. However, ensure the following:

  1. The cursor provider is included in VALID_PROVIDERS or CUSTOM_PROVIDERS constants (imported from src/constants/providers.js)
  2. The cursor provider has corresponding model definitions in MODEL_MAP (imported from supported-models.json)

Based on learnings and coding guidelines.

Run the following verification script:


🏁 Script executed:

```bash
#!/bin/bash
# Verify cursor provider is properly registered in constants and MODEL_MAP

echo "=== Checking VALID_PROVIDERS and CUSTOM_PROVIDERS ==="
rg -n "cursor" src/constants/providers.js

echo -e "\n=== Checking MODEL_MAP for cursor models ==="
rg -n '"cursor"' scripts/modules/supported-models.json

echo -e "\n=== Checking getMcpApiKeyStatus for cursor support ==="
rg -n -A 3 "case 'cursor'" scripts/modules/config-manager.js
```

Length of output: 172


🏁 Script executed:

```bash
#!/bin/bash
# Get full VALID_PROVIDERS array and related constants
echo "=== VALID_PROVIDERS constant ==="
rg -n -A 20 "const VALID_PROVIDERS" src/constants/providers.js

echo -e "\n=== CUSTOM_PROVIDERS constant ==="
rg -n -A 20 "const CUSTOM_PROVIDERS" src/constants/providers.js

echo -e "\n=== Full getMcpApiKeyStatus function ==="
rg -n -A 30 "getMcpApiKeyStatus" scripts/modules/config-manager.js | head -50

echo -e "\n=== Full isApiKeySet function ==="
rg -n -A 30 "function isApiKeySet" scripts/modules/config-manager.js
```

Length of output: 3000


🏁 Script executed:

```bash
#!/bin/bash
# Get complete isApiKeySet function with full keyMap
echo "=== Complete isApiKeySet function (full keyMap) ==="
rg -n -A 60 "function isApiKeySet" scripts/modules/config-manager.js | head -80

echo -e "\n=== Complete getMcpApiKeyStatus switch statement ==="
rg -n -A 80 "switch \(providerName\)" scripts/modules/config-manager.js | grep -A 70 "switch"

echo -e "\n=== Check if cursor needs to be added to CUSTOM_PROVIDERS ==="
rg -n "cursor" scripts/modules/config-manager.js | head -10
```

Length of output: 5376


🏁 Script executed:

```bash
#!/bin/bash
# Check VALIDATED_PROVIDERS to understand provider classification
echo "=== VALIDATED_PROVIDERS constant ==="
rg -n -A 10 "const VALIDATED_PROVIDERS" src/constants/providers.js

echo -e "\n=== Check if cursor is mentioned in providers.js anywhere ==="
rg -n "cursor" src/constants/providers.js
```

Length of output: 298


Add missing case for cursor in getMcpApiKeyStatus function and register cursor in VALIDATED_PROVIDERS.

The cursor provider was added to keyMap and MODEL_MAP but is missing critical integration:

  1. getMcpApiKeyStatus function (line 901-954): No case 'cursor' statement exists. The provider will hit the default case and return false, breaking MCP configuration validation.
  2. VALIDATED_PROVIDERS constant (src/constants/providers.js): cursor is not registered. Since cursor requires an API key (like anthropic, openai, etc.), it should be added to VALIDATED_PROVIDERS.

Add cursor case to the switch statement in getMcpApiKeyStatus:

```js
case 'cursor':
	apiKeyToCheck = mcpEnv.CURSOR_API_KEY;
	placeholderValue = 'YOUR_CURSOR_API_KEY_HERE';
	break;
```

Add 'cursor' to VALIDATED_PROVIDERS array in src/constants/providers.js.

🤖 Prompt for AI Agents
In scripts/modules/config-manager.js around line 838 and the getMcpApiKeyStatus
switch block (approx lines 901–954), add a case 'cursor' that sets apiKeyToCheck
= mcpEnv.CURSOR_API_KEY and placeholderValue = 'YOUR_CURSOR_API_KEY_HERE' so the
cursor provider is properly validated instead of falling to the default false;
additionally, update src/constants/providers.js by adding 'cursor' to the
VALIDATED_PROVIDERS array so cursor is included in API-key required provider
validation.

Comment on lines 895 to 973
```json
"cursor": [
	{
		"id": "claude-3-5-sonnet-20241022",
		"swe_score": 0.49,
		"cost_per_1m_tokens": {
			"input": 3.0,
			"output": 15.0
		},
		"allowed_roles": ["main", "fallback"],
		"max_tokens": 8192,
		"supported": true
	},
	{
		"id": "gpt-4o",
		"swe_score": 0.332,
		"cost_per_1m_tokens": {
			"input": 2.5,
			"output": 10.0
		},
		"allowed_roles": ["main", "fallback"],
		"max_tokens": 16384,
		"supported": true
	},
	{
		"id": "gpt-4o-mini",
		"swe_score": 0.3,
		"cost_per_1m_tokens": {
			"input": 0.15,
			"output": 0.6
		},
		"allowed_roles": ["main", "fallback"],
		"max_tokens": 16384,
		"supported": true
	},
	{
		"id": "gpt-4-turbo",
		"swe_score": 0.332,
		"cost_per_1m_tokens": {
			"input": 10.0,
			"output": 30.0
		},
		"allowed_roles": ["main", "fallback"],
		"max_tokens": 128000,
		"supported": true
	},
	{
		"id": "claude-3-opus-20240229",
		"swe_score": 0.725,
		"cost_per_1m_tokens": {
			"input": 15.0,
			"output": 75.0
		},
		"allowed_roles": ["main", "fallback"],
		"max_tokens": 4096,
		"supported": true
	},
	{
		"id": "claude-3-haiku-20240307",
		"swe_score": 0.4,
		"cost_per_1m_tokens": {
			"input": 0.25,
			"output": 1.25
		},
		"allowed_roles": ["main", "fallback"],
		"max_tokens": 4096,
		"supported": true
	},
	{
		"id": "gpt-3.5-turbo",
		"swe_score": 0.2,
		"cost_per_1m_tokens": {
			"input": 0.5,
			"output": 1.5
		},
		"allowed_roles": ["fallback"],
		"max_tokens": 16385,
		"supported": true
	}
],
```

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

Verify Cursor model configurations are accurate and not copied wholesale from other providers without validation.

The new cursor provider section appears to replicate models from anthropic and openai sections, but with potentially different metadata:

  • Line 898-906: claude-3-5-sonnet-20241022 has max_tokens: 8192 here, matching the anthropic section (line 44), suggesting this may be a copy without Cursor-specific context.
  • Line 941-950: claude-3-opus-20240229 shows max_tokens: 4096, but the anthropic section (line 22) lists 32000. This inconsistency needs clarification: is Cursor limiting this model's output, or is the Cursor value incorrect?
  • Line 963-972: gpt-3.5-turbo is the only model restricted to allowed_roles: ["fallback"]—is this intentional, or should it allow "main" like other fallback models?
  • Line 952-961: claude-3-haiku-20240307 with max_tokens: 4096 needs verification (not present in the anthropic section shown, and differs from the bedrock variant at line 1116 which has no max_tokens constraint).

Additionally, the cost_per_1m_tokens values are identical to openai/anthropic listings, which may indicate either accurate parity or incomplete Cursor-specific research. Clarify whether these represent actual Cursor pricing or are placeholders.

Please verify:

  1. Whether Cursor exposes the same models with the stated cost and token limits, or if these require updates.
  2. Whether max_tokens for claude-3-opus and claude-3-haiku reflect Cursor API constraints or are data errors.
  3. Why gpt-3.5-turbo is restricted to fallback-only roles while other cursor models allow both main and fallback.

I'll now verify the concerns raised in the review comment by examining the repository and researching Cursor's API specifications.

Now let me examine the repository structure and how this file is used:

Now I need to examine the actual repository structure to understand how this file is used and what context "cursor provider" refers to:


🏁 Script executed:

```bash
#!/bin/bash
# Check the supported-models.json structure and compare provider sections

# First, let's look at the file structure and other providers
echo "=== File structure around cursor section ==="
grep -n "\"anthropic\"\|\"openai\"\|\"cursor\"\|\"ollama\"" scripts/modules/supported-models.json | head -20

echo -e "\n=== Check anthropic section for claude-3-opus max_tokens ==="
sed -n '15,35p' scripts/modules/supported-models.json

echo -e "\n=== Check openai section ==="
sed -n '550,600p' scripts/modules/supported-models.json | grep -A 5 "max_tokens"

echo -e "\n=== Total file size and provider count ==="
wc -l scripts/modules/supported-models.json
grep -c "\"id\":" scripts/modules/supported-models.json
```

Length of output: 1157


🏁 Script executed:

```bash
#!/bin/bash
# Search for cursor provider references in the codebase

echo "=== Search for cursor provider references ==="
rg -n "cursor" scripts/modules/ --type json 2>/dev/null | head -20

echo -e "\n=== Search for supported-models.json usage ==="
rg -n "supported-models" . --type=js --type=json -C2 2>/dev/null | head -40

echo -e "\n=== Check if there's documentation about providers ==="
find . -name "*.md" -o -name "*.txt" | xargs grep -l "supported.models\|provider" 2>/dev/null | head -5
```

Length of output: 3258


I need to check the provider validation logic and see if cursor is properly integrated:


🏁 Script executed:

```bash
#!/bin/bash
# Check config-manager.js to see how cursor provider is validated

echo "=== Provider validation logic in config-manager.js ==="
sed -n '268,300p' scripts/modules/config-manager.js

echo -e "\n=== Check if custom providers list includes cursor ==="
rg -n "azure|vertex|bedrock|openrouter|ollama|cursor" scripts/modules/config-manager.js -C 3

echo -e "\n=== Full anthropic section to compare models ==="
sed -n '2,215p' scripts/modules/supported-models.json | tail -30

echo -e "\n=== Full cursor section (already shown but confirming) ==="
sed -n '895,973p' scripts/modules/supported-models.json | grep -E '"id"|"max_tokens"'
```

Length of output: 7408


🏁 Script executed:

```bash
#!/bin/bash
# Check if any config files reference cursor provider

echo "=== Search for cursor in all config files ==="
find . -name "*.json" -type f | xargs grep -l "cursor" 2>/dev/null | grep -v node_modules | grep -v ".git"

echo -e "\n=== Search for provider configuration in codebase ==="
rg -n "provider.*cursor|cursor.*provider" . --type=js -i 2>/dev/null | head -10

echo -e "\n=== Check if PR changed any provider validation ==="
rg -n "isValidProvider|customProviders" scripts/modules/config-manager.js -A 5 -B 1
```

Length of output: 935


Now let me check the provider arrays and search for Cursor API documentation:


🏁 Script executed:

```bash
#!/bin/bash
# Find provider array definitions and check if cursor is included

echo "=== Search for CUSTOM_PROVIDERS_ARRAY definition ==="
rg -n "CUSTOM_PROVIDERS_ARRAY|CUSTOM_PROVIDERS\s*=" scripts/modules/config-manager.js -A 5 -B 2

echo -e "\n=== Search for VALIDATED_PROVIDERS definition ==="
rg -n "VALIDATED_PROVIDERS\s*=" scripts/modules/config-manager.js -A 8 -B 2

echo -e "\n=== Check CursorProvider implementation ==="
head -50 src/ai-providers/cursor.js

echo -e "\n=== Check how cursor is used in ai-services-unified.js ==="
sed -n '40,80p' scripts/modules/ai-services-unified.js
```

Length of output: 3584


🌐 Web query:

Cursor AI provider API model token limits specifications

💡 Result:

Short answer — Cursor itself doesn’t define a single universal token limit for LLMs. Token limits are determined by the model/provider you configure (OpenAI, Anthropic, Google, third‑party gateways), while Cursor may apply client‑side caps or defaults in its UI/plug‑ins. Key points:

  • Provider/model limits control context window and max response tokens (e.g., Claude/Gemini/GPT family limits). Cursor uses the provider API and therefore inherits those limits. [1][6]
  • Cursor can show/report token usage and may impose UI/default caps (users have reported a 1,024 response cap in some Cursor builds and the product team confirms token usage comes from the chosen provider). Test with your provider key to see exact behavior. [8][6]
  • Some Cursor tooling/config (examples in cursor‑tools, cursorifier, npm packages) allow setting per‑provider maxTokens or token‑count encodings in config files (e.g., maxTokens for Gemini/Perplexity entries). Those settings are client/configuration level and do not override a provider’s hard limit. [3][4]
  • Practical recommendation: check the provider’s model spec for exact token/context limits (e.g., OpenAI or Anthropic docs) and verify Cursor’s effective cap by testing with your API key or checking Cursor’s Dashboard/usage. Cursor forum threads and docs note that rate/size limits you experience usually come from the provider, not Cursor. [6][9]

Sources

  • Cursor community/forum discussions (limits vs provider; truncation reports). [6][8]
  • cursor‑tools / cursorifier npm docs showing configurable maxTokens per provider. [3][4]
  • Cursor docs/community notes about token usage reporting and that provider limits apply. [9]

If you want, tell me which provider/model you use (OpenAI, Anthropic, Google, etc.) and I’ll look up that model’s exact token/context limits and show how Cursor interacts with them.


Let me check the provider constants to ensure cursor provider is properly recognized:


🏁 Script executed:

```bash
#!/bin/bash
# Find and view provider constants

echo "=== View provider constants file ==="
cat src/constants/providers.js

echo -e "\n=== Verify cursor is properly referenced ==="
rg -n "cursor" src/constants/providers.js -i
```

Length of output: 972


Cursor provider is incomplete and will fail validation—add it to provider constants and verify model metadata accuracy.

The cursor provider has been partially integrated but is missing a critical registration step and has model metadata inconsistencies:

Critical blocker: Cursor is not registered in src/constants/providers.js. It must be added to either VALIDATED_PROVIDERS or CUSTOM_PROVIDERS to pass validation. Currently, validateProvider('cursor') will return false because cursor appears in neither array. Since cursor has predefined model lists in supported-models.json, add it to VALIDATED_PROVIDERS:

```js
export const VALIDATED_PROVIDERS = [
	'anthropic',
	'openai',
	'google',
	'perplexity',
	'xai',
	'groq',
	'mistral',
	'cursor' // Add this line
];
```

Model metadata concerns:

  • Model versions are outdated: Cursor section contains 2024 model versions (e.g., claude-3-opus-20240229, claude-3-5-sonnet-20241022) while the anthropic section has newer 2025 versions (e.g., claude-opus-4-20250514, claude-3-7-sonnet-20250219). Verify whether Cursor API actually supports these older versions or if the cursor section should expose the same current versions as other providers.
  • Max token discrepancies: claude-3-opus-20240229 shows max_tokens: 4096 in cursor, yet the anthropic section lists the newer opus variant with 32000. Clarify whether this represents an actual Cursor API constraint or a data error.
  • gpt-3.5-turbo role restriction: This model is the only cursor entry restricted to ["fallback"] role only, while all others allow ["main", "fallback"]. Verify if this is intentional based on Cursor's model capabilities or a configuration error.
🤖 Prompt for AI Agents
In scripts/modules/supported-models.json around lines 895 to 973 the "cursor"
provider block is present but the provider itself is not registered, causing
validateProvider('cursor') to fail; add "cursor" to VALIDATED_PROVIDERS in
src/constants/providers.js (or CUSTOM_PROVIDERS if intentionally unvalidated) so
it passes validation, then audit and align the cursor model metadata: confirm
current supported model IDs/versions with the Cursor API (replace outdated 2024
IDs with the correct 2025 variants if needed), correct max_tokens values to
reflect Cursor-specific limits (or sync them with anthropic if Cursor proxies
the same models), and verify the gpt-3.5-turbo allowed_roles is correct (change
from ["fallback"] to ["main","fallback"] only if Cursor supports it); update
supported-models.json accordingly and run validation tests.

Comment on lines +33 to +53
	getClient(params) {
		try {
			const { apiKey, baseURL } = params;

			if (!apiKey) {
				throw new Error('Cursor API key is required.');
			}

			// Cursor API is OpenAI-compatible and uses the endpoint at api.cursor.sh
			// Default to Cursor's API endpoint unless a custom baseURL is provided
			const cursorBaseURL = baseURL || 'https://api.cursor.sh/v1';

			return createOpenAI({
				apiKey,
				baseURL: cursorBaseURL,
				compatibility: 'compatible' // Ensures compatibility with OpenAI-like APIs
			});
		} catch (error) {
			this.handleError('client initialization', error);
		}
	}

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

Verify Cursor API endpoint and compatibility mode.

The implementation looks correct, but please verify two details:

  1. Base URL: Confirm that https://api.cursor.sh/v1 is the correct Cursor API endpoint
  2. Compatibility Mode: Verify that compatibility: 'compatible' is the appropriate setting for Cursor's OpenAI-compatible API

The Vercel AI SDK's @ai-sdk/openai package supports different compatibility modes. If Cursor's API has any quirks or differences from standard OpenAI, you may need to adjust this setting.


🏁 Script executed:

#!/bin/bash
# Check if there's any Cursor API documentation in the codebase
rg -i "cursor.*api|api.*cursor" -g "*.md" -g "*.txt" -C3

Length of output: 2190


🌐 Web query:

What is the Cursor API endpoint URL and OpenAI compatibility mode?

💡 Result:

  • Cursor public API base URL: https://api.cursor.com. [1]
  • To run v0 in Cursor as an OpenAI-compatible provider, set the OpenAI base URL to the v0 endpoint: https://api.v0.dev/v1 (i.e., use Cursor’s “Override OpenAI Base URL” and point it at that endpoint). [2][3]

Sources:
[1] Cursor AI docs — AI Code Tracking / API base URL.
[2] v0.dev Cursor integration docs.
[3] v0.app Cursor integration docs.


🌐 Web query:

Vercel AI SDK @ai-sdk/openai createOpenAI compatibility modes

💡 Result:

Short answer — createOpenAI supports two compatibility modes in AI SDK v4: "strict" and "compatible". Use compatibility: 'strict' when talking to the real OpenAI API (enables newer fields like streamOptions and accurate token counts); use compatibility: 'compatible' for OpenAI‑compatible third‑party gateways (omits newer OpenAI‑only fields). In AI SDK v5 the compatibility option was removed and strict behavior is the default. [1][2][3]

Example (v4):
import { createOpenAI } from '@ai-sdk/openai';
const openai = createOpenAI({ compatibility: 'strict' }); // or 'compatible'. [1][2][3]

Sources:

  • AI SDK OpenAI provider docs (compatibility option details). [1]
  • AI SDK troubleshooting note about strict mode enabling token counts in streams. [2]
  • AI SDK migration guide noting removal of compatibility option in v5 (strict default). [3]

🏁 Script executed:

#!/bin/bash
# Check AI SDK version and look for other Cursor-related code/config
cat package.json | grep -A5 -B5 "@ai-sdk"
echo "---"
rg -i "cursor" --type=js -C2 -g "!node_modules"

Length of output: 124835


Fix Cursor API endpoint—the base URL is incorrect.

The endpoint https://api.cursor.sh/v1 does not match the documented Cursor API base URL. The Cursor public API base URL is https://api.cursor.com.

Update line 43 in src/ai-providers/cursor.js:

const cursorBaseURL = baseURL || 'https://api.cursor.com/v1';

The compatibility: 'compatible' setting is correct for this use case.

🤖 Prompt for AI Agents
In src/ai-providers/cursor.js around lines 33 to 53, the default Cursor API base
URL is incorrect; replace the hardcoded 'https://api.cursor.sh/v1' with the
documented Cursor public API base URL by defaulting to
'https://api.cursor.com/v1' (i.e., const cursorBaseURL = baseURL ||
'https://api.cursor.com/v1'); keep the existing apiKey check, error handling,
and compatibility: 'compatible' setting unchanged.
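The defaulting behaviour the fix targets can be isolated as a one-line helper (resolveCursorBaseURL is illustrative, not an existing function in the codebase; the real provider inlines this expression before passing it to createOpenAI):

```javascript
// Sketch of the corrected base-URL defaulting: fall back to the documented
// Cursor public API endpoint when no custom baseURL is supplied.
function resolveCursorBaseURL(baseURL) {
	return baseURL || 'https://api.cursor.com/v1';
}

console.log(resolveCursorBaseURL(undefined)); // 'https://api.cursor.com/v1'
console.log(resolveCursorBaseURL('http://localhost:8080/v1')); // custom URL wins
```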

This commit updates the supported-models.json configuration to allow Cursor
provider models to be used for all AI roles, including research operations.

Changes:
- Updated all Cursor models to support main, research, and fallback roles
- Enables users with only Cursor API key to use all Task Master features
- Maintains backward compatibility with existing configurations

Benefits:
- Users no longer need multiple AI provider API keys
- Single Cursor API key can handle all AI operations
- Consistent experience across all Task Master commands
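For reference, the role expansion this commit describes might look like the following supported-models.json fragment (model ID and field names are illustrative, not the file's exact contents):

```json
{
	"cursor": [
		{
			"id": "claude-3-5-sonnet-20241022",
			"allowed_roles": ["main", "research", "fallback"]
		}
	]
}
```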
@ankitpro ankitpro closed this Oct 24, 2025
@ankitpro ankitpro deleted the fix/api-key-configuration-docs branch October 24, 2025 12:14
@Crunchyman-ralph
Collaborator

@ankitpro if you think this is still important for reviewing, let me know, I'm around
