Merged
175 changes: 175 additions & 0 deletions src/main/presenter/configPresenter/providerModelSettings.ts
@@ -2336,6 +2336,181 @@ export const providerModelSettings: Record<string, { models: ProviderModelSettin
vision: false,
functionCall: false,
reasoning: false
},
// GPT-5 series model configuration
{
id: 'gpt-5-chat',
name: 'OpenAI: GPT-5 Chat',
temperature: 0.7,
maxTokens: 128000,
contextLength: 400000,
match: ['openai/gpt-5-chat'],
vision: true,
functionCall: true,
reasoning: true,
reasoningEffort: 'medium',
verbosity: 'medium',
maxCompletionTokens: 128000,
type: ModelType.Chat
},
{
id: 'gpt-5',
name: 'OpenAI: GPT-5',
temperature: 0.7,
maxTokens: 128000,
contextLength: 400000,
match: ['openai/gpt-5', 'openai/gpt-5-2025-08-07'],
vision: true,
functionCall: true,
reasoning: true,
reasoningEffort: 'medium',
verbosity: 'medium',
maxCompletionTokens: 128000,
type: ModelType.Chat
},
{
id: 'gpt-5-mini',
name: 'OpenAI: GPT-5 Mini',
temperature: 0.7,
maxTokens: 128000,
contextLength: 400000,
match: ['openai/gpt-5-mini', 'openai/gpt-5-mini-2025-08-07'],
vision: true,
functionCall: true,
reasoning: true,
reasoningEffort: 'medium',
verbosity: 'medium',
maxCompletionTokens: 128000,
type: ModelType.Chat
},
{
id: 'gpt-5-nano',
name: 'OpenAI: GPT-5 Nano',
temperature: 0.7,
maxTokens: 128000,
contextLength: 400000,
match: ['openai/gpt-5-nano', 'openai/gpt-5-nano-2025-08-07'],
vision: true,
functionCall: true,
reasoning: true,
reasoningEffort: 'medium',
verbosity: 'medium',
maxCompletionTokens: 128000,
type: ModelType.Chat
},
Comment on lines +2342 to +2400
💡 Verification agent

🧩 Analysis chain

Harden model ID matching to prevent future collisions

Even after reordering, substring matching with includes is brittle. A safer matcher reduces footguns across all providers and variants.

Consider updating the matcher (outside this diff hunk) to prefer exact IDs and only allow suffix date variants. Example:

// src/main/presenter/configPresenter/providerModelSettings.ts
// Replace the includes() check with:
if (config.match.some((matchStr) => {
  const ms = matchStr.toLowerCase()
  return (
    lowerModelId === ms ||
    // date suffix only (must start with a digit), e.g. openai/gpt-5-2025-08-07,
    // so 'openai/gpt-5' still cannot swallow 'openai/gpt-5-mini'
    (lowerModelId.startsWith(ms + '-') && /^\d/.test(lowerModelId.slice(ms.length + 1))) ||
    lowerModelId.startsWith(ms + ':') // tag suffix, e.g. ...:free
  )
})) {
  // ...
}

This keeps 'openai/gpt-5' from matching 'openai/gpt-5-mini' while still allowing date-suffixed variants.

If you want, I can open a follow-up PR to apply and validate this across the repository.

Also applies to: 2403-2457, 2460-2514
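
The stricter predicate can be exercised in isolation. The sketch below wraps it in a standalone helper (`matchesModel` is an assumed name, not a function in this repository) and restricts the `-` suffix to digit-led date segments so the generic ID cannot swallow variant IDs:

```typescript
// Standalone sketch of a stricter matcher; `matchesModel` is a
// hypothetical helper name, not part of the repository.
function matchesModel(modelId: string, matchList: string[]): boolean {
  const lowerModelId = modelId.toLowerCase()
  return matchList.some((matchStr) => {
    const ms = matchStr.toLowerCase()
    if (lowerModelId === ms) return true // exact ID
    if (lowerModelId.startsWith(ms + ':')) return true // tag suffix, e.g. ...:free
    // date suffix must start with a digit (e.g. -2025-08-07), so
    // 'openai/gpt-5' does not match 'openai/gpt-5-mini'
    return (
      lowerModelId.startsWith(ms + '-') &&
      /^\d/.test(lowerModelId.slice(ms.length + 1))
    )
  })
}
```

Under these rules, 'openai/gpt-5-2025-08-07' and 'openai/gpt-5:free' resolve to the gpt-5 entry, while 'openai/gpt-5-mini' falls through to its own entry.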


🏁 Script executed:

#!/bin/bash
# Locate where the providerModelSettings is referenced
echo "=== Searching for providerModelSettings usage ==="
rg -n "providerModelSettings" -C3

# Find where the `match` array is used to select or filter models
echo "=== Searching for config.match.some usage ==="
rg -nP "config\.match\.some" -C3

# Find any includes() calls within the presenter directory
echo "=== Searching for includes( usage in presenter folder ==="
rg -nP "includes\(" -g "src/main/presenter/**/*.ts" -C3

Length of output: 77074


Harden model-ID matching in both default and provider configs

Substring-based includes() allows collisions (e.g. “gpt-5” matching “gpt-5-mini”). Update the two matchers to require exact IDs or only valid suffixes:

• src/main/presenter/configPresenter/modelConfig.ts (around line 136)
• src/main/presenter/configPresenter/providerModelSettings.ts (around line 2634)

Replace:

if (config.match.some((matchStr) => lowerModelId.includes(matchStr.toLowerCase()))) {
  
}

With:

if (config.match.some((matchStr) => {
  const ms = matchStr.toLowerCase()
  return (
    lowerModelId === ms ||
    // date suffix only (must start with a digit)
    (lowerModelId.startsWith(ms + '-') && /^\d/.test(lowerModelId.slice(ms.length + 1))) ||
    lowerModelId.startsWith(ms + ':') // tag suffix
  )
})) {
  // ...
}

This ensures “openai/gpt-5” no longer matches “openai/gpt-5-mini,” yet still supports date- or tag-appended IDs.

🤖 Prompt for AI Agents
In src/main/presenter/configPresenter/providerModelSettings.ts around lines
2342-2400 (and also update src/main/presenter/configPresenter/modelConfig.ts
around line 136), the current substring-based matching using includes() can
cause collisions like "gpt-5" matching "gpt-5-mini"; change the matcher so each
match string is normalized to lowercase and returns true only when lowerModelId
equals the match exactly, starts with the match plus ':' (tag suffix), or starts
with the match plus '-' followed by a digit (date suffix); replace the
includes-based predicate with this stricter logic in both locations.

Comment on lines +2357 to +2400
⚠️ Potential issue

Fix substring matching collisions for GPT‑5 entries (gpt‑5 matches gpt‑5-mini/nano)

getProviderSpecificModelConfig uses substring matching (includes). With the current order, 'openai/gpt-5' will match 'openai/gpt-5-mini' and 'openai/gpt-5-nano' before their specific blocks are reached, returning the wrong config. Reorder specific variants before the generic gpt‑5.

Apply this diff to reorder GPT‑5 entries:

       {
-        id: 'gpt-5',
-        name: 'OpenAI: GPT-5',
-        temperature: 0.7,
-        maxTokens: 128000,
-        contextLength: 400000,
-        match: ['openai/gpt-5', 'openai/gpt-5-2025-08-07'],
-        vision: true,
-        functionCall: true,
-        reasoning: true,
-        reasoningEffort: 'medium',
-        verbosity: 'medium',
-        maxCompletionTokens: 128000,
-        type: ModelType.Chat
-      },
-      {
         id: 'gpt-5-mini',
         name: 'OpenAI: GPT-5 Mini',
         temperature: 0.7,
         maxTokens: 128000,
         contextLength: 400000,
         match: ['openai/gpt-5-mini', 'openai/gpt-5-mini-2025-08-07'],
         vision: true,
         functionCall: true,
         reasoning: true,
         reasoningEffort: 'medium',
         verbosity: 'medium',
         maxCompletionTokens: 128000,
         type: ModelType.Chat
       },
       {
         id: 'gpt-5-nano',
         name: 'OpenAI: GPT-5 Nano',
         temperature: 0.7,
         maxTokens: 128000,
         contextLength: 400000,
         match: ['openai/gpt-5-nano', 'openai/gpt-5-nano-2025-08-07'],
         vision: true,
         functionCall: true,
         reasoning: true,
         reasoningEffort: 'medium',
         verbosity: 'medium',
         maxCompletionTokens: 128000,
         type: ModelType.Chat
-      },
+      },
+      {
+        id: 'gpt-5',
+        name: 'OpenAI: GPT-5',
+        temperature: 0.7,
+        maxTokens: 128000,
+        contextLength: 400000,
+        match: ['openai/gpt-5', 'openai/gpt-5-2025-08-07'],
+        vision: true,
+        functionCall: true,
+        reasoning: true,
+        reasoningEffort: 'medium',
+        verbosity: 'medium',
+        maxCompletionTokens: 128000,
+        type: ModelType.Chat
+      },

Additionally, consider hardening the matcher to avoid substring collisions (see separate suggestion below).
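
A minimal sketch of why the ordering matters, using an assumed first-match lookup over a trimmed-down settings array (names here are illustrative, not the repo's actual lookup code):

```typescript
// Assumed shape of a settings entry and a first-match lookup.
interface ModelSetting { id: string; match: string[] }

function firstMatch(modelId: string, list: ModelSetting[]): string | undefined {
  const lower = modelId.toLowerCase()
  return list.find((c) => c.match.some((m) => lower.includes(m.toLowerCase())))?.id
}

// 'openai/gpt-5-mini' contains the substring 'openai/gpt-5',
// so the generic entry wins whenever it is listed first.
const genericFirst: ModelSetting[] = [
  { id: 'gpt-5', match: ['openai/gpt-5'] },
  { id: 'gpt-5-mini', match: ['openai/gpt-5-mini'] }
]
const specificFirst: ModelSetting[] = [...genericFirst].reverse()
```

With `genericFirst`, `firstMatch('openai/gpt-5-mini', …)` returns the wrong `'gpt-5'` config; reversing the array restores the expected `'gpt-5-mini'`.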

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

🤖 Prompt for AI Agents
In src/main/presenter/configPresenter/providerModelSettings.ts around lines 2357
to 2400, the generic 'gpt-5' entry can be matched before its specific variants
due to substring matching; fix by moving the 'gpt-5-mini' and 'gpt-5-nano'
objects so they appear before the generic 'gpt-5' object in the models array
(ensure their full match strings remain unchanged), and as an additional
hardening step update the provider matching logic to prefer exact/anchored
matches or longest-match-first (e.g., exact equality or regex anchors) instead
of simple includes so specific model IDs are matched before generic prefixes.
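
The longest-match-first idea from the prompt above can be sketched as a one-time sort of the settings array, so specific entries are always probed before generic prefixes (types and names are assumptions, not repository code):

```typescript
// Hypothetical hardening: sort provider model settings so entries with the
// longest (most specific) match strings are probed first.
interface ModelSetting { id: string; match: string[] }

function sortSpecificFirst(list: ModelSetting[]): ModelSetting[] {
  const longest = (c: ModelSetting) => Math.max(...c.match.map((m) => m.length))
  return [...list].sort((a, b) => longest(b) - longest(a))
}

const sorted = sortSpecificFirst([
  { id: 'gpt-5', match: ['openai/gpt-5', 'openai/gpt-5-2025-08-07'] },
  { id: 'gpt-5-mini', match: ['openai/gpt-5-mini', 'openai/gpt-5-mini-2025-08-07'] },
  { id: 'gpt-5-nano', match: ['openai/gpt-5-nano', 'openai/gpt-5-nano-2025-08-07'] }
])
// the generic 'gpt-5' entry sorts last, so substring matching
// can no longer shadow the mini/nano variants
```

This keeps the config file order-insensitive instead of relying on contributors to remember the reordering convention.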

// O1 series model configuration
{
id: 'o1-mini',
name: 'OpenAI: o1-mini',
temperature: 1,
maxTokens: 65536,
contextLength: 128000,
match: ['openai/o1-mini', 'openai/o1-mini-2024-09-12'],
vision: false,
functionCall: false,
reasoning: true,
reasoningEffort: 'medium',
maxCompletionTokens: 65536,
type: ModelType.Chat
},
{
id: 'o1',
name: 'OpenAI: o1',
temperature: 1,
maxTokens: 100000,
contextLength: 200000,
match: ['openai/o1'],
vision: false,
functionCall: false,
reasoning: true,
reasoningEffort: 'medium',
maxCompletionTokens: 100000,
type: ModelType.Chat
},
{
id: 'o1-pro',
name: 'OpenAI: o1-pro',
temperature: 1,
maxTokens: 100000,
contextLength: 200000,
match: ['openai/o1-pro'],
vision: false,
functionCall: false,
reasoning: true,
reasoningEffort: 'medium',
maxCompletionTokens: 100000,
type: ModelType.Chat
},
{
id: 'o1-preview',
name: 'OpenAI: o1-preview',
temperature: 1,
maxTokens: 32768,
contextLength: 128000,
match: ['openai/o1-preview', 'openai/o1-preview-2024-09-12'],
vision: false,
functionCall: false,
reasoning: true,
reasoningEffort: 'medium',
maxCompletionTokens: 32768,
type: ModelType.Chat
},
Comment on lines +2417 to +2457
⚠️ Potential issue

Fix substring matching collisions for O1 entries (o1 matches o1-preview/pro)

Because matching uses includes, 'openai/o1' will match 'openai/o1-preview' and 'openai/o1-pro' first. Reorder to list the specific variants before the generic 'o1'.

Apply this diff to reorder O1 entries:

       {
         id: 'o1-mini',
         name: 'OpenAI: o1-mini',
         temperature: 1,
         maxTokens: 65536,
         contextLength: 128000,
         match: ['openai/o1-mini', 'openai/o1-mini-2024-09-12'],
         vision: false,
         functionCall: false,
         reasoning: true,
         reasoningEffort: 'medium',
         maxCompletionTokens: 65536,
         type: ModelType.Chat
       },
-      {
-        id: 'o1',
-        name: 'OpenAI: o1',
-        temperature: 1,
-        maxTokens: 100000,
-        contextLength: 200000,
-        match: ['openai/o1'],
-        vision: false,
-        functionCall: false,
-        reasoning: true,
-        reasoningEffort: 'medium',
-        maxCompletionTokens: 100000,
-        type: ModelType.Chat
-      },
-      {
-        id: 'o1-pro',
-        name: 'OpenAI: o1-pro',
-        temperature: 1,
-        maxTokens: 100000,
-        contextLength: 200000,
-        match: ['openai/o1-pro'],
-        vision: false,
-        functionCall: false,
-        reasoning: true,
-        reasoningEffort: 'medium',
-        maxCompletionTokens: 100000,
-        type: ModelType.Chat
-      },
       {
         id: 'o1-preview',
         name: 'OpenAI: o1-preview',
         temperature: 1,
         maxTokens: 32768,
         contextLength: 128000,
         match: ['openai/o1-preview', 'openai/o1-preview-2024-09-12'],
         vision: false,
         functionCall: false,
         reasoning: true,
         reasoningEffort: 'medium',
         maxCompletionTokens: 32768,
         type: ModelType.Chat
       },
+      {
+        id: 'o1-pro',
+        name: 'OpenAI: o1-pro',
+        temperature: 1,
+        maxTokens: 100000,
+        contextLength: 200000,
+        match: ['openai/o1-pro'],
+        vision: false,
+        functionCall: false,
+        reasoning: true,
+        reasoningEffort: 'medium',
+        maxCompletionTokens: 100000,
+        type: ModelType.Chat
+      },
+      {
+        id: 'o1',
+        name: 'OpenAI: o1',
+        temperature: 1,
+        maxTokens: 100000,
+        contextLength: 200000,
+        match: ['openai/o1'],
+        vision: false,
+        functionCall: false,
+        reasoning: true,
+        reasoningEffort: 'medium',
+        maxCompletionTokens: 100000,
+        type: ModelType.Chat
+      },
🤖 Prompt for AI Agents
In src/main/presenter/configPresenter/providerModelSettings.ts around lines 2417
to 2457, the model matching uses substring includes so the generic entry with
match ['openai/o1'] will incorrectly match o1-preview and o1-pro; reorder the
array so the more specific entries ('openai/o1-preview', 'openai/o1-pro'
variants) appear before the generic 'openai/o1' entry to avoid collisions, i.e.,
move the o1-preview and o1-pro objects above the o1 object so specific matches
are checked first.

// O3 series model configuration
{
id: 'o3',
name: 'OpenAI: o3',
temperature: 1,
maxTokens: 100000,
contextLength: 200000,
match: ['openai/o3'],
vision: false,
functionCall: false,
reasoning: true,
reasoningEffort: 'medium',
maxCompletionTokens: 100000,
type: ModelType.Chat
},
{
id: 'o3-mini',
name: 'OpenAI: o3-mini',
temperature: 1,
maxTokens: 65536,
contextLength: 128000,
match: ['openai/o3-mini'],
vision: false,
functionCall: false,
reasoning: true,
reasoningEffort: 'medium',
maxCompletionTokens: 65536,
type: ModelType.Chat
},
{
id: 'o3-mini-high',
name: 'OpenAI: o3-mini-high',
temperature: 1,
maxTokens: 65536,
contextLength: 128000,
match: ['openai/o3-mini-high'],
vision: false,
functionCall: false,
reasoning: true,
reasoningEffort: 'high',
maxCompletionTokens: 65536,
type: ModelType.Chat
},
{
id: 'o3-pro',
name: 'OpenAI: o3-pro',
temperature: 1,
maxTokens: 100000,
contextLength: 200000,
match: ['openai/o3-pro'],
vision: false,
functionCall: false,
reasoning: true,
reasoningEffort: 'medium',
maxCompletionTokens: 100000,
type: ModelType.Chat
}
Comment on lines +2460 to 2514
⚠️ Potential issue

Fix substring matching collisions for O3 entries (o3 matches o3-mini/pro)

Same collision pattern: 'openai/o3' will match 'openai/o3-mini', 'openai/o3-mini-high', and 'openai/o3-pro'. Reorder specific variants before generic 'o3'.

Apply this diff:

-      {
-        id: 'o3',
-        name: 'OpenAI: o3',
-        temperature: 1,
-        maxTokens: 100000,
-        contextLength: 200000,
-        match: ['openai/o3'],
-        vision: false,
-        functionCall: false,
-        reasoning: true,
-        reasoningEffort: 'medium',
-        maxCompletionTokens: 100000,
-        type: ModelType.Chat
-      },
-      {
-        id: 'o3-mini',
-        name: 'OpenAI: o3-mini',
-        temperature: 1,
-        maxTokens: 65536,
-        contextLength: 128000,
-        match: ['openai/o3-mini'],
-        vision: false,
-        functionCall: false,
-        reasoning: true,
-        reasoningEffort: 'medium',
-        maxCompletionTokens: 65536,
-        type: ModelType.Chat
-      },
       {
         id: 'o3-mini-high',
         name: 'OpenAI: o3-mini-high',
         temperature: 1,
         maxTokens: 65536,
         contextLength: 128000,
         match: ['openai/o3-mini-high'],
         vision: false,
         functionCall: false,
         reasoning: true,
         reasoningEffort: 'high',
         maxCompletionTokens: 65536,
         type: ModelType.Chat
       },
+      {
+        id: 'o3-mini',
+        name: 'OpenAI: o3-mini',
+        temperature: 1,
+        maxTokens: 65536,
+        contextLength: 128000,
+        match: ['openai/o3-mini'],
+        vision: false,
+        functionCall: false,
+        reasoning: true,
+        reasoningEffort: 'medium',
+        maxCompletionTokens: 65536,
+        type: ModelType.Chat
+      },
       {
         id: 'o3-pro',
         name: 'OpenAI: o3-pro',
         temperature: 1,
         maxTokens: 100000,
         contextLength: 200000,
         match: ['openai/o3-pro'],
         vision: false,
         functionCall: false,
         reasoning: true,
         reasoningEffort: 'medium',
         maxCompletionTokens: 100000,
         type: ModelType.Chat
-      }
+      },
+      {
+        id: 'o3',
+        name: 'OpenAI: o3',
+        temperature: 1,
+        maxTokens: 100000,
+        contextLength: 200000,
+        match: ['openai/o3'],
+        vision: false,
+        functionCall: false,
+        reasoning: true,
+        reasoningEffort: 'medium',
+        maxCompletionTokens: 100000,
+        type: ModelType.Chat
+      }
🤖 Prompt for AI Agents
In src/main/presenter/configPresenter/providerModelSettings.ts around lines 2460
to 2514, the generic 'o3' match ('openai/o3') will substring-match more specific
variants ('openai/o3-mini', 'openai/o3-mini-high', 'openai/o3-pro'); reorder the
model entries so the specific variants (o3-mini-high, o3-mini, o3-pro) appear
before the generic 'o3' entry to ensure exact matching precedence, preserving
all other fields as-is.

]
},
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
@@ -217,7 +217,7 @@ export class OpenAICompatibleProvider extends BaseLLMProvider {
...(modelId.startsWith('o1') ||
modelId.startsWith('o3') ||
modelId.startsWith('o4') ||
- modelId.startsWith('gpt-5')
+ modelId.includes('gpt-5')
? { max_completion_tokens: maxTokens }
: { max_tokens: maxTokens })
}
Comment on lines 217 to 223
🛠️ Refactor suggestion

Select max_completion_tokens using normalized ID (covers OpenRouter IDs)

Use a normalizedModelId for o1/o3/o4/GPT‑5 detection. This fixes cases like openai/o3-... and openai/gpt-5-... coming from OpenRouter.

-const requestParams: OpenAI.Chat.ChatCompletionCreateParams = {
+const normalizedModelId = modelId.includes('/') ? modelId.split('/').pop()! : modelId
+const requestParams: OpenAI.Chat.ChatCompletionCreateParams = {
   messages: this.formatMessages(messages),
   model: modelId,
   stream: false,
   temperature: temperature,
-  ...(modelId.startsWith('o1') ||
-  modelId.startsWith('o3') ||
-  modelId.startsWith('o4') ||
-  modelId.includes('gpt-5')
+  ...(normalizedModelId.startsWith('o1') ||
+  normalizedModelId.startsWith('o3') ||
+  normalizedModelId.startsWith('o4') ||
+  normalizedModelId.startsWith('gpt-5')
     ? { max_completion_tokens: maxTokens }
     : { max_tokens: maxTokens })
 }
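
The normalization step this suggestion relies on can be isolated into a tiny helper (name assumed) and checked against OpenRouter-style IDs:

```typescript
// Hypothetical helper mirroring the suggested normalization: strip any
// "provider/" prefix and lowercase, so prefix checks work for OpenRouter IDs.
function normalizeModelId(modelId: string): string {
  const bare = modelId.includes('/') ? modelId.split('/').pop()! : modelId
  return bare.toLowerCase()
}
```

With this in place, `normalizeModelId('openai/gpt-5-2025-08-07').startsWith('gpt-5')` holds, so the request correctly sends max_completion_tokens.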
🤖 Prompt for AI Agents
In src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
around lines 217 to 223, the branch that chooses max_completion_tokens vs
max_tokens currently checks modelId directly and misses OpenRouter-style ids
like "openai/o3-..." or "openai/gpt-5-..."; normalize the id first (e.g., take
the segment after any '/' and lowercase it) into a variable like
normalizedModelId, then use normalizedModelId.startsWith('o1'|'o3'|'o4') or
normalizedModelId.includes('gpt-5') to decide to use max_completion_tokens,
otherwise use max_tokens.

@@ -538,7 +538,7 @@ export class OpenAICompatibleProvider extends BaseLLMProvider {
...(modelId.startsWith('o1') ||
modelId.startsWith('o3') ||
modelId.startsWith('o4') ||
- modelId.startsWith('gpt-5')
+ modelId.includes('gpt-5')
? { max_completion_tokens: maxTokens }
: { max_tokens: maxTokens })
}
Comment on lines +541 to 544
🛠️ Refactor suggestion

Normalize ID for token-field selection in streaming path

Mirror the non-streaming path to cover OpenRouter IDs.

-...(modelId.startsWith('o1') ||
-  modelId.startsWith('o3') ||
-  modelId.startsWith('o4') ||
-  modelId.includes('gpt-5')
+const normalizedModelId = modelId.includes('/') ? modelId.split('/').pop()! : modelId
+...(normalizedModelId.startsWith('o1') ||
+  normalizedModelId.startsWith('o3') ||
+  normalizedModelId.startsWith('o4') ||
+  normalizedModelId.startsWith('gpt-5')
   ? { max_completion_tokens: maxTokens }
   : { max_tokens: maxTokens })

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
around lines 541 to 544, the streaming path chooses the token-field based on
modelId but doesn't normalize the ID like the non-streaming path does, so
OpenRouter-style IDs (provider/model) aren't matched; normalize the modelId
first (e.g., lowercase and strip any provider prefix by taking substring after
the last '/' or similar) and then use that normalized id when checking
includes('gpt-5') to decide between max_completion_tokens and max_tokens,
mirroring the non-streaming logic.

@@ -568,7 +568,7 @@ export class OpenAICompatibleProvider extends BaseLLMProvider {
}

// verbosity is only supported by GPT-5 series models
- if (modelId.startsWith('gpt-5') && modelConfig.verbosity) {
+ if (modelId.includes('gpt-5') && modelConfig.verbosity) {
;(requestParams as any).verbosity = modelConfig.verbosity
}

@@ -237,7 +237,7 @@ export class OpenAIResponsesProvider extends BaseLLMProvider {
}

// verbosity is only supported by GPT-5 series models
- if (modelId.startsWith('gpt-5') && modelConfig.verbosity) {
+ if (modelId.includes('gpt-5') && modelConfig.verbosity) {
;(requestParams as any).text = {
verbosity: modelConfig.verbosity
}
@@ -580,7 +580,7 @@ export class OpenAIResponsesProvider extends BaseLLMProvider {
}

// verbosity is only supported by GPT-5 series models
- if (modelId.startsWith('gpt-5') && modelConfig.verbosity) {
+ if (modelId.includes('gpt-5') && modelConfig.verbosity) {
;(requestParams as any).text = {
verbosity: modelConfig.verbosity
}
2 changes: 1 addition & 1 deletion src/renderer/src/components/ChatConfig.vue
@@ -86,7 +86,7 @@ const showThinkingBudget = computed(() => {

const isGPT5Model = computed(() => {
const modelId = props.modelId?.toLowerCase() || ''
- return modelId.startsWith('gpt-5')
+ return modelId.includes('gpt-5')
})

// Determine whether the model supports the reasoningEffort parameter
2 changes: 1 addition & 1 deletion src/renderer/src/components/settings/ModelConfigDialog.vue
@@ -491,7 +491,7 @@ const getThinkingBudgetConfig = (modelId: string) => {

const isGPT5Model = computed(() => {
const modelId = props.modelId.toLowerCase()
return modelId.startsWith('gpt-5')
return modelId.includes('gpt-5')
})

const supportsReasoningEffort = computed(() => {