10 changes: 8 additions & 2 deletions src/main/presenter/configPresenter/modelConfig.ts
@@ -142,7 +142,10 @@ export class ModelConfigHelper {
functionCall: config.functionCall || false,
reasoning: config.reasoning || false,
type: config.type || ModelType.Chat,
-thinkingBudget: config.thinkingBudget
+thinkingBudget: config.thinkingBudget,
+reasoningEffort: config.reasoningEffort,
+verbosity: config.verbosity,
+maxCompletionTokens: config.maxCompletionTokens
}
break
}
@@ -160,7 +163,10 @@ export class ModelConfigHelper {
functionCall: false,
reasoning: false,
type: ModelType.Chat,
-thinkingBudget: undefined
+thinkingBudget: undefined,
+reasoningEffort: undefined,
+verbosity: undefined,
+maxCompletionTokens: undefined
}
}
}
52 changes: 52 additions & 0 deletions src/main/presenter/configPresenter/modelDefaultSettings.ts
@@ -865,6 +865,58 @@ export const defaultModelsSettings: DefaultModelSetting[] = [
functionCall: false,
reasoning: true
},
{
id: 'gpt-5-chat',
name: 'GPT-5 Chat',
maxTokens: 16384,
contextLength: 272000,
match: ['gpt-5-chat', 'gpt-5-chat-latest'],
vision: true,
functionCall: false,
reasoning: true,
reasoningEffort: 'medium',
verbosity: 'medium',
maxCompletionTokens: 16384
},
{
id: 'gpt-5-mini',
name: 'GPT-5 Mini',
maxTokens: 128000,
contextLength: 272000,
match: ['gpt-5-mini', 'gpt-5-mini-2025-08-07'],
vision: true,
functionCall: true,
reasoning: true,
reasoningEffort: 'medium',
verbosity: 'medium',
maxCompletionTokens: 128000
},
{
id: 'gpt-5-nano',
name: 'GPT-5 Nano',
maxTokens: 128000,
contextLength: 272000,
match: ['gpt-5-nano', 'gpt-5-nano-2025-08-07'],
vision: true,
functionCall: true,
reasoning: true,
reasoningEffort: 'medium',
verbosity: 'medium',
maxCompletionTokens: 128000
},
{
id: 'gpt-5',
name: 'GPT-5',
maxTokens: 128000,
contextLength: 272000,
match: ['gpt-5', 'gpt-5-2025-08-07'],
vision: true,
functionCall: true,
reasoning: true,
reasoningEffort: 'medium',
verbosity: 'medium',
maxCompletionTokens: 128000
},
Comment on lines +868 to +919
💡 Verification agent

🧩 Analysis chain

Remove maxTokens for GPT‑5 defaults; keep only maxCompletionTokens.

To fully honor “replace max_tokens with max_completion_tokens” for GPT‑5, don’t set maxTokens here. This avoids UI/provider ambiguity. (Make maxTokens optional in DefaultModelSetting accordingly.)

   {
     id: 'gpt-5-chat',
     name: 'GPT-5 Chat',
-    maxTokens: 16384,
     contextLength: 272000,
     match: ['gpt-5-chat', 'gpt-5-chat-latest'],
     vision: true,
     functionCall: false,
     reasoning: true,
     reasoningEffort: 'medium',
     verbosity: 'medium',
     maxCompletionTokens: 16384
   },
   {
     id: 'gpt-5-mini',
     name: 'GPT-5 Mini',
-    maxTokens: 128000,
     contextLength: 272000,
     match: ['gpt-5-mini', 'gpt-5-mini-2025-08-07'],
     vision: true,
     functionCall: true,
     reasoning: true,
     reasoningEffort: 'medium',
     verbosity: 'medium',
     maxCompletionTokens: 128000
   },
   {
     id: 'gpt-5-nano',
     name: 'GPT-5 Nano',
-    maxTokens: 128000,
     contextLength: 272000,
     match: ['gpt-5-nano', 'gpt-5-nano-2025-08-07'],
     vision: true,
     functionCall: true,
     reasoning: true,
     reasoningEffort: 'medium',
     verbosity: 'medium',
     maxCompletionTokens: 128000
   },
   {
     id: 'gpt-5',
     name: 'GPT-5',
-    maxTokens: 128000,
     contextLength: 272000,
     match: ['gpt-5', 'gpt-5-2025-08-07'],
     vision: true,
     functionCall: true,
     reasoning: true,
     reasoningEffort: 'medium',
     verbosity: 'medium',
     maxCompletionTokens: 128000
   },

Please also verify that the shared types support maxCompletionTokens and do not require maxTokens for GPT‑5:


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "Check shared types for maxCompletionTokens/maxTokens:"
rg -n --pretty "interface .*Model.*|type .*Model.*|maxCompletionTokens|maxTokens|temperature" src/shared

echo
echo "Ensure renderer uses maxCompletionTokens for GPT‑5 and hides temperature:"
rg -n -A3 -B3 --pretty "gpt-5|maxCompletionTokens|max_completion_tokens|temperature" src/renderer



Enforce GPT-5 defaults: remove maxTokens, use only maxCompletionTokens

The current changes only adjust defaults in modelDefaultSettings.ts, but the shared types and renderer haven’t been updated. Please address the following:

• src/main/presenter/configPresenter/modelDefaultSettings.ts
Remove maxTokens entries for all GPT-5 models, e.g.:

  {
    id: 'gpt-5-chat',
    name: 'GPT-5 Chat',
-   maxTokens: 16384,
    contextLength: 272000,
    …
    maxCompletionTokens: 16384
  },

• src/shared/presenter.d.ts
Make maxTokens optional in both ModelConfig and DefaultModelSetting so GPT-5 can omit it:

 export interface ModelConfig {
- maxTokens: number
+ maxTokens?: number
   temperature?: number
   maxCompletionTokens?: number // GPT-5 uses this instead of maxTokens
 }

 export interface DefaultModelSetting {
   temperature?: number
-  maxTokens: number
+  maxTokens?: number
   maxCompletionTokens?: number // GPT-5 uses this instead of maxTokens
   …
 }

• Renderer updates (add maxCompletionTokens, hide temperature/maxTokens for GPT-5):
stores/chat.ts: extend CONVERSATION_SETTINGS to include maxCompletionTokens, and in your initialization logic, pull config.maxCompletionTokens instead of config.maxTokens for GPT-5 (see the sketch below).
components/ChatConfig.vue, NewThread.vue, TitleView.vue:
- Wrap the temperature and max-tokens controls in v-if="!isGPT5Model"
- Add a slider/input bound to maxCompletionTokens shown only when isGPT5Model
ModelConfigDialog.vue already hides temperature for GPT-5; mirror similar logic for max-tokens vs. max-completion-tokens.

After these changes, GPT-5 models will have a single, unambiguous completion-budget parameter across the app.
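A hedged sketch of the stores/chat.ts change described above, assuming a Pinia-style setup where CONVERSATION_SETTINGS is the settings type and config is the resolved model config; isGPT5 and modelId are illustrative names, not code from this PR:

// Hypothetical initialization branch for stores/chat.ts; only CONVERSATION_SETTINGS,
// maxCompletionTokens, and maxTokens are taken from the comment above.
const isGPT5 = modelId.toLowerCase().startsWith('gpt-5')

const settings: Partial<CONVERSATION_SETTINGS> = {
  contextLength: config.contextLength,
  ...(isGPT5
    ? // GPT-5: a single completion budget, and no temperature at all
      { maxCompletionTokens: config.maxCompletionTokens }
    : { maxTokens: config.maxTokens, temperature: config.temperature ?? 0.7 })
}

Either branch leaves the other family's field undefined, so downstream request builders can map whichever key is present to max_completion_tokens or max_tokens.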


🤖 Prompt for AI Agents
In src/main/presenter/configPresenter/modelDefaultSettings.ts lines 868-919,
remove all maxTokens properties from GPT-5 model entries to enforce using only
maxCompletionTokens. Then, in src/shared/presenter.d.ts, update the ModelConfig
and DefaultModelSetting types to make maxTokens optional so GPT-5 models can
omit it. Next, in stores/chat.ts, extend CONVERSATION_SETTINGS to include
maxCompletionTokens and update initialization logic to use
config.maxCompletionTokens instead of config.maxTokens for GPT-5 models.
Finally, in components/ChatConfig.vue, NewThread.vue, and TitleView.vue,
conditionally hide temperature and maxTokens controls for GPT-5 models and add a
slider/input bound to maxCompletionTokens visible only for GPT-5, ensuring
ModelConfigDialog.vue mirrors this logic for consistency.

{
id: 'gpt-4.5-preview',
name: 'GPT-4.5 Preview',
66 changes: 63 additions & 3 deletions src/main/presenter/configPresenter/providerModelSettings.ts
@@ -13,13 +13,70 @@ export interface ProviderModelSetting {
functionCall?: boolean // 是否支持函数调用
reasoning?: boolean // 是否支持推理能力
type?: ModelType // 模型类型,默认为Chat
// New GPT-5 series parameters
reasoningEffort?: 'minimal' | 'low' | 'medium' | 'high'
verbosity?: 'low' | 'medium' | 'high'
maxCompletionTokens?: number // GPT-5 series uses this instead of maxTokens
Comment on lines +16 to +19
⚠️ Potential issue

Make maxTokens optional in schema to align with GPT‑5 (no max_tokens).

Keeping maxTokens required forces GPT‑5 entries to set it, increasing the chance of accidentally sending max_tokens. Make it optional so GPT‑5 can omit it, and let providers prefer maxCompletionTokens.

Proposed interface tweak (outside the selected lines) to unblock GPT‑5 objects omitting maxTokens:

export interface ProviderModelSetting {
  id: string
  name: string
  match: string[]
  maxTokens?: number // now optional; GPT‑5 uses maxCompletionTokens
  contextLength: number
  temperature?: number
  vision?: boolean
  functionCall?: boolean
  reasoning?: boolean
  type?: ModelType
  // GPT‑5 series parameters
  reasoningEffort?: 'minimal' | 'low' | 'medium' | 'high'
  verbosity?: 'low' | 'medium' | 'high'
  maxCompletionTokens?: number
}

Also ensure DefaultModelSetting/ModelConfig mirror this optionality.

🤖 Prompt for AI Agents
In src/main/presenter/configPresenter/providerModelSettings.ts around lines 16
to 19, the maxTokens property in the ProviderModelSetting interface should be
made optional to align with GPT-5 usage where max_tokens is not required. Update
the interface definition to mark maxTokens as optional by adding a question
mark, and ensure that related types like DefaultModelSetting and ModelConfig
also reflect this optionality to allow GPT-5 models to omit maxTokens and use
maxCompletionTokens instead.

}

// Create a mapping object for each provider, wrapping model configs in a models array
export const providerModelSettings: Record<string, { models: ProviderModelSetting[] }> = {
// OpenAI provider-specific model configuration
openai: {
-models: []
+models: [
{
id: 'gpt-5-chat',
name: 'GPT-5 Chat',
maxTokens: 16384,
contextLength: 272000,
match: ['gpt-5-chat', 'gpt-5-chat-latest'],
vision: true,
functionCall: false,
reasoning: true,
reasoningEffort: 'medium',
verbosity: 'medium',
maxCompletionTokens: 16384
},
{
id: 'gpt-5-mini',
name: 'GPT-5 Mini',
maxTokens: 128000,
contextLength: 272000,
match: ['gpt-5-mini', 'gpt-5-mini-2025-08-07'],
vision: true,
functionCall: true,
reasoning: true,
reasoningEffort: 'medium',
verbosity: 'medium',
maxCompletionTokens: 128000
},
{
id: 'gpt-5-nano',
name: 'GPT-5 Nano',
maxTokens: 128000,
contextLength: 272000,
match: ['gpt-5-nano', 'gpt-5-nano-2025-08-07'],
vision: true,
functionCall: true,
reasoning: true,
reasoningEffort: 'medium',
verbosity: 'medium',
maxCompletionTokens: 128000
},
{
id: 'gpt-5',
name: 'GPT-5',
maxTokens: 128000,
contextLength: 272000,
match: ['gpt-5', 'gpt-5-2025-08-07'],
vision: true,
functionCall: true,
reasoning: true,
reasoningEffort: 'medium',
verbosity: 'medium',
maxCompletionTokens: 128000
}
]
Comment on lines +26 to +79
🛠️ Refactor suggestion

Avoid setting maxTokens for GPT‑5 models; rely on maxCompletionTokens.

To prevent accidental use of max_tokens, drop maxTokens in GPT‑5 entries (once the type is optional). Temperature is already omitted — good.

       {
         id: 'gpt-5-chat',
         name: 'GPT-5 Chat',
-        maxTokens: 16384,
         contextLength: 272000,
         match: ['gpt-5-chat', 'gpt-5-chat-latest'],
         vision: true,
         functionCall: false,
         reasoning: true,
         reasoningEffort: 'medium',
         verbosity: 'medium',
         maxCompletionTokens: 16384
       },
       {
         id: 'gpt-5-mini',
         name: 'GPT-5 Mini',
-        maxTokens: 128000,
         contextLength: 272000,
         match: ['gpt-5-mini', 'gpt-5-mini-2025-08-07'],
         vision: true,
         functionCall: true,
         reasoning: true,
         reasoningEffort: 'medium',
         verbosity: 'medium',
         maxCompletionTokens: 128000
       },
       {
         id: 'gpt-5-nano',
         name: 'GPT-5 Nano',
-        maxTokens: 128000,
         contextLength: 272000,
         match: ['gpt-5-nano', 'gpt-5-nano-2025-08-07'],
         vision: true,
         functionCall: true,
         reasoning: true,
         reasoningEffort: 'medium',
         verbosity: 'medium',
         maxCompletionTokens: 128000
       },
       {
         id: 'gpt-5',
         name: 'GPT-5',
-        maxTokens: 128000,
         contextLength: 272000,
         match: ['gpt-5', 'gpt-5-2025-08-07'],
         vision: true,
         functionCall: true,
         reasoning: true,
         reasoningEffort: 'medium',
         verbosity: 'medium',
         maxCompletionTokens: 128000
       }
🤖 Prompt for AI Agents
In src/main/presenter/configPresenter/providerModelSettings.ts between lines 26
and 79, the GPT-5 model entries incorrectly include the maxTokens property.
Remove the maxTokens field from all GPT-5 model objects to rely solely on
maxCompletionTokens as intended, ensuring the type allows maxTokens to be
optional.

},

// Volcano Engine (Doubao) provider-specific model configuration
@@ -2390,11 +2447,14 @@ export function getProviderSpecificModelConfig(
return {
maxTokens: config.maxTokens,
contextLength: config.contextLength,
-temperature: config.temperature || 0.7,
+temperature: config.temperature, // keep optional; some models don't support it
vision: config.vision || false,
functionCall: config.functionCall || false,
reasoning: config.reasoning || false,
-type: config.type || ModelType.Chat
+type: config.type || ModelType.Chat,
+reasoningEffort: config.reasoningEffort,
+verbosity: config.verbosity,
+maxCompletionTokens: config.maxCompletionTokens
}
Comment on lines +2450 to 2458
💡 Verification agent

🧩 Analysis chain

Guard against sending temperature for GPT‑5 and prefer maxCompletionTokens.

Even if upstream accidentally provides a temperature, GPT‑5 must not receive it. Also, ensure downstream logic prioritizes maxCompletionTokens for GPT‑5.

-        temperature: config.temperature, // keep optional; some models don't support it
+        // GPT‑5 does not support temperature; drop it for GPT‑5 models
+        temperature: lowerModelId.includes('gpt-5') ? undefined : config.temperature,
         vision: config.vision || false,
         functionCall: config.functionCall || false,
         reasoning: config.reasoning || false,
         type: config.type || ModelType.Chat,
         reasoningEffort: config.reasoningEffort,
         verbosity: config.verbosity,
         maxCompletionTokens: config.maxCompletionTokens

Run the script below to confirm no code path still defaults/forces temperature for GPT‑5 and that providers send max_completion_tokens:


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "Search places that default temperature:"
rg -n --pretty "(default.*temperature|temperature\s*=\s*0\.7|temperature:\s*0\.7)"

echo
echo "Search GPT-5 handling in providers (ensure no temperature in payload):"
rg -n --pretty "gpt-5|gpt5|gpt_5" src | sed -n '1,200p'

echo
echo "Search payload keys used for token limits:"
rg -n --pretty "max_completion_tokens|maxCompletionTokens|max_tokens|maxTokens"

echo
echo "OpenAI provider request construction around token params:"
rg -n -A5 -B5 --pretty "max_completion_tokens|max_tokens" src/main/presenter/llmProviderPresenter



🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "Extract OPENAI_REASONING_MODELS constant"
rg -n --pretty "OPENAI_REASONING_MODELS" -A5 src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts



🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "Snippet around line 1580 in threadPresenter:"
sed -n '1550,1620p' src/main/presenter/threadPresenter/index.ts

echo
echo "Snippet around line 1640 in threadPresenter:"
sed -n '1620,1700p' src/main/presenter/threadPresenter/index.ts



🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "===== Top of providerModelSettings.ts ====="
sed -n '1,200p' src/main/presenter/configPresenter/providerModelSettings.ts

echo
echo "===== Mapping function around lines 2300–2500 ====="
sed -n '2300,2500p' src/main/presenter/configPresenter/providerModelSettings.ts



Drop temperature for GPT-5 in providerModelSettings

The helper in src/main/presenter/configPresenter/providerModelSettings.ts still blindly returns config.temperature, which—even if undefined—could end up in payloads. Let’s explicitly omit it for any model ID containing “gpt-5” and rely on maxCompletionTokens instead.

Locations to update:
getProviderSpecificModelConfig in src/main/presenter/configPresenter/providerModelSettings.ts (around the final return).

Suggested diff:

--- a/src/main/presenter/configPresenter/providerModelSettings.ts
+++ b/src/main/presenter/configPresenter/providerModelSettings.ts
@@ export function getProviderSpecificModelConfig(
-      temperature: config.temperature, // keep optional; some models don't support it
+      // GPT-5 does not accept a temperature parameter—always drop it
+      temperature: lowerModelId.includes('gpt-5') ? undefined : config.temperature,
       vision: config.vision || false,
       functionCall: config.functionCall || false,
       reasoning: config.reasoning || false,
       type: config.type || ModelType.Chat,
       reasoningEffort: config.reasoningEffort,
       verbosity: config.verbosity,
       maxCompletionTokens: config.maxCompletionTokens

This change ensures GPT-5 models never carry a temperature key, and downstream code will automatically use max_completion_tokens.

🤖 Prompt for AI Agents
In src/main/presenter/configPresenter/providerModelSettings.ts around lines 2450
to 2458, update the getProviderSpecificModelConfig function to conditionally
exclude the temperature property when the model ID contains "gpt-5". Instead of
always including temperature from config, add a check to omit temperature for
GPT-5 models so it does not appear in the payload. Keep maxCompletionTokens as
the controlling parameter for GPT-5. This prevents sending an unsupported
temperature field for GPT-5 models.

}
}
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
@@ -41,7 +41,11 @@ const OPENAI_REASONING_MODELS = [
'o1-mini',
'o1-pro',
'o1-preview',
-'o1'
+'o1',
+'gpt-5',
+'gpt-5-mini',
+'gpt-5-nano',
+'gpt-5-chat'
]
const OPENAI_IMAGE_GENERATION_MODELS = [
'gpt-4o-all',
@@ -210,7 +214,10 @@ export class OpenAICompatibleProvider extends BaseLLMProvider {
model: modelId,
stream: false,
temperature: temperature,
-...(modelId.startsWith('o1') || modelId.startsWith('o3') || modelId.startsWith('o4')
+...(modelId.startsWith('o1') ||
+modelId.startsWith('o3') ||
+modelId.startsWith('o4') ||
+modelId.startsWith('gpt-5')
? { max_completion_tokens: maxTokens }
: { max_tokens: maxTokens })
}
Comment on lines +217 to 223
🛠️ Refactor suggestion

DRY up token-limit parameter branching

The startsWith(...) ? { max_completion_tokens } : { max_tokens } logic now appears twice and will likely be copied again for future families.

Extracting into a helper prevents drift:

function getTokenParam(modelId: string, tokens: number) {
  return modelId.startsWith('o1') ||
         modelId.startsWith('o3') ||
         modelId.startsWith('o4') ||
         modelId.startsWith('gpt-5')
    ? { max_completion_tokens: tokens }
    : { max_tokens: tokens };
}

Then:

- ...,
- ...(modelId.startsWith('o1') || ... ? { ... } : { ... })
+ ..., 
+ ...getTokenParam(modelId, maxTokens)
🤖 Prompt for AI Agents
In src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
around lines 217 to 223, the logic for choosing between max_completion_tokens
and max_tokens based on modelId prefixes is duplicated. Refactor by extracting
this conditional into a helper function that takes modelId and token count as
parameters and returns the appropriate object. Replace the duplicated inline
logic with calls to this helper to improve maintainability and prevent code
duplication.

@@ -528,7 +535,10 @@ export class OpenAICompatibleProvider extends BaseLLMProvider {
model: modelId,
stream: true,
temperature,
-...(modelId.startsWith('o1') || modelId.startsWith('o3') || modelId.startsWith('o4')
+...(modelId.startsWith('o1') ||
+modelId.startsWith('o3') ||
+modelId.startsWith('o4') ||
+modelId.startsWith('gpt-5')
? { max_completion_tokens: maxTokens }
: { max_tokens: maxTokens })
}
src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
@@ -31,7 +31,11 @@ const OPENAI_REASONING_MODELS = [
'o1-mini',
'o1-pro',
'o1-preview',
-'o1'
+'o1',
+'gpt-5',
+'gpt-5-mini',
+'gpt-5-nano',
+'gpt-5-chat'
]
const OPENAI_IMAGE_GENERATION_MODELS = [
'gpt-4o-all',
@@ -224,6 +228,20 @@ export class OpenAIResponsesProvider extends BaseLLMProvider {
stream: false
}

if (modelId.startsWith('gpt-5')) {
const modelConfig = this.configPresenter.getModelConfig(modelId, this.provider.id)
if (modelConfig.reasoningEffort) {
;(requestParams as any).reasoning = {
effort: modelConfig.reasoningEffort
}
}
if (modelConfig.verbosity) {
;(requestParams as any).text = {
verbosity: modelConfig.verbosity
}
}
}

OPENAI_REASONING_MODELS.forEach((noTempId) => {
if (modelId.startsWith(noTempId)) {
delete requestParams.temperature
@@ -554,6 +572,19 @@ export class OpenAIResponsesProvider extends BaseLLMProvider {
requestParams.tools = apiTools
}

if (modelId.startsWith('gpt-5')) {
if (modelConfig.reasoningEffort) {
;(requestParams as any).reasoning = {
effort: modelConfig.reasoningEffort
}
}
if (modelConfig.verbosity) {
;(requestParams as any).text = {
verbosity: modelConfig.verbosity
}
}
}

OPENAI_REASONING_MODELS.forEach((noTempId) => {
if (modelId.startsWith(noTempId)) delete requestParams.temperature
})
2 changes: 1 addition & 1 deletion src/main/presenter/threadPresenter/index.ts
@@ -745,7 +745,7 @@ export class ThreadPresenter implements IThreadPresenter {
if (defaultModelsSettings) {
mergedSettings.maxTokens = defaultModelsSettings.maxTokens
mergedSettings.contextLength = defaultModelsSettings.contextLength
-mergedSettings.temperature = defaultModelsSettings.temperature
+mergedSettings.temperature = defaultModelsSettings.temperature ?? 0.7
// Reset thinkingBudget to the model's default config; set it to undefined if the model config has none
mergedSettings.thinkingBudget = defaultModelsSettings.thinkingBudget
}
9 changes: 7 additions & 2 deletions src/renderer/src/components/ChatConfig.vue
@@ -73,6 +73,11 @@ const showThinkingBudget = computed(() => {
return isGemini && isGemini25
})

const isGPT5Model = computed(() => {
const modelId = props.modelId?.toLowerCase() || ''
return modelId.startsWith('gpt-5')
})

// The currently displayed thinking budget value
const displayThinkingBudget = computed({
get: () => {
@@ -126,8 +131,8 @@ const handleDynamicThinkingToggle = (enabled: boolean) => {
/>
</div>

-<!-- Temperature -->
-<div class="space-y-4 px-2">
+<!-- Temperature (hidden for GPT-5 series models) -->
+<div v-if="!isGPT5Model" class="space-y-4 px-2">
<div class="flex items-center justify-between">
Comment on lines +134 to 136
⚠️ Potential issue

UI hides temperature but component still emits it

The slider is hidden for GPT-5, yet temperatureValue is still declared/updated and parent components (e.g. NewThread.vue) forward that value to the backend.
Either:

  1. Stop emitting update:temperature when isGPT5Model is true, or
  2. Emit undefined and let upstream strip the field.

Otherwise requests will still carry an unsupported temperature parameter.
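A minimal sketch of option 1, assuming the slider binds through a writable computed and that props, emit, and isGPT5Model already exist in ChatConfig.vue's setup scope (the exact prop/emit wiring is an assumption):

import { computed } from 'vue'

// Hypothetical writable computed for the temperature slider; the getter's 0.7
// fallback only feeds the (hidden) UI and is never sent anywhere for GPT-5.
const temperatureValue = computed({
  get: () => props.temperature ?? 0.7,
  set: (value: number) => {
    // Option 1: never emit a temperature update for GPT-5 models,
    // so parent components have nothing to forward to the backend.
    if (!isGPT5Model.value) emit('update:temperature', value)
  }
})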

🤖 Prompt for AI Agents
In src/renderer/src/components/ChatConfig.vue around lines 134 to 136, the
temperature slider is hidden for GPT-5 models but the component still emits the
temperature value, causing unsupported parameters to be sent upstream. Modify
the code to either stop emitting the 'update:temperature' event when
'isGPT5Model' is true or emit 'undefined' instead, so the parent components do
not forward an invalid temperature parameter.

<div class="flex items-center space-x-2">
<Icon icon="lucide:thermometer" class="w-4 h-4 text-muted-foreground" />
4 changes: 3 additions & 1 deletion src/renderer/src/components/NewThread.vue
@@ -87,6 +87,8 @@
v-model:artifacts="artifacts"
:context-length-limit="contextLengthLimit"
:max-tokens-limit="maxTokensLimit"
:model-id="activeModel?.id"
:provider-id="activeModel?.providerId"
/>
Comment on lines +90 to 92
⚠️ Potential issue

Passing model identity 👍 — but remember to drop the params GPT-5 bans

ChatConfig now receives modelId/providerId – good.
However, handleSend still unconditionally includes temperature and maxTokens, both deprecated for GPT-5 (spec requires maxCompletionTokens and forbids temperature). Make the payload conditional:

-  const threadId = await chatStore.createThread(content.text, {
-    providerId: activeModel.value.providerId,
-    modelId: activeModel.value.id,
-    systemPrompt: systemPrompt.value,
-    temperature: temperature.value,
-    contextLength: contextLength.value,
-    maxTokens: maxTokens.value,
+  const isGPT5 = activeModel.value.id?.toLowerCase().startsWith('gpt-5')
+  const threadId = await chatStore.createThread(content.text, {
+    providerId: activeModel.value.providerId,
+    modelId: activeModel.value.id,
+    systemPrompt: systemPrompt.value,
+    ...(isGPT5 ? {} : { temperature: temperature.value }),
+    contextLength: contextLength.value,
+    ...(isGPT5
+        ? { maxCompletionTokens: maxTokens.value }
+        : { maxTokens: maxTokens.value }),

Failing to do this will cause GPT-5 calls to 400/422.


🤖 Prompt for AI Agents
In src/renderer/src/components/NewThread.vue around lines 90 to 92, the
handleSend function still sends deprecated parameters temperature and maxTokens
for GPT-5 models, which causes errors. Update handleSend to conditionally
include temperature and maxTokens only for non-GPT-5 models, and use
maxCompletionTokens instead for GPT-5. Ensure the payload respects the GPT-5
spec by excluding temperature and replacing maxTokens with maxCompletionTokens
when the active model is GPT-5.

</PopoverContent>
</Popover>
@@ -163,7 +165,7 @@ watch(
activeModel.value.id,
activeModel.value.providerId
)
-temperature.value = config.temperature
+temperature.value = config.temperature ?? 0.7
🛠️ Refactor suggestion

Defaulting temperature for GPT-5 defeats its removal

temperature.value = config.temperature ?? 0.7 reinstates a value even when the model spec omits it.
Guard it:

if (config.temperature !== undefined) {
  temperature.value = config.temperature
} else if (!activeModel.value.id.startsWith('gpt-5')) {
  temperature.value = 0.7
}

This prevents accidentally re-adding temperature to GPT-5 requests.

🤖 Prompt for AI Agents
In src/renderer/src/components/NewThread.vue at line 168, the assignment of
temperature.value uses a default of 0.7 even when the model is GPT-5, which
should not have a temperature set. Modify the code to only assign
temperature.value from config.temperature if it is defined; otherwise, assign
0.7 only if the active model's id does not start with 'gpt-5'. This prevents
setting a default temperature for GPT-5 models.

contextLength.value = config.contextLength
maxTokens.value = config.maxTokens
contextLengthLimit.value = config.contextLength
@@ -239,6 +239,15 @@
)
}}
</DialogTitle>
<DialogDescription>
{{
t(
checkResult
? 'settings.provider.dialog.verify.successDesc'
: 'settings.provider.dialog.verify.failedDesc'
)
}}
</DialogDescription>
</DialogHeader>
<DialogFooter>
<Button variant="outline" @click="showCheckModelDialog = false">
@@ -257,11 +266,11 @@
>
<DialogHeader>
<DialogTitle>{{ t('settings.provider.inputOAuthCode') }}</DialogTitle>
+<DialogDescription>
+{{ t('settings.provider.oauthCodeHint') }}
+</DialogDescription>
</DialogHeader>
<div class="space-y-4">
-<div class="text-sm text-muted-foreground">
-{{ t('settings.provider.oauthCodeHint') }}
-</div>
<Input
v-model="oauthCode"
:placeholder="t('settings.provider.oauthCodePlaceholder')"
@@ -310,6 +319,7 @@ import { Icon } from '@iconify/vue'
import {
Dialog,
DialogContent,
DialogDescription,
DialogFooter,
DialogHeader,
DialogTitle