fix: align DashScope thinking models support between frontend and backend #883
```diff
@@ -544,11 +544,7 @@ const loadConfig = async () => {
   }
 }

-  if (
-    props.providerId === 'dashscope' &&
-    props.modelId.includes('qwen3') &&
-    config.value.thinkingBudget === undefined
-  ) {
+  if (props.providerId === 'dashscope' && config.value.thinkingBudget === undefined) {
     const thinkingConfig = getThinkingBudgetConfig(props.modelId)
     if (thinkingConfig) {
       config.value.thinkingBudget = thinkingConfig.defaultValue
```
```diff
@@ -707,6 +703,19 @@ const getThinkingBudgetConfig = (modelId: string) => {
     }
   }

+  if (
+    modelId.includes('qwen-plus') ||
+    modelId.includes('qwen-turbo') ||
+    modelId.includes('qwen-flash')
+  ) {
+    return {
+      min: 0,
+      max: 500000,
+      defaultValue: 500000,
+      canDisable: true
+    }
+  }
+
   return null // unsupported model
 }
```
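For reference, the lookup this hunk extends can be sketched as a standalone function. This is a hedged sketch, not the file's actual code: only the commercial branch appears in the diff, so the open-source branch and its cap below are assumed placeholders for illustration.

```typescript
interface ThinkingBudgetConfig {
  min: number
  max: number
  defaultValue: number
  canDisable: boolean
}

// Substring-based lookup: returns a thinking-budget config for supported
// DashScope models, or null for unsupported ones.
const getThinkingBudgetConfig = (modelId: string): ThinkingBudgetConfig | null => {
  const id = modelId.toLowerCase()
  if (id.includes('qwen3')) {
    // Open-source Qwen3 branch; this cap is an assumed placeholder,
    // not taken from the diff above.
    return { min: 0, max: 81920, defaultValue: 81920, canDisable: true }
  }
  if (
    id.includes('qwen-plus') ||
    id.includes('qwen-turbo') ||
    id.includes('qwen-flash')
  ) {
    // Commercial models, matching the branch added in the hunk above.
    return { min: 0, max: 500000, defaultValue: 500000, canDisable: true }
  }
  return null // unsupported model
}
```

Returning `null` for unsupported models lets callers use a single truthiness check instead of maintaining a second allowlist.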
```diff
@@ -737,26 +746,48 @@ const showGeminiThinkingBudget = computed(() => {
 const showQwen3ThinkingBudget = computed(() => {
   const isDashscope = props.providerId === 'dashscope'
   const hasReasoning = config.value.reasoning
-  const isQwen3Model = props.modelId.includes('qwen3')
+  const modelId = props.modelId.toLowerCase()
+  // DashScope - ENABLE_THINKING_MODELS
+  const supportedThinkingModels = [
+    // Open source versions
+    'qwen3-235b-a22b',
+    'qwen3-32b',
+    'qwen3-30b-a3b',
+    'qwen3-14b',
+    'qwen3-8b',
+    'qwen3-4b',
+    'qwen3-1.7b',
+    'qwen3-0.6b',
+    // Commercial versions
+    'qwen-plus',
+    'qwen-flash',
+    'qwen-turbo'
+  ]
+  const isSupported = supportedThinkingModels.some((supportedModel) =>
+    modelId.includes(supportedModel)
+  )
   const modelConfig = getThinkingBudgetConfig(props.modelId)
-  const isSupported = modelConfig !== null
-  return isDashscope && hasReasoning && isQwen3Model && isSupported
+  const hasValidConfig = modelConfig !== null
+  return isDashscope && hasReasoning && isSupported && hasValidConfig
 })
```
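The computed property above boils down to a pure predicate: show the thinking-budget control only when the provider is DashScope, reasoning is enabled, and the model resolves to a budget config. A minimal sketch (the function name and injected lookup are hypothetical, used so the logic can be exercised outside the Vue component):

```typescript
// Pure form of the showQwen3ThinkingBudget check; the config lookup is
// injected as a parameter instead of being read from component scope.
const shouldShowThinkingBudget = (
  providerId: string,
  reasoning: boolean,
  modelId: string,
  getBudgetConfig: (id: string) => object | null
): boolean =>
  providerId === 'dashscope' && reasoning && getBudgetConfig(modelId) !== null
```

Deriving visibility from the same lookup that supplies the budget values is what closes the gap this PR fixes: the UI can no longer show a control for a model the config function does not know about.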
```diff
 // Whether to show the DashScope search config
 const showDashScopeSearch = computed(() => {
   const isDashscope = props.providerId === 'dashscope'
   const modelId = props.modelId.toLowerCase()
-  // List of DashScope models that support search
-  const supportedModels = [
+  // DashScope - ENABLE_SEARCH_MODELS
+  const supportedSearchModels = [
+    'qwen3-max-preview',
+    'qwen3-max',
     'qwen-max',
     'qwen-plus',
     'qwen-flash',
     'qwen-turbo',
     'qwq-plus'
   ]
-  const isSupported = supportedModels.some((supportedModel) => modelId.includes(supportedModel))
+  const isSupported = supportedSearchModels.some((supportedModel) =>
+    modelId.includes(supportedModel)
+  )
   return isDashscope && isSupported
 })
```
|
Comment on lines
+779
to
792
Contributor
There was a problem hiding this comment. Choose a reason for hiding this commentThe reason will be displayed to describe this comment to others. Learn more. 💡 Verification agent 🧩 Analysis chainFE/BE mismatch: qwen3-max listed here but not in provider. Same alignment issue as ChatConfig.vue; prefer adding qwen3-max to the provider allowlist. Run to confirm: 🏁 Script executed: #!/bin/bash
rg -n "qwen3-max" src/main src/renderer | sed 's/^/>> /'Length of output: 624 Add ‘qwen3-max’ to backend provider allowlists 🤖 Prompt for AI Agents |
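The root cause the reviewer flags is two hand-maintained copies of the allowlist. One way to keep the renderer and main process aligned is a single shared constants module; the sketch below is a hypothetical layout (the module path, export names, and helper are assumptions, not the project's actual structure):

```typescript
// Hypothetical shared module, e.g. src/shared/dashscopeModels.ts,
// imported by both the renderer and the main process.
export const ENABLE_THINKING_MODELS = [
  'qwen3-235b-a22b', 'qwen3-32b', 'qwen3-30b-a3b', 'qwen3-14b',
  'qwen3-8b', 'qwen3-4b', 'qwen3-1.7b', 'qwen3-0.6b',
  'qwen-plus', 'qwen-flash', 'qwen-turbo'
] as const

export const ENABLE_SEARCH_MODELS = [
  'qwen3-max-preview', 'qwen3-max', 'qwen-max',
  'qwen-plus', 'qwen-flash', 'qwen-turbo', 'qwq-plus'
] as const

// Case-insensitive substring match, so versioned IDs like
// 'qwen-plus-latest' still hit the base entry.
export const matchesModel = (modelId: string, list: readonly string[]): boolean => {
  const id = modelId.toLowerCase()
  return list.some((m) => id.includes(m))
}
```

With one source of truth, adding `qwen3-max` in one place updates both the UI gating and the backend allowlist, so this class of FE/BE drift cannot recur.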
**🛠️ Refactor suggestion**

The Qwen thinking-budget range is inconsistent with ModelConfigDialog (caps are too low for qwen-plus/turbo/flash). This hunk gates the Qwen commercial models, but `getQwen3MaxBudget()` below only returns the open-source caps, so the UI cap becomes 81,920 here versus 500,000 in ModelConfigDialog. Unify the ranges.

Apply this diff to extend the cap used by the chat config:

```diff
 const getQwen3MaxBudget = (): number => {
   const modelId = props.modelId?.toLowerCase() || ''
   // Return different max caps for different Qwen3 models
   if (modelId.includes('qwen3-235b-a22b') || modelId.includes('qwen3-30b-a3b')) {
     return 81920
   } else if (
     modelId.includes('qwen3-32b') ||
     modelId.includes('qwen3-14b') ||
     modelId.includes('qwen3-8b') ||
     modelId.includes('qwen3-4b')
   ) {
     return 38912
   } else if (modelId.includes('qwen3-1.7b') || modelId.includes('qwen3-0.6b')) {
     return 20000
+  } else if (
+    modelId.includes('qwen-plus') ||
+    modelId.includes('qwen-turbo') ||
+    modelId.includes('qwen-flash')
+  ) {
+    // Align with ModelConfigDialog thinking budget for commercial Qwen models
+    return 500000
   }
   // Default value
   return 81920
 }
```