Conversation

Contributor

@neoragex2002 neoragex2002 commented Aug 28, 2025

fix: update Gemini 2.5 Flash Lite (GA) model config

Summary by CodeRabbit

  • New Features

    • TokenFlux provider support and TokenFlux icon added.
    • Added Gemini 2.5 family models including Flash Image Preview and Flash Lite variants.
  • Improvements

    • Updated Gemini model token/context limits and capability flags for image-preview and 2.5-era models.
    • Streaming now surfaces reasoning/thought content more quickly for supported models.
    • Markdown renderer: fixed line-wrapping in links and improved code-block language fallback.

Contributor

coderabbitai bot commented Aug 28, 2025

Walkthrough

Adds TokenFlux provider integration and tokenflux icon; introduces multiple Gemini 2.5 variants (flash-image-preview, flash-lite) and updates token/context limits and capabilities; refactors Gemini streaming to detect a new thought format and reworks legacy buffering; minor renderer and catch-clause tweaks.

Changes

Cohort / File(s): Summary

• Config: Default Model Settings (src/main/presenter/configPresenter/modelDefaultSettings.ts)
Added Gemini 2.5 entries (google/gemini-2.5-flash-image-preview, models/gemini-2.5-flash, models/gemini-2.5-flash-lite, gemini-2.5-pro), adjusted token limits (several 65536→65535 and 8192→8191), set vision/functionCall/reasoning/thinkingBudget flags, and introduced an ImageGeneration-type preview entry.

• Config: Provider Model Settings (src/main/presenter/configPresenter/providerModelSettings.ts)
Registered the new Gemini 2.5 models (image-preview and flash-lite variants), harmonized IDs/match arrays, adjusted maxTokens/contextLength across Gemini variants (including 1048576-token contexts), and added an image-preview entry for OpenRouter. Data-only configuration changes.

• Provider: Gemini streaming & model metadata (src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts)
Updated GEMINI_MODELS metadata (token/context adjustments, added flash-lite), tightened isFunctionCallSupported/isReasoningSupported to exclude flash-image-preview variants, added detection/handling for a new thought format (part.thought) with incremental thought emission, and reworked legacy <think> buffering/flush to avoid duplication.

• Provider: TokenFlux integration (src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts, src/main/presenter/llmProviderPresenter/index.ts, src/main/presenter/configPresenter/providers.ts)
New TokenFluxProvider (extends OpenAICompatibleProvider) with completions/summaries/generateText, key validation (getKeyStatus/check), and TokenFlux-specific model discovery (fetchOpenAIModels); added the tokenflux provider to DEFAULT_PROVIDERS and wired the provider factory to instantiate TokenFluxProvider.

• Renderer: Icons & Markdown (src/renderer/src/components/icons/ModelIcon.vue, src/renderer/src/components/markdown/MarkdownRenderer.vue)
Added the tokenflux icon mapping; extended the code-block language lookup to v.language ?? v.node?.lang ?? v.node?.language ?? 'plaintext'; added a CSS rule to force .markdown-renderer inline inside anchors, plus minor formatting changes.

• Thread Presenter: Error handling (src/main/presenter/threadPresenter/contentEnricher.ts)
Replaced catch (error) with a bindingless catch in extractMainContent.
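
For orientation, a minimal sketch of the kind of entry these settings files hold, using the field names and values quoted later in this thread (the GEMINI_MODELS static entry); exact shapes in modelDefaultSettings.ts may differ:

```ts
// Illustrative only; values mirror the Gemini 2.5 Flash Lite entry quoted below.
const gemini25FlashLiteDefaults = {
  id: 'models/gemini-2.5-flash-lite',
  name: 'Gemini 2.5 Flash Lite',
  contextLength: 1048576,
  maxTokens: 65535, // note the 65536 -> 65535 adjustment made in this PR
  vision: true,
  functionCall: true,
  reasoning: true
}
```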

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant U as User
  participant App as App
  participant GP as GeminiProvider
  participant GA as Gemini API (Stream)

  U->>App: send prompt (model: gemini-2.5*)
  App->>GP: generateStream(request)
  GP->>GA: open stream
  GA-->>GP: parts (text / thought flag / tool events / <think> tags)

  alt new thought format (part.thought == true)
    note right of GP #e6f7ff: incremental reasoning stream
    GP-->>App: emitReasoning(thought chunk)
    GP-->>App: emitText(text chunks)
  else legacy <think> format
    note right of GP #fff7e6: buffer until <think> boundaries
    GP->>GP: accumulate buffer, detect <think> start/end
    GP-->>App: emitReasoning(think block)
    GP-->>App: emitText(flush buffered text)
  end

  GA-->>GP: done
  GP-->>App: complete()
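A minimal sketch of the branch shown above, assuming the standardized event shapes quoted in the review diffs below ({ type: 'reasoning', reasoning_content } and { type: 'text', content }) and stream parts carrying an optional thought flag; the legacy <think> buffering path is omitted:

```ts
type StreamEvent =
  | { type: 'reasoning'; reasoning_content: string }
  | { type: 'text'; content: string }

// Dispatch one chunk's parts to standardized events (new thought format only).
async function* dispatchParts(
  parts: Array<{ text?: string; thought?: boolean }>
): AsyncGenerator<StreamEvent> {
  for (const part of parts) {
    if (part.thought === true && part.text) {
      // New format: the part itself is flagged as reasoning content
      yield { type: 'reasoning', reasoning_content: part.text }
    } else if (part.text) {
      // Plain text; the legacy <think> buffering would intercept here
      yield { type: 'text', content: part.text }
    }
  }
}
```
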
sequenceDiagram
  autonumber
  participant App as App
  participant TF as TokenFluxProvider
  participant TFAPI as TokenFlux API

  App->>TF: fetchOpenAIModels()
  TF->>TFAPI: GET /models (with API key)
  TFAPI-->>TF: model list / metadata
  TF->>App: update configPresenter (MODEL_META entries)
  alt on error
    TF->>App: fallback -> super.fetchOpenAIModels()
  end

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120 minutes

Suggested reviewers

  • zerob13

Poem

Hop hop, a new provider arrives tonight,
TokenFlux glows and Gemini dreams take flight.
Thoughts stream in new tiny streams of light,
I nibble bytes and celebrate the byte—
A rabbit cheers, commit secured just right 🥕✨


Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (4)
src/main/presenter/configPresenter/modelDefaultSettings.ts (1)

374-385: Unify model name: “Flash Lite” (no hyphen) for consistency

Current: "Gemini 2.5 Flash-Lite". Elsewhere it’s "Gemini 2.5 Flash Lite". Standardize to avoid duplicate entries/UI mismatches.

-    name: 'Gemini 2.5 Flash-Lite',
+    name: 'Gemini 2.5 Flash Lite',
src/main/presenter/configPresenter/providerModelSettings.ts (1)

218-227: GA Flash Lite entry — fields OK; name consistency

Name here is "Gemini 2.5 Flash Lite". Please also update the name in defaults and GeminiProvider static list to match (drop hyphen).

src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts (2)

78-89: Rename to “Flash Lite” (no hyphen) to match provider/defaults

Keeps UI labeling consistent across sources.

-      name: 'Gemini 2.5 Flash-Lite',
+      name: 'Gemini 2.5 Flash Lite',

1003-1006: Flush buffered legacy text when switching to the new “thought” format mid-stream

If new thought chunks arrive after some text has been buffered (old format path), that buffer won’t flush because final flush is skipped when new format is detected. Flush on transition to prevent content loss.

-          if ((part as any).thought === true && part.text) {
-            isNewThoughtFormatDetected = true
-            thoughtContent += part.text
+          if ((part as any).thought === true && part.text) {
+            // Transition: flush any pending buffer from old path before switching
+            if (!isNewThoughtFormatDetected && buffer) {
+              if (isInThinkTag) {
+                yield { type: 'reasoning', reasoning_content: buffer }
+              } else {
+                yield { type: 'text', content: buffer }
+              }
+              buffer = ''
+              isInThinkTag = false
+            }
+            isNewThoughtFormatDetected = true
+            thoughtContent += part.text
           } else if (part.text) {
             content += part.text
           }

Also applies to: 1034-1071, 1086-1086

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 76eb459 and 703136b.

📒 Files selected for processing (3)
  • src/main/presenter/configPresenter/modelDefaultSettings.ts (2 hunks)
  • src/main/presenter/configPresenter/providerModelSettings.ts (3 hunks)
  • src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts (6 hunks)
🧰 Additional context used
📓 Path-based instructions (9)
**/*.{js,jsx,ts,tsx}

📄 CodeRabbit inference engine (.cursor/rules/development-setup.mdc)

**/*.{js,jsx,ts,tsx}: Use OxLint for code linting
Write logs and comments in English

Files:

  • src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts
  • src/main/presenter/configPresenter/modelDefaultSettings.ts
  • src/main/presenter/configPresenter/providerModelSettings.ts
src/{main,renderer}/**/*.ts

📄 CodeRabbit inference engine (.cursor/rules/electron-best-practices.mdc)

src/{main,renderer}/**/*.ts: Use context isolation for improved security
Implement proper inter-process communication (IPC) patterns
Optimize application startup time with lazy loading
Implement proper error handling and logging for debugging

Files:

  • src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts
  • src/main/presenter/configPresenter/modelDefaultSettings.ts
  • src/main/presenter/configPresenter/providerModelSettings.ts
src/main/**/*.ts

📄 CodeRabbit inference engine (.cursor/rules/electron-best-practices.mdc)

Use Electron's built-in APIs for file system and native dialogs

From main to renderer, broadcast events via EventBus using mainWindow.webContents.send()

Files:

  • src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts
  • src/main/presenter/configPresenter/modelDefaultSettings.ts
  • src/main/presenter/configPresenter/providerModelSettings.ts
**/*.{ts,tsx}

📄 CodeRabbit inference engine (.cursor/rules/error-logging.mdc)

**/*.{ts,tsx}: Always handle potentially failing operations with try-catch
Provide meaningful error messages
Record detailed error logs
Degrade gracefully
Logs should include timestamp, log level, error code, error description, stack trace (where applicable), and relevant context
Log levels should include ERROR, WARN, INFO, and DEBUG
Do not swallow errors
Provide user-friendly error messages
Implement error retry mechanisms
Avoid logging sensitive information
Use structured logging
Set appropriate log levels

Enable and adhere to strict TypeScript type checking

Files:

  • src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts
  • src/main/presenter/configPresenter/modelDefaultSettings.ts
  • src/main/presenter/configPresenter/providerModelSettings.ts
src/main/presenter/llmProviderPresenter/providers/*.ts

📄 CodeRabbit inference engine (.cursor/rules/llm-agent-loop.mdc)

src/main/presenter/llmProviderPresenter/providers/*.ts: Each file in src/main/presenter/llmProviderPresenter/providers/*.ts should handle interaction with a specific LLM API, including request/response formatting, tool definition conversion, native/non-native tool call management, and standardizing output streams to a common event format.
Provider implementations must use a coreStream method that yields standardized stream events to decouple the main loop from provider-specific details.
The coreStream method in each Provider must perform a single streaming API request per conversation round and must not contain multi-round tool call loop logic.
Provider files should implement helper methods such as formatMessages, convertToProviderTools, parseFunctionCalls, and prepareFunctionCallPrompt as needed for provider-specific logic.
All provider implementations must parse provider-specific data chunks and yield standardized events for text, reasoning, tool calls, usage, errors, stop reasons, and image data.
When a provider does not support native function calling, it must prepare messages using prompt wrapping (e.g., prepareFunctionCallPrompt) before making the API call.
When a provider supports native function calling, MCP tools must be converted to the provider's format (e.g., using convertToProviderTools) and included in the API request.
Provider implementations should aggregate and yield usage events as part of the standardized stream.
Provider implementations should yield image data events in the standardized format when applicable.
Provider implementations should yield reasoning events in the standardized format when applicable.
Provider implementations should yield tool call events (tool_call_start, tool_call_chunk, tool_call_end) in the standardized format.
Provider implementations should yield stop events with appropriate stop_reason in the standardized format.
Provider implementations should yield error events in the standardized format...

Files:

  • src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts
src/main/**/*.{ts,js,tsx,jsx}

📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)

Main process code lives in src/main

Files:

  • src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts
  • src/main/presenter/configPresenter/modelDefaultSettings.ts
  • src/main/presenter/configPresenter/providerModelSettings.ts
src/**/*.{ts,tsx,vue}

📄 CodeRabbit inference engine (CLAUDE.md)

Use English for all logs and comments

Files:

  • src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts
  • src/main/presenter/configPresenter/modelDefaultSettings.ts
  • src/main/presenter/configPresenter/providerModelSettings.ts
src/main/presenter/**/*.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Maintain one presenter per functional domain in src/main/presenter/

Files:

  • src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts
  • src/main/presenter/configPresenter/modelDefaultSettings.ts
  • src/main/presenter/configPresenter/providerModelSettings.ts
src/main/presenter/configPresenter/**/*.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Centralize configuration logic under configPresenter/

Files:

  • src/main/presenter/configPresenter/modelDefaultSettings.ts
  • src/main/presenter/configPresenter/providerModelSettings.ts
🧠 Learnings (2)
📓 Common learnings
Learnt from: neoragex2002
PR: ThinkInAIXYZ/deepchat#550
File: src/main/presenter/mcpPresenter/inMemoryServers/meetingServer.ts:258-324
Timestamp: 2025-06-21T15:53:34.641Z
Learning: User neoragex2002 prefers concise, brief responses rather than chatty or verbose communication. Avoid lengthy explanations, excessive enthusiasm, or ASCII art in responses.
📚 Learning: 2025-08-26T14:13:46.601Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-26T14:13:46.601Z
Learning: Applies to src/main/presenter/configPresenter/providers.ts : Add new provider configuration entries in configPresenter/providers.ts

Applied to files:

  • src/main/presenter/configPresenter/providerModelSettings.ts
🔇 Additional comments (4)
src/main/presenter/configPresenter/modelDefaultSettings.ts (2)

351-361: Add GA image-preview entry — looks correct

Vision-only, functionCall/reasoning disabled, and token/context caps line up with provider config.


390-390: Confirm preview caps vs provider overrides

This sets preview maxTokens to 65535 (and contextLength 1048576). Provider overrides still carry 64000/1000000; align there to prevent conflicting quotas.
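
A hedged sketch of the requested alignment (field names assumed from the entry shapes quoted elsewhere in this thread; the preview model ID is Google's published one):

```ts
// providerModelSettings.ts override, brought in line with the defaults entry.
const geminiFlashLitePreviewOverride = {
  id: 'gemini-2.5-flash-lite-preview-06-17',
  maxTokens: 65535, // was 64000 in the provider override
  contextLength: 1048576 // was 1000000
}
```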

src/main/presenter/configPresenter/providerModelSettings.ts (2)

184-194: Add image-preview (vision-only) — OK

Matches new default entry; functionCall/reasoning disabled as expected.


2339-2349: OpenRouter image-preview entry — OK

Config mirrors gemini provider; vision-only and no function/tool call.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (4)
src/main/presenter/configPresenter/providerModelSettings.ts (4)

196-206: Preview limits synced with defaults — LGTM; unify naming

Good fix to 65535/1048576. For consistency with the GA entry, rename to “Flash Lite Preview” (drop the hyphen).

-        name: 'Gemini 2.5 Flash-Lite Preview',
+        name: 'Gemini 2.5 Flash Lite Preview',

219-225: Guard against substring match collision with preview model

Because getProviderSpecificModelConfig uses includes() matching, 'gemini-2.5-flash-lite' will also match the preview ID ('...flash-lite-preview-06-17') if ordering changes. Either keep preview before GA permanently, or switch to exact matching in getProviderSpecificModelConfig.

Apply (outside this hunk) to prefer exact matches:

-    if (config.match.some((matchStr) => lowerModelId.includes(matchStr.toLowerCase()))) {
+    if (config.match.some((matchStr) => lowerModelId === matchStr.toLowerCase())) {

If you want to keep includes(), add a code comment noting order-dependence right above both entries.
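
A standalone two-line illustration of the collision (not repo code):

```ts
// The GA match string is a substring of the preview ID, so includes()
// lets the GA entry capture the preview model when checked first.
console.log('gemini-2.5-flash-lite-preview-06-17'.includes('gemini-2.5-flash-lite')) // true
console.log('gemini-2.5-flash-lite-preview-06-17' === 'gemini-2.5-flash-lite') // false; exact match disambiguates
```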


4-20: Comments should be in English per repo guideline

Repo guideline: “Use English for all logs and comments” in src/**/*.{ts,tsx,vue}. Consider translating the Chinese comments in this header and provider sections in a follow-up PR.


184-194: Optional: align ID/match with “models/…” convention
Provider and default settings already share the same ID/match (so budgets stay in sync). If you’d like to follow the “models/…” naming convention, update both occurrences of

  • id: 'google/gemini-2.5-flash-image-preview'
  • match: ['google/gemini-2.5-flash-image-preview', 'gemini-2.5-flash-image-preview']
    to include 'models/gemini-2.5-flash-image-preview' in:
  • src/main/presenter/configPresenter/providerModelSettings.ts (lines ~185–190)
  • src/main/presenter/configPresenter/modelDefaultSettings.ts (lines ~350–357)
📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between 703136b and 0c402c1.

📒 Files selected for processing (1)
  • src/main/presenter/configPresenter/providerModelSettings.ts (3 hunks)
🧰 Additional context used
📓 Path-based instructions (8)
**/*.{js,jsx,ts,tsx}

📄 CodeRabbit inference engine (.cursor/rules/development-setup.mdc)

**/*.{js,jsx,ts,tsx}: Use OxLint for code linting
Write logs and comments in English

Files:

  • src/main/presenter/configPresenter/providerModelSettings.ts
src/{main,renderer}/**/*.ts

📄 CodeRabbit inference engine (.cursor/rules/electron-best-practices.mdc)

src/{main,renderer}/**/*.ts: Use context isolation for improved security
Implement proper inter-process communication (IPC) patterns
Optimize application startup time with lazy loading
Implement proper error handling and logging for debugging

Files:

  • src/main/presenter/configPresenter/providerModelSettings.ts
src/main/**/*.ts

📄 CodeRabbit inference engine (.cursor/rules/electron-best-practices.mdc)

Use Electron's built-in APIs for file system and native dialogs

From main to renderer, broadcast events via EventBus using mainWindow.webContents.send()

Files:

  • src/main/presenter/configPresenter/providerModelSettings.ts
**/*.{ts,tsx}

📄 CodeRabbit inference engine (.cursor/rules/error-logging.mdc)

**/*.{ts,tsx}: Always handle potentially failing operations with try-catch
Provide meaningful error messages
Record detailed error logs
Degrade gracefully
Logs should include timestamp, log level, error code, error description, stack trace (where applicable), and relevant context
Log levels should include ERROR, WARN, INFO, and DEBUG
Do not swallow errors
Provide user-friendly error messages
Implement error retry mechanisms
Avoid logging sensitive information
Use structured logging
Set appropriate log levels

Enable and adhere to strict TypeScript type checking

Files:

  • src/main/presenter/configPresenter/providerModelSettings.ts
src/main/**/*.{ts,js,tsx,jsx}

📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)

Main process code lives in src/main

Files:

  • src/main/presenter/configPresenter/providerModelSettings.ts
src/**/*.{ts,tsx,vue}

📄 CodeRabbit inference engine (CLAUDE.md)

Use English for all logs and comments

Files:

  • src/main/presenter/configPresenter/providerModelSettings.ts
src/main/presenter/**/*.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Maintain one presenter per functional domain in src/main/presenter/

Files:

  • src/main/presenter/configPresenter/providerModelSettings.ts
src/main/presenter/configPresenter/**/*.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Centralize configuration logic under configPresenter/

Files:

  • src/main/presenter/configPresenter/providerModelSettings.ts
🧠 Learnings (2)
📓 Common learnings
Learnt from: neoragex2002
PR: ThinkInAIXYZ/deepchat#550
File: src/main/presenter/mcpPresenter/inMemoryServers/meetingServer.ts:258-324
Timestamp: 2025-06-21T15:53:34.641Z
Learning: User neoragex2002 prefers concise, brief responses rather than chatty or verbose communication. Avoid lengthy explanations, excessive enthusiasm, or ASCII art in responses.
📚 Learning: 2025-08-26T14:13:46.601Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-26T14:13:46.601Z
Learning: Applies to src/main/presenter/configPresenter/providers.ts : Add new provider configuration entries in configPresenter/providers.ts

Applied to files:

  • src/main/presenter/configPresenter/providerModelSettings.ts
🪛 Biome (2.1.2)
src/main/presenter/configPresenter/providerModelSettings.ts

[error] 224-224: expected , but instead found [

Remove [

(parse)

🔇 Additional comments (2)
src/main/presenter/configPresenter/providerModelSettings.ts (2)

219-225: Drop the parse-error warning—this snippet is syntactically valid. Verified that the match: [...] array at lines 223–226 has correct commas and no stray characters; TypeScript/Biome parses it without error.

Likely an incorrect or invalid review comment.


2340-2350: OpenRouter’s Gemini Flash Image Preview entry matches the Gemini provider and defaults
The openrouter entry for google/gemini-2.5-flash-image-preview has identical temperature, maxTokens, contextLength, vision, functionCall, and reasoning flags to both the Gemini provider’s entry and the model default settings.

zerob13 and others added 2 commits August 28, 2025 14:17
* feat: add TokenFlux.ai provider support

Add comprehensive TokenFlux.ai integration with OpenAI-compatible API support:
- Implement TokenFluxProvider extending OpenAICompatibleProvider
- Add provider configuration with official endpoints and documentation links
- Support dynamic model fetching with TokenFlux-specific format parsing
- Include vision and function calling capability detection from API response
- Add TokenFlux icon integration in ModelIcon component
- Enable seamless UI integration with existing provider settings

Resolves ThinkInAIXYZ#613

* fix: lint
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
src/main/presenter/threadPresenter/contentEnricher.ts (3)

171-183: Don't swallow errors; also fix the 0-length limit logic (bug).

  • The truthy check prevents configValue === 0 from ever being honored.
  • Bindingless catch hides the error; violates logging guidelines (English + structured logs + levels).

Apply:

-    try {
-      const configValue = presenter.configPresenter.getSetting<number>('webContentLengthLimit')
-      if (configValue && typeof configValue === 'number') {
-        if (configValue === 0) {
-          // 0 表示不限制长度
-          return mainContent
-        } else if (configValue > 0 && configValue <= 50000) {
-          lengthLimit = configValue
-        }
-      }
-    } catch {
-      // 忽略错误,使用默认值
-    }
+    try {
+      const configValue = presenter.configPresenter.getSetting<number | undefined>('webContentLengthLimit')
+      if (typeof configValue === 'number') {
+        if (configValue === 0) {
+          // 0 means no limit
+          return mainContent
+        } else if (configValue > 0 && configValue <= 50000) {
+          lengthLimit = configValue
+        }
+      }
+    } catch (err: unknown) {
+      console.warn(JSON.stringify({
+        ts: new Date().toISOString(),
+        level: 'WARN',
+        code: 'WEB_CONTENT_LENGTH_LIMIT_FALLBACK',
+        message: 'Failed to read webContentLengthLimit; using default lengthLimit=3000.',
+        error: err instanceof Error ? err.message : String(err)
+      }))
+    }

49-56: Proxy not applied to HTTPS requests.

Axios needs httpsAgent for https URLs; currently only httpAgent is set.

       const response = await axios.get(url, {
         timeout,
         headers: {
           'User-Agent':
             'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36'
         },
-        httpAgent: proxyUrl ? new HttpsProxyAgent(proxyUrl) : undefined
+        httpAgent: proxyUrl ? new HttpsProxyAgent(proxyUrl) : undefined,
+        httpsAgent: proxyUrl ? new HttpsProxyAgent(proxyUrl) : undefined
       })

85-86: Use English and structured logs for errors (per repo guidelines).

Convert Chinese log messages to English and structure them.

-      console.error(`提取URL内容失败 ${url}:`, error instanceof Error ? error.message : '')
+      console.error(JSON.stringify({
+        ts: new Date().toISOString(),
+        level: 'ERROR',
+        code: 'URL_EXTRACT_FAILED',
+        url,
+        message: error instanceof Error ? error.message : String(error)
+      }))
-      } catch (error) {
-        // 如果URL构建失败,使用原始href
-        console.error('构建URL失败:', error)
-      }
+      } catch (error) {
+        // If URL build fails, use original href
+        console.warn(JSON.stringify({
+          ts: new Date().toISOString(),
+          level: 'WARN',
+          code: 'URL_BUILD_FAILED',
+          baseUrl,
+          href,
+          message: error instanceof Error ? error.message : String(error)
+        }))
+      }

Also applies to: 281-284

🧹 Nitpick comments (2)
src/main/presenter/threadPresenter/contentEnricher.ts (1)

153-158: Remove redundant trim().

Minor cleanup; no behavior change.

     mainContent = mainContent
       .replace(/[\r\n]+/g, ' ')
       .replace(/\s+/g, ' ')
-      .trim()
-      .trim()
+      .trim()
src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts (1)

137-146: Harden model parsing; guard for unexpected shapes.
Add basic shape checks before iterating and default safely when fields are missing.

-      const tokenfluxResponse = response as unknown as TokenFluxModelsResponse
-
-      for (const model of tokenfluxResponse.data) {
+      const list = Array.isArray((response as any)?.data) ? (response as any).data : (response as any)?.data?.data
+      if (!Array.isArray(list)) {
+        console.warn('TokenFlux models.list returned unexpected shape; falling back.')
+        return super.fetchOpenAIModels(options)
+      }
+      for (const model of list as any[]) {
         // Extract model information
-        const modelId = model.id
-        const modelName = model.name || modelId
-        const description = model.description || ''
+        const modelId = String(model.id)
+        const modelName = (model.name as string) || modelId
+        const description = (model.description as string) || ''
@@
-        const hasVision = model.supports_vision || false
+        const hasVision = Boolean(model.supports_vision)
@@
-        const contextLength = model.context_length || existingConfig.contextLength || 4096
+        const contextLength =
+          Number(model.context_length) > 0 ? Number(model.context_length) : existingConfig.contextLength || 4096

Also applies to: 147-156, 175-186, 187-205

📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between 0c402c1 and 860aec4.

⛔ Files ignored due to path filters (1)
  • src/renderer/src/assets/llm-icons/tokenflux-color.svg is excluded by !**/*.svg
📒 Files selected for processing (6)
  • src/main/presenter/configPresenter/providers.ts (1 hunks)
  • src/main/presenter/llmProviderPresenter/index.ts (2 hunks)
  • src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts (1 hunks)
  • src/main/presenter/threadPresenter/contentEnricher.ts (1 hunks)
  • src/renderer/src/components/icons/ModelIcon.vue (2 hunks)
  • src/renderer/src/components/markdown/MarkdownRenderer.vue (4 hunks)
🧰 Additional context used
📓 Path-based instructions (20)
src/renderer/src/**/*

📄 CodeRabbit inference engine (.cursor/rules/i18n.mdc)

src/renderer/src/**/*: All user-facing strings must use i18n keys (avoid hardcoded user-visible text in code)
Use the 'vue-i18n' framework for all internationalization in the renderer
Ensure all user-visible text in the renderer uses the translation system

Files:

  • src/renderer/src/components/icons/ModelIcon.vue
  • src/renderer/src/components/markdown/MarkdownRenderer.vue
src/renderer/**/*.{vue,ts,js,tsx,jsx}

📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)

Renderer process code lives in src/renderer

Files:

  • src/renderer/src/components/icons/ModelIcon.vue
  • src/renderer/src/components/markdown/MarkdownRenderer.vue
src/renderer/src/**/*.{vue,ts,tsx,js,jsx}

📄 CodeRabbit inference engine (.cursor/rules/vue-best-practices.mdc)

src/renderer/src/**/*.{vue,ts,tsx,js,jsx}: Use the Composition API for better code organization and reusability
Implement proper state management with Pinia
Utilize Vue Router for navigation and route management
Leverage Vue's built-in reactivity system for efficient data handling

Files:

  • src/renderer/src/components/icons/ModelIcon.vue
  • src/renderer/src/components/markdown/MarkdownRenderer.vue
src/renderer/src/**/*.vue

📄 CodeRabbit inference engine (.cursor/rules/vue-best-practices.mdc)

Use scoped styles to prevent CSS conflicts between components

src/renderer/src/**/*.vue: Follow existing component patterns when creating new UI components
Ensure responsive design with Tailwind CSS for new UI components
Add proper error handling and loading states to UI components

Files:

  • src/renderer/src/components/icons/ModelIcon.vue
  • src/renderer/src/components/markdown/MarkdownRenderer.vue
src/renderer/**/*.{ts,tsx,vue}

📄 CodeRabbit inference engine (.cursor/rules/vue-shadcn.mdc)

src/renderer/**/*.{ts,tsx,vue}: Use descriptive variable names with auxiliary verbs (e.g., isLoading, hasError).
Use TypeScript for all code; prefer types over interfaces.
Avoid enums; use const objects instead.
Use arrow functions for methods and computed properties.
Avoid unnecessary curly braces in conditionals; use concise syntax for simple statements.

Files:

  • src/renderer/src/components/icons/ModelIcon.vue
  • src/renderer/src/components/markdown/MarkdownRenderer.vue
src/renderer/**/*.{vue,ts}

📄 CodeRabbit inference engine (.cursor/rules/vue-shadcn.mdc)

Implement lazy loading for routes and components.

Files:

  • src/renderer/src/components/icons/ModelIcon.vue
  • src/renderer/src/components/markdown/MarkdownRenderer.vue
src/renderer/**/*.{ts,vue}

📄 CodeRabbit inference engine (.cursor/rules/vue-shadcn.mdc)

src/renderer/**/*.{ts,vue}: Use useFetch and useAsyncData for data fetching.
Implement SEO best practices using Nuxt's useHead and useSeoMeta.

src/renderer/**/*.{ts,vue}: Use Pinia for frontend state management
From renderer to main, call presenters via the usePresenter.ts composable

Files:

  • src/renderer/src/components/icons/ModelIcon.vue
  • src/renderer/src/components/markdown/MarkdownRenderer.vue
src/**/*.{ts,tsx,vue}

📄 CodeRabbit inference engine (CLAUDE.md)

Use English for all logs and comments

Files:

  • src/renderer/src/components/icons/ModelIcon.vue
  • src/main/presenter/llmProviderPresenter/index.ts
  • src/main/presenter/configPresenter/providers.ts
  • src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts
  • src/main/presenter/threadPresenter/contentEnricher.ts
  • src/renderer/src/components/markdown/MarkdownRenderer.vue
src/renderer/**/*.vue

📄 CodeRabbit inference engine (CLAUDE.md)

src/renderer/**/*.vue: Use Vue 3 Composition API for all components
Use Tailwind CSS with scoped styles for component styling

Files:

  • src/renderer/src/components/icons/ModelIcon.vue
  • src/renderer/src/components/markdown/MarkdownRenderer.vue
src/renderer/src/**

📄 CodeRabbit inference engine (CLAUDE.md)

Organize UI components by feature under src/renderer/src/

Files:

  • src/renderer/src/components/icons/ModelIcon.vue
  • src/renderer/src/components/markdown/MarkdownRenderer.vue
**/*.{js,jsx,ts,tsx}

📄 CodeRabbit inference engine (.cursor/rules/development-setup.mdc)

**/*.{js,jsx,ts,tsx}: Use OxLint for code linting
Write logs and comments in English

Files:

  • src/main/presenter/llmProviderPresenter/index.ts
  • src/main/presenter/configPresenter/providers.ts
  • src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts
  • src/main/presenter/threadPresenter/contentEnricher.ts
src/{main,renderer}/**/*.ts

📄 CodeRabbit inference engine (.cursor/rules/electron-best-practices.mdc)

src/{main,renderer}/**/*.ts: Use context isolation for improved security
Implement proper inter-process communication (IPC) patterns
Optimize application startup time with lazy loading
Implement proper error handling and logging for debugging

Files:

  • src/main/presenter/llmProviderPresenter/index.ts
  • src/main/presenter/configPresenter/providers.ts
  • src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts
  • src/main/presenter/threadPresenter/contentEnricher.ts
src/main/**/*.ts

📄 CodeRabbit inference engine (.cursor/rules/electron-best-practices.mdc)

Use Electron's built-in APIs for file system and native dialogs

From main to renderer, broadcast events via EventBus using mainWindow.webContents.send()

Files:

  • src/main/presenter/llmProviderPresenter/index.ts
  • src/main/presenter/configPresenter/providers.ts
  • src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts
  • src/main/presenter/threadPresenter/contentEnricher.ts
**/*.{ts,tsx}

📄 CodeRabbit inference engine (.cursor/rules/error-logging.mdc)

**/*.{ts,tsx}: Always handle potentially failing operations with try-catch
Provide meaningful error messages
Record detailed error logs
Degrade gracefully
Logs should include timestamp, log level, error code, error description, stack trace (where applicable), and relevant context
Log levels should include ERROR, WARN, INFO, and DEBUG
Do not swallow errors
Provide user-friendly error messages
Implement error retry mechanisms
Avoid logging sensitive information
Use structured logging
Set appropriate log levels

Enable and adhere to strict TypeScript type checking

Files:

  • src/main/presenter/llmProviderPresenter/index.ts
  • src/main/presenter/configPresenter/providers.ts
  • src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts
  • src/main/presenter/threadPresenter/contentEnricher.ts
src/main/presenter/llmProviderPresenter/index.ts

📄 CodeRabbit inference engine (.cursor/rules/llm-agent-loop.mdc)

src/main/presenter/llmProviderPresenter/index.ts: src/main/presenter/llmProviderPresenter/index.ts should manage the overall Agent loop, conversation history, tool execution via McpPresenter, and frontend communication via eventBus.
The main Agent loop in llmProviderPresenter/index.ts should handle multi-round LLM calls and tool usage, maintaining conversation state and controlling the loop with needContinueConversation and toolCallCount.
The main Agent loop should send standardized STREAM_EVENTS (RESPONSE, END, ERROR) to the frontend via eventBus.
The main Agent loop should buffer text content, handle tool call events, format tool results for the next LLM call, and manage conversation continuation logic.

Agent Loop layer must manage conversation flow, execute tools via McpPresenter, and standardize events to the frontend

Files:

  • src/main/presenter/llmProviderPresenter/index.ts
src/main/**/*.{ts,js,tsx,jsx}

📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)

Main process code lives in src/main

Files:

  • src/main/presenter/llmProviderPresenter/index.ts
  • src/main/presenter/configPresenter/providers.ts
  • src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts
  • src/main/presenter/threadPresenter/contentEnricher.ts
src/main/presenter/**/*.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Maintain one presenter per functional domain in src/main/presenter/

Files:

  • src/main/presenter/llmProviderPresenter/index.ts
  • src/main/presenter/configPresenter/providers.ts
  • src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts
  • src/main/presenter/threadPresenter/contentEnricher.ts
src/main/presenter/configPresenter/**/*.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Centralize configuration logic under configPresenter/

Files:

  • src/main/presenter/configPresenter/providers.ts
src/main/presenter/configPresenter/providers.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Add new provider configuration entries in configPresenter/providers.ts

Files:

  • src/main/presenter/configPresenter/providers.ts
src/main/presenter/llmProviderPresenter/providers/*.ts

📄 CodeRabbit inference engine (.cursor/rules/llm-agent-loop.mdc)

src/main/presenter/llmProviderPresenter/providers/*.ts: Each file in src/main/presenter/llmProviderPresenter/providers/*.ts should handle interaction with a specific LLM API, including request/response formatting, tool definition conversion, native/non-native tool call management, and standardizing output streams to a common event format.
Provider implementations must use a coreStream method that yields standardized stream events to decouple the main loop from provider-specific details.
The coreStream method in each Provider must perform a single streaming API request per conversation round and must not contain multi-round tool call loop logic.
Provider files should implement helper methods such as formatMessages, convertToProviderTools, parseFunctionCalls, and prepareFunctionCallPrompt as needed for provider-specific logic.
All provider implementations must parse provider-specific data chunks and yield standardized events for text, reasoning, tool calls, usage, errors, stop reasons, and image data.
When a provider does not support native function calling, it must prepare messages using prompt wrapping (e.g., prepareFunctionCallPrompt) before making the API call.
When a provider supports native function calling, MCP tools must be converted to the provider's format (e.g., using convertToProviderTools) and included in the API request.
Provider implementations should aggregate and yield usage events as part of the standardized stream.
Provider implementations should yield image data events in the standardized format when applicable.
Provider implementations should yield reasoning events in the standardized format when applicable.
Provider implementations should yield tool call events (tool_call_start, tool_call_chunk, tool_call_end) in the standardized format.
Provider implementations should yield stop events with appropriate stop_reason in the standardized format.
Provider implementations should yield error events in the standardized format...

Files:

  • src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts
🧠 Learnings (20)
📓 Common learnings
Learnt from: neoragex2002
PR: ThinkInAIXYZ/deepchat#550
File: src/main/presenter/mcpPresenter/inMemoryServers/meetingServer.ts:258-324
Timestamp: 2025-06-21T15:53:34.641Z
Learning: User neoragex2002 prefers concise, brief responses rather than chatty or verbose communication. Avoid lengthy explanations, excessive enthusiasm, or ASCII art in responses.
📚 Learning: 2025-07-23T00:45:57.322Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-07-23T00:45:57.322Z
Learning: Applies to src/renderer/**/*.{vue} : Use Iconify/Vue for icon implementation.

Applied to files:

  • src/renderer/src/components/icons/ModelIcon.vue
📚 Learning: 2025-07-23T00:45:57.322Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-07-23T00:45:57.322Z
Learning: Applies to src/renderer/**/*.{vue} : Import Icon component from Iconify/Vue.

Applied to files:

  • src/renderer/src/components/icons/ModelIcon.vue
📚 Learning: 2025-07-23T00:45:57.322Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-07-23T00:45:57.322Z
Learning: Applies to src/renderer/**/*.{vue} : Use the Icon component with lucide icons.

Applied to files:

  • src/renderer/src/components/icons/ModelIcon.vue
📚 Learning: 2025-07-23T00:45:57.322Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-07-23T00:45:57.322Z
Learning: Applies to src/renderer/**/*.{vue} : For cases where lucide icons don't meet requirements, use other Iconify collections as fallback.

Applied to files:

  • src/renderer/src/components/icons/ModelIcon.vue
📚 Learning: 2025-07-23T00:45:57.322Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-07-23T00:45:57.322Z
Learning: Applies to src/renderer/**/*.{vue} : Prefer "lucide:" icon family as the primary choice.

Applied to files:

  • src/renderer/src/components/icons/ModelIcon.vue
📚 Learning: 2025-07-23T00:45:57.322Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-07-23T00:45:57.322Z
Learning: Applies to src/renderer/**/*.{vue} : Follow naming pattern: {collection}:{icon-name} for icons.

Applied to files:

  • src/renderer/src/components/icons/ModelIcon.vue
📚 Learning: 2025-08-28T05:55:31.458Z
Learnt from: zerob13
PR: ThinkInAIXYZ/deepchat#804
File: src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts:153-156
Timestamp: 2025-08-28T05:55:31.458Z
Learning: TokenFlux models generally support function calling by default, so it's reasonable to assume hasFunctionCalling = true for TokenFlux provider implementations in src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts

Applied to files:

  • src/main/presenter/llmProviderPresenter/index.ts
  • src/main/presenter/configPresenter/providers.ts
  • src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts
📚 Learning: 2025-08-26T14:13:46.601Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-26T14:13:46.601Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : When adding a new LLM provider, create a provider file under providers/

Applied to files:

  • src/main/presenter/llmProviderPresenter/index.ts
  • src/main/presenter/configPresenter/providers.ts
📚 Learning: 2025-08-26T14:13:46.601Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-26T14:13:46.601Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Each LLM provider must implement provider-specific API interactions, convert MCP tools, and normalize streaming responses

Applied to files:

  • src/main/presenter/llmProviderPresenter/index.ts
  • src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts
📚 Learning: 2025-08-26T14:13:46.601Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-26T14:13:46.601Z
Learning: Applies to src/main/presenter/configPresenter/providers.ts : Add new provider configuration entries in configPresenter/providers.ts

Applied to files:

  • src/main/presenter/llmProviderPresenter/index.ts
  • src/main/presenter/configPresenter/providers.ts
📚 Learning: 2025-08-26T14:13:46.601Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-26T14:13:46.601Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Implement a coreStream method for new providers following the standardized event interface

Applied to files:

  • src/main/presenter/llmProviderPresenter/index.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Each file in `src/main/presenter/llmProviderPresenter/providers/*.ts` should handle interaction with a specific LLM API, including request/response formatting, tool definition conversion, native/non-native tool call management, and standardizing output streams to a common event format.

Applied to files:

  • src/main/presenter/llmProviderPresenter/index.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations must use a `coreStream` method that yields standardized stream events to decouple the main loop from provider-specific details.

Applied to files:

  • src/main/presenter/llmProviderPresenter/index.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations should yield events asynchronously using the async generator pattern.

Applied to files:

  • src/main/presenter/llmProviderPresenter/index.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations should aggregate and yield usage events as part of the standardized stream.

Applied to files:

  • src/main/presenter/llmProviderPresenter/index.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider files should implement helper methods such as `formatMessages`, `convertToProviderTools`, `parseFunctionCalls`, and `prepareFunctionCallPrompt` as needed for provider-specific logic.

Applied to files:

  • src/main/presenter/llmProviderPresenter/index.ts
  • src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts
📚 Learning: 2025-07-23T00:45:57.322Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-07-23T00:45:57.322Z
Learning: Applies to src/renderer/**/*.{vue} : Use template syntax for declarative rendering.

Applied to files:

  • src/renderer/src/components/markdown/MarkdownRenderer.vue
📚 Learning: 2025-07-21T01:47:11.608Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/project-structure.mdc:0-0
Timestamp: 2025-07-21T01:47:11.608Z
Learning: Applies to src/renderer/**/*.{vue,ts,js,tsx,jsx} : Renderer process code lives in `src/renderer`

Applied to files:

  • src/renderer/src/components/markdown/MarkdownRenderer.vue
📚 Learning: 2025-07-23T00:45:57.322Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-07-23T00:45:57.322Z
Learning: Applies to src/renderer/**/*.{vue} : Use <script setup> syntax for concise component definitions.

Applied to files:

  • src/renderer/src/components/markdown/MarkdownRenderer.vue
🧬 Code graph analysis (2)
src/main/presenter/llmProviderPresenter/index.ts (1)
src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts (1)
  • TokenFluxProvider (26-213)
src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts (3)
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts (1)
  • OpenAICompatibleProvider (68-1464)
src/shared/presenter.d.ts (5)
  • LLM_PROVIDER (495-515)
  • ChatMessage (1272-1284)
  • LLMResponse (842-859)
  • KeyStatus (1361-1367)
  • MODEL_META (481-494)
src/main/presenter/configPresenter/index.ts (1)
  • ConfigPresenter (77-1208)
🔇 Additional comments (8)
src/renderer/src/components/icons/ModelIcon.vue (2)

43-43: TokenFlux icon import looks correct.
Import path and suffix are consistent with existing icons.


115-115: Icon mapping added — verify dark theme invert.
If the tokenflux icon is dark-on-dark in dark mode, consider adding it to the invert list; otherwise keep as-is.

Do you see low contrast for TokenFlux on dark theme? If yes, add:

-    props.modelId.toLowerCase().includes('aws-bedrock')
+    props.modelId.toLowerCase().includes('aws-bedrock') ||
+    props.modelId.toLowerCase().includes('tokenflux')
src/renderer/src/components/markdown/MarkdownRenderer.vue (1)

71-78: Translate comment to English per repo guidelines (logs/comments in English).
Apply:

-  /*
-    精准定位到那个被错误地渲染在 <a> 标签内部的 <div>,
-    并强制其以行内方式显示,从而修正换行 bug。
-    这可以保留链接组件原有的所有样式(包括颜色)。
-  */
+  /* Fix wrapping when <div class="markdown-renderer"> renders inside <a>:
+     force inline display to preserve link styles (including color). */
src/main/presenter/llmProviderPresenter/index.ts (2)

22-22: Import of TokenFluxProvider — OK.
No side effects introduced.


170-172: Provider factory branch for tokenflux — OK.
Early return by id avoids falling back to generic OpenAI handler.
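
A sketch of what that branch looks like (import paths and the constructor signature are assumptions based on the code-graph summary above):

```ts
import { TokenFluxProvider } from './providers/tokenfluxProvider'
import type { LLM_PROVIDER } from '@shared/presenter'
import type { ConfigPresenter } from '../configPresenter'

// Factory branch: return the dedicated provider for the tokenflux id so it
// does not fall through to the generic OpenAI-compatible handler.
function createProvider(provider: LLM_PROVIDER, configPresenter: ConfigPresenter) {
  if (provider.id === 'tokenflux') {
    return new TokenFluxProvider(provider, configPresenter)
  }
  return undefined // other ids handled elsewhere
}
```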

src/main/presenter/configPresenter/providers.ts (1)

96-111: TokenFlux provider entry added — confirm endpoints.
Base URL and dashboard/docs links look plausible; please verify the exact v1 endpoint.

Would you confirm TokenFlux’s current API base is https://tokenflux.ai/v1 and models path is /models?

src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts (2)

153-156: Function calling default = true — validate assumption.
If any TokenFlux model lacks native FC, downstream tool flow may break.

Please confirm all TokenFlux chat models used here support function calling; otherwise set per-model flags.
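
If per-model flags turn out to be needed, a hedged sketch (supports_tools is a hypothetical field name; only supports_vision is demonstrably present in the TokenFlux /models payload per this PR):

```ts
type TokenFluxModel = { id: string; supports_vision?: boolean; supports_tools?: boolean }

// Derive function calling per model instead of a blanket default.
const hasFunctionCalling = (model: TokenFluxModel): boolean =>
  typeof model.supports_tools === 'boolean'
    ? model.supports_tools
    : true // current default assumption for TokenFlux models
```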


31-38: Delegation to openAICompletion — OK.
Matches project conventions for OpenAI-compatible providers.

Also applies to: 59-76

Comment on lines +82 to +109
public async getKeyStatus(): Promise<KeyStatus> {
  if (!this.provider.apiKey) {
    throw new Error('API key is required')
  }

  // TokenFlux uses OpenAI-compatible API, so we can use the models endpoint for key validation
  const response = await fetch(`${this.provider.baseUrl}/models`, {
    method: 'GET',
    headers: {
      Authorization: `Bearer ${this.provider.apiKey}`,
      'Content-Type': 'application/json'
    }
  })

  if (!response.ok) {
    const errorText = await response.text()
    throw new Error(
      `TokenFlux API key check failed: ${response.status} ${response.statusText} - ${errorText}`
    )
  }

  // TokenFlux doesn't provide quota information in the models endpoint response
  // So we return a simple success status
  return {
    limit_remaining: 'Available',
    remainNum: undefined
  }
}
Contributor

@coderabbitai coderabbitai bot Aug 28, 2025


🛠️ Refactor suggestion

Use OpenAI client for key check to honor proxy/timeout.
Direct fetch bypasses proxy/timeouts configured in OpenAI client; reuse this.openai.models.list.

-    // TokenFlux uses OpenAI-compatible API, so we can use the models endpoint for key validation
-    const response = await fetch(`${this.provider.baseUrl}/models`, {
-      method: 'GET',
-      headers: {
-        Authorization: `Bearer ${this.provider.apiKey}`,
-        'Content-Type': 'application/json'
-      }
-    })
-
-    if (!response.ok) {
-      const errorText = await response.text()
-      throw new Error(
-        `TokenFlux API key check failed: ${response.status} ${response.statusText} - ${errorText}`
-      )
-    }
+    // Use OpenAI client to respect proxy and standardize errors
+    await this.openai.models.list({ timeout: 15_000 })
🤖 Prompt for AI Agents
In src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts around
lines 82 to 109, the current key-check logic uses a direct fetch call which
bypasses the configured OpenAI client (and thus proxy/timeouts); replace the
fetch call with a call to this.openai.models.list() and await it, catch any
errors thrown by the OpenAI client and rethrow a descriptive Error containing
the status/message/body as available, and keep returning the same KeyStatus
object ({ limit_remaining: 'Available', remainNum: undefined }) on success so
the check honors the client's proxy/timeout configuration.

Contributor Author

@neoragex2002 neoragex2002 Aug 28, 2025


left for upstream to resolve

Contributor


@neoragex2002 Understood, leaving the fetch approach as-is for now.


🧠 Learnings used
Learnt from: zerob13
PR: ThinkInAIXYZ/deepchat#804
File: src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts:153-156
Timestamp: 2025-08-28T05:55:31.458Z
Learning: TokenFlux models generally support function calling by default, so it's reasonable to assume hasFunctionCalling = true for TokenFlux provider implementations in src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts

Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-26T14:13:46.601Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Each LLM provider must implement provider-specific API interactions, convert MCP tools, and normalize streaming responses

Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : All provider implementations must parse provider-specific data chunks and yield standardized events for text, reasoning, tool calls, usage, errors, stop reasons, and image data.

Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Each file in `src/main/presenter/llmProviderPresenter/providers/*.ts` should handle interaction with a specific LLM API, including request/response formatting, tool definition conversion, native/non-native tool call management, and standardizing output streams to a common event format.

Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations should yield events asynchronously using the async generator pattern.

Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider files should implement helper methods such as `formatMessages`, `convertToProviderTools`, `parseFunctionCalls`, and `prepareFunctionCallPrompt` as needed for provider-specific logic.

Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-26T14:13:46.601Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : Agent Loop layer must manage conversation flow, execute tools via McpPresenter, and standardize events to the frontend

Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations should aggregate and yield usage events as part of the standardized stream.

Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : The main Agent loop should buffer text content, handle tool call events, format tool results for the next LLM call, and manage conversation continuation logic.

Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : When a provider does not support native function calling, it must prepare messages using prompt wrapping (e.g., `prepareFunctionCallPrompt`) before making the API call.

Learnt from: neoragex2002
PR: ThinkInAIXYZ/deepchat#550
File: src/main/presenter/mcpPresenter/inMemoryServers/meetingServer.ts:258-324
Timestamp: 2025-06-21T15:53:34.641Z
Learning: User neoragex2002 prefers concise, brief responses rather than chatty or verbose communication. Avoid lengthy explanations, excessive enthusiasm, or ASCII art in responses.

neoragex2002 and others added 2 commits August 28, 2025 14:25
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts (3)

91-101: Remove duplicate static entry (gemini-2.5-flash-lite)

GEMINI_MODELS includes the same id twice; leads to duplicates downstream.

-    {
-      id: 'models/gemini-2.5-flash-lite',
-      name: 'Gemini 2.5 Flash-Lite',
-      group: 'default',
-      providerId: 'gemini',
-      isCustom: false,
-      contextLength: 1048576,
-      maxTokens: 65535,
-      vision: true,
-      functionCall: true,
-      reasoning: true
-    },

893-899: Detect image-generation by type, not hardcoded id

This misses new 2.5 image-preview. Use modelConfig.type so both 2.0 and 2.5 routes work.

-    const isImageGenerationModel = modelId === 'models/gemini-2.0-flash-preview-image-generation'
+    const isImageGenerationModel = modelConfig.type === ModelType.ImageGeneration

386-393: Enum comparison bug: thresholds compared to strings

threshold is an enum value; string compares never match, so BLOCK_NONE/UNSPECIFIED aren’t filtered.

-        if (
-          threshold &&
-          category &&
-          threshold !== 'BLOCK_NONE' &&
-          threshold !== 'HARM_BLOCK_THRESHOLD_UNSPECIFIED'
-        ) {
+        if (
+          threshold !== undefined &&
+          category &&
+          threshold !== HarmBlockThreshold.BLOCK_NONE &&
+          threshold !== HarmBlockThreshold.HARM_BLOCK_THRESHOLD_UNSPECIFIED
+        ) {
           safetySettings.push({ category, threshold })
         }
♻️ Duplicate comments (1)
src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts (1)

933-944: Guard tool payloads by model function-call support

Avoid sending tools to models that disable function calling (e.g., flash-image-preview).

-    if (geminiTools && geminiTools.length > 0) {
+    if (geminiTools && geminiTools.length > 0 && modelConfig.functionCall) {
       requestParams.config = {
         ...requestParams.config,
         tools: geminiTools,
         toolConfig: {
           functionCallingConfig: {
            mode: FunctionCallingConfigMode.AUTO // let the model decide whether to call tools
           }
         }
       }
     }
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 860aec4 and c09ab1f.

📒 Files selected for processing (4)
  • src/main/presenter/configPresenter/modelDefaultSettings.ts (3 hunks)
  • src/main/presenter/configPresenter/providerModelSettings.ts (6 hunks)
  • src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts (10 hunks)
  • src/renderer/src/components/markdown/MarkdownRenderer.vue (4 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • src/renderer/src/components/markdown/MarkdownRenderer.vue
🧰 Additional context used
📓 Path-based instructions (9)
**/*.{js,jsx,ts,tsx}

📄 CodeRabbit inference engine (.cursor/rules/development-setup.mdc)

**/*.{js,jsx,ts,tsx}: Use OxLint for code linting
Write logs and comments in English

Files:

  • src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts
  • src/main/presenter/configPresenter/providerModelSettings.ts
  • src/main/presenter/configPresenter/modelDefaultSettings.ts
src/{main,renderer}/**/*.ts

📄 CodeRabbit inference engine (.cursor/rules/electron-best-practices.mdc)

src/{main,renderer}/**/*.ts: Use context isolation for improved security
Implement proper inter-process communication (IPC) patterns
Optimize application startup time with lazy loading
Implement proper error handling and logging for debugging

Files:

  • src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts
  • src/main/presenter/configPresenter/providerModelSettings.ts
  • src/main/presenter/configPresenter/modelDefaultSettings.ts
src/main/**/*.ts

📄 CodeRabbit inference engine (.cursor/rules/electron-best-practices.mdc)

Use Electron's built-in APIs for file system and native dialogs

From main to renderer, broadcast events via EventBus using mainWindow.webContents.send()

Files:

  • src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts
  • src/main/presenter/configPresenter/providerModelSettings.ts
  • src/main/presenter/configPresenter/modelDefaultSettings.ts
**/*.{ts,tsx}

📄 CodeRabbit inference engine (.cursor/rules/error-logging.mdc)

**/*.{ts,tsx}: Always use try-catch to handle potential errors
Provide meaningful error messages
Log detailed error information
Degrade gracefully on failure
Logs should include timestamp, log level, error code, error description, stack trace (where applicable), and relevant context
Log levels should include ERROR, WARN, INFO, and DEBUG
Do not swallow errors
Provide user-friendly error messages
Implement error retry mechanisms
Avoid logging sensitive information
Use structured logging
Set appropriate log levels

Enable and adhere to strict TypeScript type checking

Files:

  • src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts
  • src/main/presenter/configPresenter/providerModelSettings.ts
  • src/main/presenter/configPresenter/modelDefaultSettings.ts
src/main/presenter/llmProviderPresenter/providers/*.ts

📄 CodeRabbit inference engine (.cursor/rules/llm-agent-loop.mdc)

src/main/presenter/llmProviderPresenter/providers/*.ts: Each file in src/main/presenter/llmProviderPresenter/providers/*.ts should handle interaction with a specific LLM API, including request/response formatting, tool definition conversion, native/non-native tool call management, and standardizing output streams to a common event format.
Provider implementations must use a coreStream method that yields standardized stream events to decouple the main loop from provider-specific details.
The coreStream method in each Provider must perform a single streaming API request per conversation round and must not contain multi-round tool call loop logic.
Provider files should implement helper methods such as formatMessages, convertToProviderTools, parseFunctionCalls, and prepareFunctionCallPrompt as needed for provider-specific logic.
All provider implementations must parse provider-specific data chunks and yield standardized events for text, reasoning, tool calls, usage, errors, stop reasons, and image data.
When a provider does not support native function calling, it must prepare messages using prompt wrapping (e.g., prepareFunctionCallPrompt) before making the API call.
When a provider supports native function calling, MCP tools must be converted to the provider's format (e.g., using convertToProviderTools) and included in the API request.
Provider implementations should aggregate and yield usage events as part of the standardized stream.
Provider implementations should yield image data events in the standardized format when applicable.
Provider implementations should yield reasoning events in the standardized format when applicable.
Provider implementations should yield tool call events (tool_call_start, tool_call_chunk, tool_call_end) in the standardized format.
Provider implementations should yield stop events with appropriate stop_reason in the standardized format.
Provider implementations should yield error events in the standardized format...

Files:

  • src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts
src/main/**/*.{ts,js,tsx,jsx}

📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)

Main-process code lives in src/main

Files:

  • src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts
  • src/main/presenter/configPresenter/providerModelSettings.ts
  • src/main/presenter/configPresenter/modelDefaultSettings.ts
src/**/*.{ts,tsx,vue}

📄 CodeRabbit inference engine (CLAUDE.md)

Use English for all logs and comments

Files:

  • src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts
  • src/main/presenter/configPresenter/providerModelSettings.ts
  • src/main/presenter/configPresenter/modelDefaultSettings.ts
src/main/presenter/**/*.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Maintain one presenter per functional domain in src/main/presenter/

Files:

  • src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts
  • src/main/presenter/configPresenter/providerModelSettings.ts
  • src/main/presenter/configPresenter/modelDefaultSettings.ts
src/main/presenter/configPresenter/**/*.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Centralize configuration logic under configPresenter/

Files:

  • src/main/presenter/configPresenter/providerModelSettings.ts
  • src/main/presenter/configPresenter/modelDefaultSettings.ts
🧠 Learnings (8)
📓 Common learnings
Learnt from: neoragex2002
PR: ThinkInAIXYZ/deepchat#550
File: src/main/presenter/mcpPresenter/inMemoryServers/meetingServer.ts:258-324
Timestamp: 2025-06-21T15:53:34.641Z
Learning: User neoragex2002 prefers concise, brief responses rather than chatty or verbose communication. Avoid lengthy explanations, excessive enthusiasm, or ASCII art in responses.
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : When a provider does not support native function calling, it must prepare messages using prompt wrapping (e.g., `prepareFunctionCallPrompt`) before making the API call.

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : When a provider supports native function calling, MCP tools must be converted to the provider's format (e.g., using `convertToProviderTools`) and included in the API request.

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider files should implement helper methods such as `formatMessages`, `convertToProviderTools`, `parseFunctionCalls`, and `prepareFunctionCallPrompt` as needed for provider-specific logic.

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : The main Agent loop in `llmProviderPresenter/index.ts` should handle multi-round LLM calls and tool usage, maintaining conversation state and controlling the loop with `needContinueConversation` and `toolCallCount`.

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : `src/main/presenter/llmProviderPresenter/index.ts` should manage the overall Agent loop, conversation history, tool execution via `McpPresenter`, and frontend communication via `eventBus`.

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts
📚 Learning: 2025-08-26T14:13:46.601Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-26T14:13:46.601Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : Agent Loop layer must manage conversation flow, execute tools via McpPresenter, and standardize events to the frontend

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts
📚 Learning: 2025-08-26T14:13:46.601Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-26T14:13:46.601Z
Learning: Applies to src/main/presenter/configPresenter/providers.ts : Add new provider configuration entries in configPresenter/providers.ts

Applied to files:

  • src/main/presenter/configPresenter/providerModelSettings.ts

Comment on lines +365 to +387
id: 'models/gemini-2.5-flash-lite',
name: 'Gemini 2.5 Flash-Lite',
temperature: 0.7,
maxTokens: 65535,
contextLength: 1048576,
match: ['models/gemini-2.5-flash-lite', 'gemini-2.5-flash-lite'],
vision: true,
functionCall: true,
reasoning: true,
thinkingBudget: -1 // dynamic thinking
},
{
id: 'models/gemini-2.5-flash-lite',
name: 'Gemini 2.5 Flash-Lite',
temperature: 0.7,
maxTokens: 65535,
contextLength: 1048576,
match: ['models/gemini-2.5-flash-lite', 'gemini-2.5-flash-lite'],
vision: true,
functionCall: true,
reasoning: true,
thinkingBudget: -1 // dynamic thinking
},
Contributor

⚠️ Potential issue

Duplicate model entry: models/gemini-2.5-flash-lite appears twice

The second block is an exact duplicate and will cause UI duplication and config ambiguity.

   {
     id: 'models/gemini-2.5-flash-lite',
     name: 'Gemini 2.5 Flash-Lite',
     temperature: 0.7,
     maxTokens: 65535,
     contextLength: 1048576,
     match: ['models/gemini-2.5-flash-lite', 'gemini-2.5-flash-lite'],
     vision: true,
     functionCall: true,
     reasoning: true,
     thinkingBudget: -1 // dynamic thinking
   },
-  {
-    id: 'models/gemini-2.5-flash-lite',
-    name: 'Gemini 2.5 Flash-Lite',
-    temperature: 0.7,
-    maxTokens: 65535,
-    contextLength: 1048576,
-    match: ['models/gemini-2.5-flash-lite', 'gemini-2.5-flash-lite'],
-    vision: true,
-    functionCall: true,
-    reasoning: true,
-    thinkingBudget: -1 // dynamic thinking
-  },
🤖 Prompt for AI Agents
In src/main/presenter/configPresenter/modelDefaultSettings.ts around lines 365
to 387, there is an exact duplicate entry for the model id
'models/gemini-2.5-flash-lite' which will produce UI duplication and ambiguous
configuration; remove the redundant second block (or consolidate if intentional
differences are needed) so each model id appears only once, ensuring the array
contains a single canonical definition for 'models/gemini-2.5-flash-lite'.

Comment on lines +185 to +194
id: 'google/gemini-2.5-flash-image-preview',
name: 'Gemini 2.5 Flash Image Preview',
temperature: 0.7,
maxTokens: 32768,
contextLength: 32768,
match: ['google/gemini-2.5-flash-image-preview', 'gemini-2.5-flash-image-preview'],
vision: true,
functionCall: false,
reasoning: false
},
Contributor

🛠️ Refactor suggestion

Mark flash-image-preview as ImageGeneration

Without type: ModelType.ImageGeneration, downstream won’t set responseModalities and image chunks may be missed.

       {
         id: 'google/gemini-2.5-flash-image-preview',
         name: 'Gemini 2.5 Flash Image Preview',
         temperature: 0.7,
         maxTokens: 32768,
         contextLength: 32768,
         match: ['google/gemini-2.5-flash-image-preview', 'gemini-2.5-flash-image-preview'],
         vision: true,
         functionCall: false,
-        reasoning: false
+        reasoning: false,
+        type: ModelType.ImageGeneration
       },
🤖 Prompt for AI Agents
In src/main/presenter/configPresenter/providerModelSettings.ts around lines 185
to 194, the model entry for 'google/gemini-2.5-flash-image-preview' is missing a
type designation; add type: ModelType.ImageGeneration to that object so
downstream sets responseModalities and correctly handles image chunks (ensure
ModelType is imported/available in this module).

Comment on lines +2339 to +2349
{
id: 'google/gemini-2.5-flash-image-preview',
name: 'Gemini 2.5 Flash Image Preview',
temperature: 0.7,
maxTokens: 32768,
contextLength: 32768,
match: ['google/gemini-2.5-flash-image-preview', 'gemini-2.5-flash-image-preview'],
vision: true,
functionCall: false,
reasoning: false
},
Contributor

🛠️ Refactor suggestion

OpenRouter mirror: set ImageGeneration type as well

Keep provider parity; add type: ModelType.ImageGeneration.

       {
         id: 'google/gemini-2.5-flash-image-preview',
         name: 'Gemini 2.5 Flash Image Preview',
         temperature: 0.7,
         maxTokens: 32768,
         contextLength: 32768,
         match: ['google/gemini-2.5-flash-image-preview', 'gemini-2.5-flash-image-preview'],
         vision: true,
         functionCall: false,
-        reasoning: false
+        reasoning: false,
+        type: ModelType.ImageGeneration
       },
🤖 Prompt for AI Agents
In src/main/presenter/configPresenter/providerModelSettings.ts around lines 2339
to 2349, the model entry for 'google/gemini-2.5-flash-image-preview' is missing
the type field; add type: ModelType.ImageGeneration to the object to match other
OpenRouter image models. If ModelType is not already imported in this file,
import it from its module (or reference the existing enum location) and ensure
the entry follows the same ordering/format as other provider model objects.

@neoragex2002
Contributor Author

The CodeRabbit commit history is too messy; I'll resubmit.

