Conversation

Collaborator

@zerob13 zerob13 commented Jul 26, 2025

Adds a Gemini setting that controls whether thinking is enabled, and fixes a bug in user-defined model config handling.

Summary by CodeRabbit

  • New Features

    • Enhanced model configuration handling to indicate whether a configuration is user-defined.
    • Improved extraction and display of reasoning content for Gemini-based models, supporting both new and legacy formats.
  • Bug Fixes

    • Increased reliability and safety in storing and retrieving model configurations, preventing issues with special characters.
  • Chores

    • Markdown files are now excluded from automatic formatting.

Contributor

coderabbitai bot commented Jul 26, 2025

Walkthrough

This update introduces a robust cache key mechanism for model configurations, enhances the merging logic of model attributes with configurations, and significantly improves reasoning content extraction and streaming in the Gemini provider. It also extends the ModelConfig interface with an isUserDefined flag and updates formatting ignore rules.

Changes

File(s) Change Summary
src/main/presenter/configPresenter/modelConfig.ts Implements sanitized cache key generation/parsing, updates config retrieval/setting logic, adds isUserDefined flag to configs.
src/main/presenter/llmProviderPresenter/index.ts Refines logic for merging model attributes with config, differentiates behavior for user-defined vs default configs.
src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts Overhauls reasoning extraction: new response processor, improved streaming, explicit reasoning flag propagation, and debug logging.
src/shared/presenter.d.ts Adds optional isUserDefined property to ModelConfig interface.
.prettierignore Adds *.md pattern and newline to ignore Markdown files from Prettier formatting.

Sequence Diagram(s)

Gemini Provider: Reasoning Content Extraction and Streaming

sequenceDiagram
    participant Client
    participant GeminiProvider
    participant GeminiAPI

    Client->>GeminiProvider: coreStream(messages, modelId, modelConfig, ...)
    GeminiProvider->>GeminiAPI: Send streaming request (reasoning flag as per config)
    loop For each streamed chunk
        GeminiAPI-->>GeminiProvider: Streamed chunk (may contain reasoning)
        alt Chunk contains reasoning part
            GeminiProvider->>Client: Yield reasoning event
        end
        GeminiProvider->>Client: Yield normal content event
    end
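
For reference, here is a minimal sketch of what the standardized stream events consumed above might look like; the event names and fields are illustrative assumptions, not the project's actual type definitions:

// Hypothetical shape of the standardized events a provider's coreStream yields.
type StreamEvent =
  | { type: 'reasoning'; reasoning_content: string }
  | { type: 'text'; content: string }
  | { type: 'usage'; usage: { prompt_tokens: number; completion_tokens: number } }
  | { type: 'stop'; stop_reason: 'complete' | 'tool_use' | 'error' }

// A consumer can then route reasoning and normal content independently.
async function consume(stream: AsyncIterable<StreamEvent>): Promise<void> {
  for await (const event of stream) {
    if (event.type === 'reasoning') {
      console.debug('[thinking]', event.reasoning_content)
    } else if (event.type === 'text') {
      console.log(event.content)
    }
  }
}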

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~40 minutes

Poem

A rabbit hopped through code today,
With keys now safe and tucked away.
Gemini’s thoughts stream clear and bright,
User configs marked in the light.
Markdown files ignored with glee—
Oh, what a hoppy change spree!
🐇✨

📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9e41c58 and aab8eca.

📒 Files selected for processing (1)
  • src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts (13 hunks)
🔇 Additional comments (6)
src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts (6)

441-467: LGTM! The reasoning configuration logic has been correctly fixed.

The method signature update to include the reasoning parameter and the conditional logic properly addresses the previous concern about enabling thinking only when reasoning is explicitly true.
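
As a rough sketch, the conditional described here might look like the following; the thinkingConfig field names are assumptions based on Gemini's public thinking API, not copied from the diff:

// Hypothetical sketch: enable thinking only when reasoning is explicitly true.
function getGenerationConfig(temperature: number, maxTokens: number, reasoning: boolean) {
  const config: Record<string, unknown> = {
    temperature,
    maxOutputTokens: maxTokens
  }
  if (reasoning === true) {
    // Ask the model to emit thought parts alongside normal content
    config.thinkingConfig = { includeThoughts: true }
  } else {
    // Explicitly disable thinking for non-conversational calls
    config.thinkingConfig = { thinkingBudget: 0 }
  }
  return config
}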


619-665: Well-designed method for unified response processing.

The processGeminiResponse method effectively consolidates the reasoning content extraction logic, supporting both the new format (parts flagged with thought: true) and legacy format (<think> tags). This promotes code reuse and maintainability.
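
A simplified sketch of such dual-format extraction, with type and helper names invented for illustration:

// Hypothetical sketch of unified reasoning extraction for both formats.
interface GeminiPart {
  text?: string
  thought?: boolean // new format: parts explicitly flagged as reasoning
}

function extractReasoning(parts: GeminiPart[]): { content: string; reasoning: string } {
  let content = ''
  let reasoning = ''
  for (const part of parts) {
    if (!part.text) continue
    if (part.thought === true) reasoning += part.text
    else content += part.text
  }
  // Legacy format: reasoning embedded in <think>...</think> tags in the text
  const match = content.match(/<think>([\s\S]*?)<\/think>/)
  if (match) {
    reasoning += match[1].trim()
    content = content.replace(match[0], '').trim()
  }
  return { content, reasoning }
}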


258-258: Consistent reasoning parameter handling across synchronous methods.

All synchronous methods appropriately pass false for the reasoning parameter, which is correct since reasoning/thinking should only be enabled in conversational contexts. The methods that return LLMResponse now correctly use the unified processGeminiResponse method.

Also applies to: 808-811, 836-839, 866-866


940-940: Enhanced reasoning content handling in streaming method.

The core streaming method correctly uses modelConfig.reasoning to determine when to enable thinking and properly handles both new format reasoning content (parts with thought: true) and legacy format (<think> tags). The immediate yielding of reasoning events follows the standardized stream event pattern.

Also applies to: 1025-1058
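
A minimal sketch of the per-chunk routing this describes; the chunk structure and event shapes below are assumptions for illustration:

// Hypothetical sketch: route thought parts to reasoning events as they arrive.
interface GeminiChunk {
  candidates?: { content?: { parts?: { text?: string; thought?: boolean }[] } }[]
}

type ProviderEvent =
  | { type: 'reasoning'; reasoning_content: string }
  | { type: 'text'; content: string }

async function* toStandardEvents(stream: AsyncIterable<GeminiChunk>): AsyncGenerator<ProviderEvent> {
  for await (const chunk of stream) {
    const parts = chunk.candidates?.[0]?.content?.parts ?? []
    for (const part of parts) {
      if (!part.text) continue
      if (part.thought === true) {
        // Yield immediately so the UI can render the thinking stream live
        yield { type: 'reasoning', reasoning_content: part.text }
      } else {
        yield { type: 'text', content: part.text }
      }
    }
  }
}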


1184-1184: Correct reasoning parameter handling for image generation.

Image generation models appropriately disable reasoning by passing false to getGenerationConfig, which is the correct behavior since reasoning/thinking is not applicable for image generation tasks.


1-1240: Overall implementation follows coding guidelines well.

The GeminiProvider implementation correctly follows the established patterns:

  • Uses English for logs and comments ✅
  • Implements coreStream method with standardized event interface ✅
  • Yields reasoning events in standardized format when applicable ✅
  • Proper error handling with try-catch blocks ✅
  • Successfully handles both new format reasoning (thought: true) and legacy format (<think> tags) ✅

The main area for improvement is eliminating the code duplication in the completions method.


Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

🧹 Nitpick comments (2)
src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts (2)

619-665: Well-implemented response processor with dual format support

The method correctly handles both new and legacy reasoning formats. Good defensive programming with proper null checks.

Consider extracting the legacy format parsing into a separate helper method for better readability:

+  private extractLegacyThinkingContent(text: string): { content: string; reasoning: string } {
+    const thinkStart = text.indexOf('<think>')
+    const thinkEnd = text.indexOf('</think>')
+    
+    if (thinkStart === -1 || thinkEnd <= thinkStart) {
+      return { content: text, reasoning: '' }
+    }
+    
+    const reasoning = text.substring(thinkStart + 7, thinkEnd).trim()
+    const beforeThink = text.substring(0, thinkStart).trim()
+    const afterThink = text.substring(thinkEnd + 8).trim()
+    const content = [beforeThink, afterThink].filter(Boolean).join('\n')
+    
+    return { content, reasoning }
+  }

940-942: Properly implements streaming reasoning support

The implementation correctly handles reasoning content in the new format and properly uses the model configuration.

Consider using a more descriptive log message and appropriate log level:

-    console.log('modelConfig', modelConfig, modelId)
+    console.debug('[GeminiProvider.coreStream] Model config:', { modelId, reasoning: modelConfig.reasoning })
-      console.log('chunk.candidates', JSON.stringify(chunk.candidates, null, 2))
+      console.debug('[GeminiProvider.coreStream] Chunk candidates:', JSON.stringify(chunk.candidates, null, 2))

Also applies to: 987-987, 1025-1059

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ccbaaaf and 44dc259.

📒 Files selected for processing (5)
  • CLAUDE.md (1 hunks)
  • src/main/presenter/configPresenter/modelConfig.ts (6 hunks)
  • src/main/presenter/llmProviderPresenter/index.ts (1 hunks)
  • src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts (13 hunks)
  • src/shared/presenter.d.ts (1 hunks)
🪛 GitHub Actions: PR Check
CLAUDE.md

[warning] 1-1: Prettier formatting check failed. Run 'prettier --write' to fix code style issues.

🔇 Additional comments (8)
src/shared/presenter.d.ts (1)

128-131: Well-structured interface enhancement

The addition of the isUserDefined flag is clean and well-documented. Making it optional ensures backward compatibility.
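
A sketch of the extended interface; only isUserDefined comes from this PR, the surrounding fields are placeholders:

// Hypothetical slice of ModelConfig; the real interface has more fields.
interface ModelConfig {
  maxTokens: number
  temperature: number
  reasoning?: boolean
  /** True when the config was saved explicitly by the user rather than derived from defaults */
  isUserDefined?: boolean
}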

src/main/presenter/llmProviderPresenter/index.ts (1)

247-266: Excellent configuration merging logic

The refactored logic clearly distinguishes between user-defined and default configurations. User settings are strictly applied when isUserDefined is true, while default configs allow model capabilities to take precedence. This provides good flexibility while respecting user intent.
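
A minimal sketch of that precedence rule, reusing the ModelConfig sketch above and assuming a supportsReasoning capability flag on the model (names invented for illustration):

// Hypothetical merge: user-defined configs win; defaults defer to model capabilities.
interface ModelInfo {
  supportsReasoning?: boolean
}

function mergeModelConfig(model: ModelInfo, config: ModelConfig): ModelConfig {
  if (config.isUserDefined) {
    // Respect user intent verbatim
    return { ...config }
  }
  // Default config: detected model capabilities take precedence
  return {
    ...config,
    reasoning: model.supportsReasoning ?? config.reasoning
  }
}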

src/main/presenter/configPresenter/modelConfig.ts (3)

20-69: Robust cache key sanitization implementation

The sanitization logic effectively handles special characters that could interfere with electron-store's JSON parsing. The bidirectional conversion is well-implemented.

Note: If provider/model IDs ever contain the placeholder patterns (e.g., "DOT"), the deserialization could be incorrect. This is unlikely but worth documenting.
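
A minimal sketch of such sanitization, assuming '.' is the character that conflicts with electron-store's dot-notation keys and using an arbitrary placeholder token:

// Hypothetical sketch: electron-store interprets '.' in keys as nested paths,
// so substitute a placeholder before storing and reverse it when reading.
const DOT_PLACEHOLDER = '__DOT__' // arbitrary; breaks if IDs ever contain this token

function generateCacheKey(providerId: string, modelId: string): string {
  const sanitize = (s: string) => s.split('.').join(DOT_PLACEHOLDER)
  return `${sanitize(providerId)}:${sanitize(modelId)}`
}

function parseCacheKey(key: string): { providerId: string; modelId: string } {
  const sep = key.indexOf(':')
  const restore = (s: string) => s.split(DOT_PLACEHOLDER).join('.')
  return {
    providerId: restore(key.slice(0, sep)),
    modelId: restore(key.slice(sep + 1))
  }
}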


90-171: Well-structured configuration retrieval logic

The refactored getModelConfig method provides clear priority handling and properly tracks the configuration source. The addition of the isUserDefined flag enables downstream components to make informed decisions about configuration application.


180-180: Consistent cache key handling across all methods

All methods have been properly updated to use the new sanitized cache key generation and parsing. The use of parseCacheKey in getProviderModelConfigs ensures correct ID extraction.

Also applies to: 198-198, 231-235, 253-253

src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts (3)

258-258: LGTM!

Correctly disables reasoning for title generation as it's not needed for this use case.


808-812: Good use of the centralized response processor

Both methods correctly disable reasoning for their use cases and properly utilize the processGeminiResponse method for consistent response handling.

Also applies to: 836-839


1184-1184: Correctly disables reasoning for image generation

The explicit false parameter appropriately indicates that image generation doesn't require reasoning capabilities.

@zerob13 zerob13 merged commit 53e0e7a into dev Jul 26, 2025
2 checks passed
zerob13 added a commit that referenced this pull request Jul 27, 2025
* (WIP) feat: add Builtin Knowledge Server and settings integration

* feat: add multiple languages translation

* feat: enhance BuiltinKnowledgeSettings with model selection and update translations

* feat: update BuiltinKnowledgeSettings with enhanced configuration options and translations

* feat: update knowledge base settings to use 'builtinKnowledge' and enhance BuiltinKnowledgeSettings with URL query parameter handling

* feat: enhance BuiltinKnowledgeSettings with model selection and error handling for missing models

* feat: add confirmation dialog and error messages for removing built-in knowledge configurations

* props

* [WIP] feat: implement KnowledgePresenter and related embedding functionality

* [WIP] feat: add KnowledgeConfHelper for managing knowledge base configurations

* [WIP] feat: log new knowledge config additions in KnowledgePresenter

* [WIP] feat: enhance knowledge base settings and descriptions across components

* [WIP] feat: enhance Built-in Knowledge settings and descriptions, add advanced options and tooltips

* [WIP] feat: add dimensionsHelper to settings for better user guidance on embedding dimensions

* [WIP] feat: add getDimensions method and update embedding handling across providers

* [wip] feat: enhance embedding handling by adding error handling and resetting model selection in settings

* [WIP] feat: refactor embedding handling to use modelId and providerId, enhance KnowledgePresenter integration

* [WIP] feat: update KnowledgePresenter and LLMProviderPresenter to improve embedding handling and error logging

* [WIP] feat: enhance BuiltinKnowledgeSettings with additional parameters and loading logic for better user experience

* [WIP] feat: enhance KnowledgePresenter to handle deleted configs and improve reset logic

* [WIP] feat: update LLMProviderPresenter and OllamaProvider to enhance model listing with additional configuration properties

* [WIP] feat: enhance Ollama model integration by updating local models to include dynamic configuration retrieval

* [WIP] fix: update getRagApplication to include baseURL in Embeddings instantiation

* [WIP] feat: update getDimensions method to return structured response with error handling

* [WIP] feat: enhance BuiltinKnowledgeSettings with dynamic dimension detection and loading state

* feat: add duration to toast notifications for improved user feedback

* feat: add BuiltinKnowledge file upload box

* feat: update TODO list with additional parameters and logic improvements for BuiltinKnowledgeSettings and OllamaProvider

* feat: add delay duration to tooltips for improved user experience

* feat: add BuiltinKnowledge file reload button

* feat: limit BuiltinKnowledge file types

* feat: add new BuiltinKnowledge form items

* fix: fix BuiltInKnowledge embedding modelId

* revert the lucide-vue-next version bump changes

* fix: fix BuiltInKnowledge rerank form item

* [WIP] refactor: update knowledge base configuration to use BuiltinKnowledgeConfig and remove unused embedding classes (duckdb does not provide an binrary extension for windows)

* chore: remove unused llm-tools embedjs dependencies from package.json

* feat: implement DuckDBPresenter for vector database operations (make sure duckdb extension vss has been installed)

* refactor: update import statements to use default imports for fs and path

* feat: add BuiltinKnowledge form Information display

* refactor: restructure postinstall script for clarity and improved extension installation process

* refactor: update icon in BuiltinKnowledgeSettings and change v-show to v-if in KnowledgeBaseSettings; add file type acceptance in KnowledgeFile

* refactor: simplify file icon retrieval by centralizing logic in getMimeTypeIcon utility function

* refactor: enhance type safety for builtinKnowledgeDetail and improve code readability in KnowledgeBaseSettings and KnowledgeFile components

* fix: add optional chaining for builtinKnowledgeDetail description to prevent potential runtime errors

* feat: add KnowledgeFileMessage type and file management methods to IKnowledgePresenter interface

* feat: enhance DuckDBPresenter with file management methods and update IVectorDatabasePresenter interface

* refactor: rename methods and update table names in DuckDBPresenter for clarity and consistency

* feat: implement file management methods in RagPresenter and update IKnowledgePresenter interface

* feat: access BuiltinKnowledge file interface

* fix: fix prompt information error

* fix: improve error toast description for file upload failure

* feat: add file management methods and enhance interfaces in presenters; update file handling logic

* feat: add RAG_EVENTS for file update notifications; implement vector utility functions

* feat: enhance LLM dimension handling and add normalization support; update related components and translations

* feat: update vector database handling to include normalization support; refactor related methods

* feat: add dayjs dependency for time formatting

* feat: add a listener for FILE_UPDATED

* feat: change the params format

* feat: change callback function

* fix: resolve merge conflicts in localization files

* feat(knowledge): Implement file listing and fix embedding parameters

* feat: change loadlist after file upload and file delete

* fix(knowledge): correct timestamp storage and refactor database interaction

* fix: remove unnecessary nextTick in reAddFile

* fix: remove duplicate loadList in deleteFile

* feat(knowledge): enhance file handling with status updates and event emissions

* feat: add similarity query functionality to RagPresenter and DuckDBPresenter

* feat: implement similarity query in BuiltinKnowledgeServer and update KnowledgeFile component

* feat: enhance BuiltinKnowledge module with detailed architecture and design documentation

* feat: remove part of builtinKnowledge base info display

* fix: fix file status switching bug

* feat: add builtinKnowledge file search

* fix: remove redundant div

* feat: enhance file handling process in BuiltinKnowledge design with detailed flow for file insertion and retrieval

* feat: update BuiltinKnowledge design document with refined file handling and retrieval processes

* feat: refactor BuiltinKnowledge module by replacing RagPresenter with KnowledgeStorePresenter and updating related components

* feat: add builtinKnowledge file search score

* feat: enhance error handling in file upload and re-upload processes in KnowledgeFile component

* fix: fix overly long file names

* fix: fix overly long file names

* refactor: simplify checkpoint logic in DuckDBPresenter open method

* feat: add @langchain/core dependency to enhance functionality

* fix: update file extension handling to use correct variable name

* feat: add crash reporter initialization for error tracking

* fix: enhance logging and error handling in DuckDBPresenter methods

* fix: move crash reporter initialization inside logging check

* feat: add toast messages for model status and L2 normalization support in multiple languages

* refactor: simplify fileTask method by removing unnecessary promise wrapping and adding comments

* refactor: update model handling by removing unnecessary ModelConfig references and enhancing model info structure

* fix: update company name format in crash reporter configuration

* fix: fix embedding model default settings and revert ModelConfigItem changed

* fix: cancel crash report

* fix: fix pulling model type not assignable problem

* fix: remove unnecessary files

* fix: remove unnecessary files

* fix: block option rerank model (not implemented yet)

* fix: dynamically decide whether to show model customization configuration button

* fix: remove useless i18n translations

* fix: remove useless dependencies and improve definitions

* perf: improve knowledgePresenter resource release

* perf: convert to async function for better error handling

* perf: convert to async function for better error handling

* perf: improve vector utils

* fix: fix error words

* (WIP) feat: selectively enable mcp toolsets

* perf: mark the interrupted task as a user-cancelled task when app startup

* perf: add try-catch to enhance program stability

* fix: declared but never read error

* fix: missing attr file_id when insert vector(s)

* perf: skip duckdb vss extension installation on macOS

* fix: remove bad references

* perf: disable auto install duckdb vss extension
1. will cause macOS sign problem
2. will increase 40Mb for build

* perf: remove langchain from package, reduce package size

* fix: declared but never read error

* perf: use Bipolar Quadratic Mapping algorithm to ensure that the vector confidence is between [0,1]

* perf: a more appropriate scaling factor

* perf: knowledge config update logic

* fix: update text fixed

* fix: lint

* feat:Add Groq as Provider

* update groq.svg

* update groqProvider.ts

* (WIP) perf: enhance knowledge management with chunk processing and task scheduling features

* feat: remove python code run on js

* (WIP) feat: add clearDirtyData method to clean up orphaned vectors and chunks

* (WIP) feat: enhance DuckDBPresenter with logging and new insertVectors method; update KnowledgeStorePresenter for chunk processing and status management

* feat: refactor task management in KnowledgeTaskPresenter; enhance chunk processing and status handling in KnowledgeStorePresenter

* feat: add enabledMcpTools field to conversation for controlling MCP tools

* feat: filter MCP tools by enabledMcpTools

* refactor: update task management and chunk processing in KnowledgePresenter and KnowledgeTaskPresenter; enhance error handling and metadata management

* feat: enhance DuckDBPresenter and KnowledgeStorePresenter with error handling; update task management and chunk processing

* feat: enhance task management in KnowledgeTaskPresenter; improve error handling and processing flow in KnowledgeStorePresenter; update file list handling in KnowledgeFile component

* feat: refactor toggle logic for MCP service and tool state

* feat: enhance file handling in KnowledgeStorePresenter; improve error handling and metadata management in KnowledgeFile and presenter.d.ts

* feat: update DuckDBPresenter and presenter.d.ts; enhance transaction management and introduce new task status summary interface

* refactor: remove obsolete RAG event constants for file progress, chunk completion, and task queue status

* feat: add file progress tracking and event emission for file processing updates

* fix: update DuckDB dependency to version 1.3.2-alpha.25; enhance database cleanup logic in KnowledgePresenter

* feat: enhance KnowledgePresenter configuration management; improve store presenter handling and update method signature

* feat: add dialog handling with DialogPresenter and MessageDialog component

* feat: enhance dialog handling with improved response management and new closeable option

* feat: refactor dialog handling to support timeout and response management with enhanced type definitions

* feat: update dialog request types for consistency and clarity in MessageDialog component

* feat: enhance MessageDialog component with i18n support for descriptions and improve dialog timeout handling

* feat: enhance dialog handling with improved error management and response structure

* feat: improve dialog error handling and response structure in DialogPresenter

* fix: e2b key not working

* (WIP) perf: enhance knowledge management with chunk processing and task scheduling features

* feat: implement task management features for pausing and resuming tasks in DuckDB and Knowledge presenters

* feat: implement database migration and metadata management in DuckDBPresenter

* fix: ensure database version is set after migration completion

* update githubCopilotProvider

* update Copilot Model

* feat: Refactor Knowledge Presenter and related components

- Updated KnowledgePresenter design document to reflect new architecture and features, including improved lifecycle management and event handling.
- Enhanced file processing flow in KnowledgeStorePresenter to ensure immediate feedback and error handling during file reading.
- Modified KnowledgeFile.vue to support additional file types and improve file status handling in the UI.
- Improved configuration management for Knowledge Presenter, allowing for better integration and user experience.

* use provider check if model id is not provided

* fix: reorder parameters in getEmbeddings method for consistency across providers

* feat: add export markdown

* check copilot provider by model

* update GitHubCopilotOAuth.vue

* fix: remove redundant 'redetectDimensions' translations from multiple language settings

* wip: better style

* wip: fix worker

* chore: remove unuse code

* feat: add i18n

* fix: format

* fix: convert uploadedAt to string for consistent data handling

* fix: lint

* docs: add comprehensive documentation for Dialog module and its components

* fix: i18n and ai review

* Update src/main/events.ts

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* Update src/main/lib/textsplitters/index.ts

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* Update src/renderer/src/lib/utils.ts

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* refactor: improve error handling and logging in dialog and knowledge task presenters; update text sanitization and localization

* fix: #623

* feat: change function name

* feat: add empty data display

* feat: add click outside to close sidebar functionality

* style(threads): optimize the operation logic of new sessions (#633)

* style(threads): optimize the operation logic of new sessions

* chore: format code

* chore(ci): add code lint check (#634)

* chore(ci): add code lint check

* chore: remove linting steps from build workflow; add linting steps to PR check workflow

* fix: resolve sidebar toggle button conflict (#637)

* fix: Bugfix/gemini thinking (#639)

* fix: gemini reasoning by config

* feat: support gemini thinking

* fix: user define model config first

* fix: format

* chore: ignore md for format

* doc: remove empty line

* fix: ai review

* perf(ChatConfig): Set the TooltipProvider component to add a delay duration of 200 & update the include configuration in the tsconfig.web.json file (#640)

* feat: Add thinking budget support for Gemini 2.5 series models (#643)

* chore: update 0.2.7

---------

Co-authored-by: hllshiro <40970081+hllshiro@users.noreply.github.com>
Co-authored-by: ysli <sqsyli@qq.com>
Co-authored-by: zhangmo8 <wegi866@gmail.com>
Co-authored-by: dw9 <xweimvp@gmail.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: yyhhyyyyyy <yyhhyyyyyy8@gmail.com>
Co-authored-by: 阿菜 Cai <1064425721@qq.com>
Co-authored-by: 阿菜 Cai <jimmyrss1102@gmail.com>
Co-authored-by: flingyp <flingyp@163.com>
@zerob13 zerob13 deleted the bugfix/gemini-thinking branch September 21, 2025 15:15