Bugfix/gemini thinking #639

Conversation
Walkthrough

This update introduces a robust cache key mechanism for model configurations, enhances the merging logic of model attributes with configurations, and significantly improves reasoning content extraction and streaming in the Gemini provider. It also extends the

Changes

Sequence Diagram(s)

Gemini Provider: Reasoning Content Extraction and Streaming

```mermaid
sequenceDiagram
    participant Client
    participant GeminiProvider
    participant GeminiAPI
    Client->>GeminiProvider: coreStream(messages, modelId, modelConfig, ...)
    GeminiProvider->>GeminiAPI: Send streaming request (reasoning flag as per config)
    loop For each streamed chunk
        GeminiAPI-->>GeminiProvider: Streamed chunk (may contain reasoning)
        alt Chunk contains reasoning part
            GeminiProvider->>Client: Yield reasoning event
        end
        GeminiProvider->>Client: Yield normal content event
    end
```
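The chunk-splitting step of this flow can be sketched as a small generator. This is an illustrative reconstruction only: the `StreamEvent` shape, the `GeminiPart` interface, and the `splitChunk` helper are assumptions based on the sequence diagram, not the actual DeepChat `coreStream` implementation.

```typescript
// Illustrative sketch: event and chunk shapes are assumptions drawn from the
// sequence diagram, not the actual DeepChat coreStream implementation.
type StreamEvent =
  | { type: 'reasoning'; content: string }
  | { type: 'text'; content: string }

interface GeminiPart {
  text: string
  thought?: boolean // set on parts that carry reasoning in the new format
}

// Split one streamed chunk's parts into standardized reasoning/text events
function* splitChunk(parts: GeminiPart[]): Generator<StreamEvent> {
  for (const part of parts) {
    yield part.thought
      ? { type: 'reasoning', content: part.text }
      : { type: 'text', content: part.text }
  }
}

const events = [
  ...splitChunk([
    { text: 'thinking about the request', thought: true },
    { text: 'final answer' }
  ])
]
// events[0] is a reasoning event, events[1] a text event
```

The key property is that reasoning and normal content are yielded as separate standardized events within the same chunk, which is what lets the main loop stay provider-agnostic.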
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~40 minutes
Actionable comments posted: 3
🧹 Nitpick comments (2)
src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts (2)
619-665: Well-implemented response processor with dual format support

The method correctly handles both new and legacy reasoning formats. Good defensive programming with proper null checks.
Consider extracting the legacy format parsing into a separate helper method for better readability:
```diff
+ private extractLegacyThinkingContent(text: string): { content: string; reasoning: string } {
+   const thinkStart = text.indexOf('<think>')
+   const thinkEnd = text.indexOf('</think>')
+
+   if (thinkStart === -1 || thinkEnd <= thinkStart) {
+     return { content: text, reasoning: '' }
+   }
+
+   const reasoning = text.substring(thinkStart + 7, thinkEnd).trim()
+   const beforeThink = text.substring(0, thinkStart).trim()
+   const afterThink = text.substring(thinkEnd + 8).trim()
+   const content = [beforeThink, afterThink].filter(Boolean).join('\n')
+
+   return { content, reasoning }
+ }
```
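As a runnable sanity check of what such a legacy `<think>`-tag parser would return, here is the same logic restated standalone (in the PR it would be a private method on the provider class, not a free function):

```typescript
// Standalone restatement of the suggested legacy <think> parser, for
// illustration only; in the provider it would be a private class method.
function extractLegacyThinkingContent(text: string): { content: string; reasoning: string } {
  const thinkStart = text.indexOf('<think>')
  const thinkEnd = text.indexOf('</think>')

  if (thinkStart === -1 || thinkEnd <= thinkStart) {
    return { content: text, reasoning: '' }
  }

  // '<think>' is 7 characters, '</think>' is 8
  const reasoning = text.substring(thinkStart + 7, thinkEnd).trim()
  const beforeThink = text.substring(0, thinkStart).trim()
  const afterThink = text.substring(thinkEnd + 8).trim()
  const content = [beforeThink, afterThink].filter(Boolean).join('\n')

  return { content, reasoning }
}

const parsed = extractLegacyThinkingContent('<think>step by step</think>The answer is 42.')
// parsed.reasoning → 'step by step'
// parsed.content   → 'The answer is 42.'
```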
940-942: Properly implements streaming reasoning support

The implementation correctly handles reasoning content in the new format and properly uses the model configuration.
Consider using a more descriptive log message and appropriate log level:
```diff
- console.log('modelConfig', modelConfig, modelId)
+ console.debug('[GeminiProvider.coreStream] Model config:', { modelId, reasoning: modelConfig.reasoning })
```

```diff
- console.log('chunk.candidates', JSON.stringify(chunk.candidates, null, 2))
+ console.debug('[GeminiProvider.coreStream] Chunk candidates:', JSON.stringify(chunk.candidates, null, 2))
```

Also applies to: 987-987, 1025-1059
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- CLAUDE.md (1 hunks)
- src/main/presenter/configPresenter/modelConfig.ts (6 hunks)
- src/main/presenter/llmProviderPresenter/index.ts (1 hunks)
- src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts (13 hunks)
- src/shared/presenter.d.ts (1 hunks)
🧰 Additional context used
📓 Path-based instructions (13)
**/*.{ts,tsx,js,jsx,vue}
📄 CodeRabbit Inference Engine (CLAUDE.md)
Use English for logs and comments
Files:
- src/shared/presenter.d.ts
- src/main/presenter/llmProviderPresenter/index.ts
- src/main/presenter/configPresenter/modelConfig.ts
- src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts
**/*.{ts,tsx}
📄 CodeRabbit Inference Engine (CLAUDE.md)
Strict type checking enabled for TypeScript
**/*.{ts,tsx}: Always use try-catch to handle potential errors
Provide meaningful error messages
Log detailed error information
Degrade gracefully on failure
Logs should include a timestamp, log level, error code, error description, stack trace (if applicable), and relevant context
Log levels should include ERROR, WARN, INFO, and DEBUG
Do not swallow errors
Provide user-friendly error messages
Implement error retry mechanisms
Avoid logging sensitive information
Use structured logging
Set appropriate log levels
Files:
- src/shared/presenter.d.ts
- src/main/presenter/llmProviderPresenter/index.ts
- src/main/presenter/configPresenter/modelConfig.ts
- src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts
src/shared/**/*.ts
📄 CodeRabbit Inference Engine (CLAUDE.md)
Shared types in src/shared/
Files:
src/shared/presenter.d.ts
**/*.{js,jsx,ts,tsx}
📄 CodeRabbit Inference Engine (.cursor/rules/development-setup.mdc)
**/*.{js,jsx,ts,tsx}: Use OxLint for code linting
Write logs and comments in English
Files:
- src/shared/presenter.d.ts
- src/main/presenter/llmProviderPresenter/index.ts
- src/main/presenter/configPresenter/modelConfig.ts
- src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts
src/shared/*.d.ts
📄 CodeRabbit Inference Engine (.cursor/rules/electron-best-practices.mdc)
The shared/*.d.ts files are used to define the types of objects exposed by the main process to the renderer process
Files:
src/shared/presenter.d.ts
src/shared/**/*.{ts,tsx,d.ts}
📄 CodeRabbit Inference Engine (.cursor/rules/project-structure.mdc)
Shared type definitions go in the shared directory
Files:
src/shared/presenter.d.ts
src/main/**/*.ts
📄 CodeRabbit Inference Engine (CLAUDE.md)
Main to Renderer: Use EventBus to broadcast events via mainWindow.webContents.send()
Use Electron's built-in APIs for file system and native dialogs
Files:
- src/main/presenter/llmProviderPresenter/index.ts
- src/main/presenter/configPresenter/modelConfig.ts
- src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts
src/main/presenter/**/*.ts
📄 CodeRabbit Inference Engine (CLAUDE.md)
One presenter per functional domain
Files:
- src/main/presenter/llmProviderPresenter/index.ts
- src/main/presenter/configPresenter/modelConfig.ts
- src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts
src/{main,renderer}/**/*.ts
📄 CodeRabbit Inference Engine (.cursor/rules/electron-best-practices.mdc)
src/{main,renderer}/**/*.ts: Use context isolation for improved security
Implement proper inter-process communication (IPC) patterns
Optimize application startup time with lazy loading
Implement proper error handling and logging for debugging
Files:
- src/main/presenter/llmProviderPresenter/index.ts
- src/main/presenter/configPresenter/modelConfig.ts
- src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts
src/main/presenter/llmProviderPresenter/index.ts
📄 CodeRabbit Inference Engine (.cursor/rules/llm-agent-loop.mdc)
src/main/presenter/llmProviderPresenter/index.ts: src/main/presenter/llmProviderPresenter/index.ts should manage the overall Agent loop, conversation history, tool execution via McpPresenter, and frontend communication via eventBus.
The main Agent loop in llmProviderPresenter/index.ts should handle multi-round LLM calls and tool usage, maintaining conversation state and controlling the loop with needContinueConversation and toolCallCount.
The main Agent loop should send standardized STREAM_EVENTS (RESPONSE, END, ERROR) to the frontend via eventBus.
The main Agent loop should buffer text content, handle tool call events, format tool results for the next LLM call, and manage conversation continuation logic.
Files:
src/main/presenter/llmProviderPresenter/index.ts
src/main/**/*.{ts,js,tsx,jsx}
📄 CodeRabbit Inference Engine (.cursor/rules/project-structure.mdc)
Main process code goes in src/main
Files:
- src/main/presenter/llmProviderPresenter/index.ts
- src/main/presenter/configPresenter/modelConfig.ts
- src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts
src/main/presenter/configPresenter/**/*.ts
📄 CodeRabbit Inference Engine (CLAUDE.md)
Centralize configuration in configPresenter/
Files:
src/main/presenter/configPresenter/modelConfig.ts
src/main/presenter/llmProviderPresenter/providers/*.ts
📄 CodeRabbit Inference Engine (CLAUDE.md)
src/main/presenter/llmProviderPresenter/providers/*.ts: Create provider file in src/main/presenter/llmProviderPresenter/providers/ when adding a new LLM provider
Implement coreStream method following standardized event interface in LLM provider files
src/main/presenter/llmProviderPresenter/providers/*.ts: Each file in src/main/presenter/llmProviderPresenter/providers/*.ts should handle interaction with a specific LLM API, including request/response formatting, tool definition conversion, native/non-native tool call management, and standardizing output streams to a common event format.
Provider implementations must use a coreStream method that yields standardized stream events to decouple the main loop from provider-specific details.
The coreStream method in each Provider must perform a single streaming API request per conversation round and must not contain multi-round tool call loop logic.
Provider files should implement helper methods such as formatMessages, convertToProviderTools, parseFunctionCalls, and prepareFunctionCallPrompt as needed for provider-specific logic.
All provider implementations must parse provider-specific data chunks and yield standardized events for text, reasoning, tool calls, usage, errors, stop reasons, and image data.
When a provider does not support native function calling, it must prepare messages using prompt wrapping (e.g., prepareFunctionCallPrompt) before making the API call.
When a provider supports native function calling, MCP tools must be converted to the provider's format (e.g., using convertToProviderTools) and included in the API request.
Provider implementations should aggregate and yield usage events as part of the standardized stream.
Provider implementations should yield image data events in the standardized format when applicable.
Provider implementations should yield reasoning events in the standardized format when applicable.
Provider implementations should yield tool call events (`tool_call_star...
Files:
src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts
🧠 Learnings (5)
CLAUDE.md (13)
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : src/main/presenter/llmProviderPresenter/index.ts should manage the overall Agent loop, conversation history, tool execution via McpPresenter, and frontend communication via eventBus.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : The main Agent loop should buffer text content, handle tool call events, format tool results for the next LLM call, and manage conversation continuation logic.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : The main Agent loop in llmProviderPresenter/index.ts should handle multi-round LLM calls and tool usage, maintaining conversation state and controlling the loop with needContinueConversation and toolCallCount.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: CLAUDE.md:0-0
Timestamp: 2025-07-21T01:45:33.790Z
Learning: Applies to src/main/presenter/configPresenter/providers.ts : Add provider configuration in configPresenter/providers.ts when adding a new LLM provider
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: CLAUDE.md:0-0
Timestamp: 2025-07-21T01:45:33.790Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Create provider file in src/main/presenter/llmProviderPresenter/providers/ when adding a new LLM provider
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : The main Agent loop should send standardized STREAM_EVENTS (RESPONSE, END, ERROR) to the frontend via eventBus.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: CLAUDE.md:0-0
Timestamp: 2025-07-21T01:45:33.790Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Implement coreStream method following standardized event interface in LLM provider files
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Each file in src/main/presenter/llmProviderPresenter/providers/*.ts should handle interaction with a specific LLM API, including request/response formatting, tool definition conversion, native/non-native tool call management, and standardizing output streams to a common event format.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations must use a coreStream method that yields standardized stream events to decouple the main loop from provider-specific details.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : The coreStream method in each Provider must perform a single streaming API request per conversation round and must not contain multi-round tool call loop logic.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations should yield tool call events (tool_call_start, tool_call_chunk, tool_call_end) in the standardized format.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : All provider implementations must parse provider-specific data chunks and yield standardized events for text, reasoning, tool calls, usage, errors, stop reasons, and image data.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : When a provider supports native function calling, MCP tools must be converted to the provider's format (e.g., using convertToProviderTools) and included in the API request.
src/shared/presenter.d.ts (2)
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: CLAUDE.md:0-0
Timestamp: 2025-07-21T01:45:33.790Z
Learning: Applies to src/main/presenter/configPresenter/**/*.ts : Centralize configuration in configPresenter/
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: CLAUDE.md:0-0
Timestamp: 2025-07-21T01:45:33.790Z
Learning: Applies to src/main/presenter/configPresenter/providers.ts : Add provider configuration in configPresenter/providers.ts when adding a new LLM provider
src/main/presenter/llmProviderPresenter/index.ts (11)
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: CLAUDE.md:0-0
Timestamp: 2025-07-21T01:45:33.790Z
Learning: Applies to src/main/presenter/configPresenter/providers.ts : Add provider configuration in configPresenter/providers.ts when adding a new LLM provider
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: CLAUDE.md:0-0
Timestamp: 2025-07-21T01:45:33.790Z
Learning: Applies to src/main/presenter/configPresenter/**/*.ts : Centralize configuration in configPresenter/
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Each file in src/main/presenter/llmProviderPresenter/providers/*.ts should handle interaction with a specific LLM API, including request/response formatting, tool definition conversion, native/non-native tool call management, and standardizing output streams to a common event format.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : src/main/presenter/llmProviderPresenter/index.ts should manage the overall Agent loop, conversation history, tool execution via McpPresenter, and frontend communication via eventBus.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : All provider implementations must parse provider-specific data chunks and yield standardized events for text, reasoning, tool calls, usage, errors, stop reasons, and image data.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider files should implement helper methods such as formatMessages, convertToProviderTools, parseFunctionCalls, and prepareFunctionCallPrompt as needed for provider-specific logic.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : The main Agent loop in llmProviderPresenter/index.ts should handle multi-round LLM calls and tool usage, maintaining conversation state and controlling the loop with needContinueConversation and toolCallCount.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations should yield reasoning events in the standardized format when applicable.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : The main Agent loop should buffer text content, handle tool call events, format tool results for the next LLM call, and manage conversation continuation logic.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: CLAUDE.md:0-0
Timestamp: 2025-07-21T01:45:33.790Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Implement coreStream method following standardized event interface in LLM provider files
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : When a provider supports native function calling, MCP tools must be converted to the provider's format (e.g., using convertToProviderTools) and included in the API request.
src/main/presenter/configPresenter/modelConfig.ts (4)
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: CLAUDE.md:0-0
Timestamp: 2025-07-21T01:45:33.790Z
Learning: Applies to src/main/presenter/configPresenter/**/*.ts : Centralize configuration in configPresenter/
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: CLAUDE.md:0-0
Timestamp: 2025-07-21T01:45:33.790Z
Learning: Applies to src/main/presenter/configPresenter/providers.ts : Add provider configuration in configPresenter/providers.ts when adding a new LLM provider
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider files should implement helper methods such as formatMessages, convertToProviderTools, parseFunctionCalls, and prepareFunctionCallPrompt as needed for provider-specific logic.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations should yield tool call events (tool_call_start, tool_call_chunk, tool_call_end) in the standardized format.
src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts (10)
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations should yield reasoning events in the standardized format when applicable.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : All provider implementations must parse provider-specific data chunks and yield standardized events for text, reasoning, tool calls, usage, errors, stop reasons, and image data.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Each file in src/main/presenter/llmProviderPresenter/providers/*.ts should handle interaction with a specific LLM API, including request/response formatting, tool definition conversion, native/non-native tool call management, and standardizing output streams to a common event format.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: CLAUDE.md:0-0
Timestamp: 2025-07-21T01:45:33.790Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Implement coreStream method following standardized event interface in LLM provider files
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : The coreStream method in each Provider must perform a single streaming API request per conversation round and must not contain multi-round tool call loop logic.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider files should implement helper methods such as formatMessages, convertToProviderTools, parseFunctionCalls, and prepareFunctionCallPrompt as needed for provider-specific logic.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations must use a coreStream method that yields standardized stream events to decouple the main loop from provider-specific details.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : The main Agent loop should buffer text content, handle tool call events, format tool results for the next LLM call, and manage conversation continuation logic.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations should yield text events in the standardized format.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations should yield tool call events (tool_call_start, tool_call_chunk, tool_call_end) in the standardized format.
🧬 Code Graph Analysis (1)
src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts (1)
src/shared/presenter.d.ts (1)
LLMResponse(757-774)
🪛 GitHub Actions: PR Check
CLAUDE.md
[warning] 1-1: Prettier formatting check failed. Run 'prettier --write' to fix code style issues.
🔇 Additional comments (8)
src/shared/presenter.d.ts (1)
128-131: Well-structured interface enhancement

The addition of the isUserDefined flag is clean and well-documented. Making it optional ensures backward compatibility.

src/main/presenter/llmProviderPresenter/index.ts (1)

247-266: Excellent configuration merging logic

The refactored logic clearly distinguishes between user-defined and default configurations. User settings are strictly applied when isUserDefined is true, while default configs allow model capabilities to take precedence. This provides good flexibility while respecting user intent.

src/main/presenter/configPresenter/modelConfig.ts (3)
20-69: Robust cache key sanitization implementation

The sanitization logic effectively handles special characters that could interfere with electron-store's JSON parsing. The bidirectional conversion is well-implemented.

Note: If provider/model IDs ever contain the placeholder patterns (e.g., "DOT"), the deserialization could be incorrect. This is unlikely but worth documenting.
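To make the placeholder-collision caveat concrete, here is a minimal sketch of a sanitize/parse round-trip of the kind described. The `__DOT__` token, the `::` separator, and the function names are invented for illustration and do not match the actual modelConfig.ts implementation:

```typescript
// Hypothetical sketch of a cache-key sanitize/parse round-trip; the token,
// separator, and function names are illustrative, not the real modelConfig.ts.
const SEPARATOR = '::'

function sanitizeKeyPart(part: string): string {
  // electron-store treats '.' as a nested-key path separator, so escape it
  return part.replace(/\./g, '__DOT__')
}

function desanitizeKeyPart(part: string): string {
  return part.replace(/__DOT__/g, '.')
}

function buildCacheKey(providerId: string, modelId: string): string {
  return `${sanitizeKeyPart(providerId)}${SEPARATOR}${sanitizeKeyPart(modelId)}`
}

function parseCacheKey(key: string): { providerId: string; modelId: string } {
  const [providerId, modelId] = key.split(SEPARATOR)
  return { providerId: desanitizeKeyPart(providerId), modelId: desanitizeKeyPart(modelId) }
}

const roundTrip = parseCacheKey(buildCacheKey('gemini', 'gemini-2.5-pro'))
// roundTrip → { providerId: 'gemini', modelId: 'gemini-2.5-pro' }

// The caveat in action: an ID that literally contains the placeholder does
// NOT round-trip — 'weird__DOT__id' comes back as 'weird.id'.
const collided = parseCacheKey(buildCacheKey('prov', 'weird__DOT__id'))
```

The collision case is exactly why the review suggests documenting the placeholder patterns: any ID already containing the token is silently rewritten on deserialization.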
90-171: Well-structured configuration retrieval logic

The refactored getModelConfig method provides clear priority handling and properly tracks the configuration source. The addition of the isUserDefined flag enables downstream components to make informed decisions about configuration application.
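The downstream decision this flag enables might look like the following in outline. The interfaces and the `mergeReasoning` helper are assumptions for illustration, not the actual index.ts code:

```typescript
// Hypothetical sketch of the isUserDefined-driven merge; field and function
// names are illustrative, not the actual llmProviderPresenter/index.ts code.
interface ModelConfig {
  reasoning?: boolean
  isUserDefined?: boolean
}

interface ModelCapabilities {
  reasoning: boolean
}

function mergeReasoning(config: ModelConfig, capabilities: ModelCapabilities): boolean {
  if (config.isUserDefined) {
    // User settings are applied strictly, even if the model reports otherwise
    return config.reasoning ?? false
  }
  // Default configs let detected model capabilities take precedence
  return capabilities.reasoning
}

// User explicitly disabled reasoning: respected despite model support
const userWins = mergeReasoning({ reasoning: false, isUserDefined: true }, { reasoning: true })
// → false

// Default config: detected capability wins
const capabilityWins = mergeReasoning({ reasoning: false }, { reasoning: true })
// → true
```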
180-180: Consistent cache key handling across all methods

All methods have been properly updated to use the new sanitized cache key generation and parsing. The use of parseCacheKey in getProviderModelConfigs ensures correct ID extraction.

Also applies to: 198-198, 231-235, 253-253
src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts (3)
258-258: LGTM!

Correctly disables reasoning for title generation, as it's not needed for this use case.

808-812: Good use of the centralized response processor

Both methods correctly disable reasoning for their use cases and properly utilize the processGeminiResponse method for consistent response handling.

Also applies to: 836-839

1184-1184: Correctly disables reasoning for image generation

The explicit false parameter appropriately indicates that image generation doesn't require reasoning capabilities.
- (WIP) feat: add Builtin Knowledge Server and settings integration
- feat: add multiple languages translation
- feat: enhance BuiltinKnowledgeSettings with model selection and update translations
- feat: update BuiltinKnowledgeSettings with enhanced configuration options and translations
- feat: update knowledge base settings to use 'builtinKnowledge' and enhance BuiltinKnowledgeSettings with URL query parameter handling
- feat: enhance BuiltinKnowledgeSettings with model selection and error handling for missing models
- feat: add confirmation dialog and error messages for removing built-in knowledge configurations
- props
- [WIP] feat: implement KnowledgePresenter and related embedding functionality
- [WIP] feat: add KnowledgeConfHelper for managing knowledge base configurations
- [WIP] feat: log new knowledge config additions in KnowledgePresenter
- [WIP] feat: enhance knowledge base settings and descriptions across components
- [WIP] feat: enhance Built-in Knowledge settings and descriptions, add advanced options and tooltips
- [WIP] feat: add dimensionsHelper to settings for better user guidance on embedding dimensions
- [WIP] feat: add getDimensions method and update embedding handling across providers
- [WIP] feat: enhance embedding handling by adding error handling and resetting model selection in settings
- [WIP] feat: refactor embedding handling to use modelId and providerId, enhance KnowledgePresenter integration
- [WIP] feat: update KnowledgePresenter and LLMProviderPresenter to improve embedding handling and error logging
- [WIP] feat: enhance BuiltinKnowledgeSettings with additional parameters and loading logic for better user experience
- [WIP] feat: enhance KnowledgePresenter to handle deleted configs and improve reset logic
- [WIP] feat: update LLMProviderPresenter and OllamaProvider to enhance model listing with additional configuration properties
- [WIP] feat: enhance Ollama model integration by updating local models to include dynamic configuration retrieval
- [WIP] fix: update getRagApplication to include baseURL in Embeddings instantiation
- [WIP] feat: update getDimensions method to return structured response with error handling
- [WIP] feat: enhance BuiltinKnowledgeSettings with dynamic dimension detection and loading state
- feat: add duration to toast notifications for improved user feedback
- feat: add BuiltinKnowledge file upload box
- feat: update TODO list with additional parameters and logic improvements for BuiltinKnowledgeSettings and OllamaProvider
- feat: add delay duration to tooltips for improved user experience
- feat: add BuiltinKnowledge file reload button
- feat: limit BuiltinKnowledge file types
- feat: add new BuiltinKnowledge form items
- fix: fix BuiltInKnowledge embedding modelId
- revert the lucide-vue-next version bump change
- fix: fix BuiltInKnowledge rerank form item
- [WIP] refactor: update knowledge base configuration to use BuiltinKnowledgeConfig and remove unused embedding classes (duckdb does not provide a binary extension for windows)
- chore: remove unused llm-tools embedjs dependencies from package.json
- feat: implement DuckDBPresenter for vector database operations (make sure duckdb extension vss has been installed)
- refactor: update import statements to use default imports for fs and path
- feat: add BuiltinKnowledge form Information display
- refactor: restructure postinstall script for clarity and improved extension installation process
- refactor: update icon in BuiltinKnowledgeSettings and change v-show to v-if in KnowledgeBaseSettings; add file type acceptance in KnowledgeFile
- refactor: simplify file icon retrieval by centralizing logic in getMimeTypeIcon utility function
- refactor: enhance type safety for builtinKnowledgeDetail and improve code readability in KnowledgeBaseSettings and KnowledgeFile components
- fix: add optional chaining for builtinKnowledgeDetail description to prevent potential runtime errors
- feat: add KnowledgeFileMessage type and file management methods to IKnowledgePresenter interface
- feat: enhance DuckDBPresenter with file management methods and update IVectorDatabasePresenter interface
- refactor: rename methods and update table names in DuckDBPresenter for clarity and consistency
- feat: implement file management methods in RagPresenter and update IKnowledgePresenter interface
- feat: access BuiltinKnowledge file interface
- fix: fix prompt information error
- fix: improve error toast description for file upload failure
- feat: add file management methods and enhance interfaces in presenters; update file handling logic
- feat: add RAG_EVENTS for file update notifications; implement vector utility functions
- feat: enhance LLM dimension handling and add normalization support; update related components and translations
- feat: update vector database handling to include normalization support; refactor related methods
- feat: add dayjs dependency for time formatting
- feat: add a listener for FILE_UPDATED
- feat: change the params format
- feat: change callback function
- fix: resolve merge conflicts in localization files
- feat(knowledge): Implement file listing and fix embedding parameters
- feat: change loadlist after file upload and file delete
- fix(knowledge): correct timestamp storage and refactor database interaction
- fix: remove unnecessary nextTick in reAddFile
- fix: remove duplicate loadList in deleteFile
- feat(knowledge): enhance file handling with status updates and event emissions
- feat: add similarity query functionality to RagPresenter and DuckDBPresenter
- feat: implement similarity query in BuiltinKnowledgeServer and update KnowledgeFile component
- feat: enhance BuiltinKnowledge module with detailed architecture and design documentation
- feat: remove part of builtinKnowledge base info display
- fix: fix file status switching bug
- feat: add builtinKnowledge file search
- fix: remove redundant div
- feat: enhance file handling process in BuiltinKnowledge design with detailed flow for file insertion and retrieval
- feat: update BuiltinKnowledge design document with refined file handling and retrieval processes
- feat: refactor BuiltinKnowledge module by replacing RagPresenter with KnowledgeStorePresenter and updating related components
- feat: add builtinKnowledge file search score
- feat: enhance error handling in file upload and re-upload processes in KnowledgeFile component
- fix: fix overly long file names
- fix: fix overly long file names
- refactor: simplify checkpoint logic in DuckDBPresenter open method
- feat: add @langchain/core dependency to enhance functionality
- fix: update file extension handling to use correct variable name
- feat: add crash reporter initialization for error tracking
- fix: enhance logging and error handling in DuckDBPresenter methods
- fix: move crash reporter initialization inside logging check
- feat: add toast messages for model status and L2 normalization support in multiple languages
- refactor: simplify fileTask method by removing unnecessary promise wrapping and adding comments
- refactor: update model handling by removing unnecessary ModelConfig references and enhancing model info structure
- fix: update company name format in crash reporter configuration
- fix: fix embedding model default settings and revert ModelConfigItem changed
- fix: cancel crash report
- fix: fix pulling model type not assignable problem
- fix: remove unnecessary files
- fix: remove unnecessary files
- fix: block option rerank model (not implemented yet)
- fix: dynamically decide whether to show model customization configuration button
- fix: remove useless i18n translations
- fix: remove useless dependencies and improve definitions
- perf: improve knowledgePresenter resource release
- perf: convert to async function for better error handling
- perf: convert to async function for better error handling
- perf: improve vector utils
- fix: fix error words
- (WIP) feat: selectively enable mcp toolsets
- perf: mark
the interrupted task as a user-cancelled task when app startup * perf: add try-catch to enhance program stability * fix: declared but never read error * fix: missing attr file_id when insert vector(s) * perf: skip duckdb vss extension installation on macOS * fix: remove bad references * perf: disable auto install duckdb vss extension 1. will cause macOS sign problem 2. will increase 40Mb for build * perf: remove langchain from package, reduce package size * fix: declared but never read error * perf: use Bipolar Quadratic Mapping algorithm to ensure that the vector confidence is between [0,1] * perf: a more appropriate scaling factor * perf: knowledge config update logic * fix: update text fixed * fix: lint * feat:Add Groq as Provider * update groq.svg * update groqProvider.ts * (WIP) perf: enhance knowledge management with chunk processing and task scheduling features * feat: remove python code run on js * (WIP) feat: add clearDirtyData method to clean up orphaned vectors and chunks * (WIP) feat: enhance DuckDBPresenter with logging and new insertVectors method; update KnowledgeStorePresenter for chunk processing and status management * feat: refactor task management in KnowledgeTaskPresenter; enhance chunk processing and status handling in KnowledgeStorePresenter * feat: add enabledMcpTools field to conversation for controlling MCP tools * feat: filter MCP tools by enabledMcpTools * refactor: update task management and chunk processing in KnowledgePresenter and KnowledgeTaskPresenter; enhance error handling and metadata management * feat: enhance DuckDBPresenter and KnowledgeStorePresenter with error handling; update task management and chunk processing * feat: enhance task management in KnowledgeTaskPresenter; improve error handling and processing flow in KnowledgeStorePresenter; update file list handling in KnowledgeFile component * feat: refactor toggle logic for MCP service and tool state * feat: enhance file handling in KnowledgeStorePresenter; improve error 
handling and metadata management in KnowledgeFile and presenter.d.ts * feat: update DuckDBPresenter and presenter.d.ts; enhance transaction management and introduce new task status summary interface * refactor: remove obsolete RAG event constants for file progress, chunk completion, and task queue status * feat: add file progress tracking and event emission for file processing updates * fix: update DuckDB dependency to version 1.3.2-alpha.25; enhance database cleanup logic in KnowledgePresenter * feat: enhance KnowledgePresenter configuration management; improve store presenter handling and update method signature * feat: add dialog handling with DialogPresenter and MessageDialog component * feat: enhance dialog handling with improved response management and new closeable option * feat: refactor dialog handling to support timeout and response management with enhanced type definitions * feat: update dialog request types for consistency and clarity in MessageDialog component * feat: enhance MessageDialog component with i18n support for descriptions and improve dialog timeout handling * feat: enhance dialog handling with improved error management and response structure * feat: improve dialog error handling and response structure in DialogPresenter * fix: e2b key not working * (WIP) perf: enhance knowledge management with chunk processing and task scheduling features * feat: implement task management features for pausing and resuming tasks in DuckDB and Knowledge presenters * feat: implement database migration and metadata management in DuckDBPresenter * fix: ensure database version is set after migration completion * update githubCopilotProvider * update Copilot Model * feat: Refactor Knowledge Presenter and related components - Updated KnowledgePresenter design document to reflect new architecture and features, including improved lifecycle management and event handling. 
- Enhanced file processing flow in KnowledgeStorePresenter to ensure immediate feedback and error handling during file reading. - Modified KnowledgeFile.vue to support additional file types and improve file status handling in the UI. - Improved configuration management for Knowledge Presenter, allowing for better integration and user experience. * use provider check if model id is not provided * fix: reorder parameters in getEmbeddings method for consistency across providers * feat: add export markdown * check copilot provider by model * update GitHubCopilotOAuth.vue * fix: remove redundant 'redetectDimensions' translations from multiple language settings * wip: better style * wip: fix worker * chore: remove unuse code * feat: add i18n * fix: format * fix: convert uploadedAt to string for consistent data handling * fix: lint * docs: add comprehensive documentation for Dialog module and its components * fix: i18n and ai review * Update src/main/events.ts Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com> * Update src/main/lib/textsplitters/index.ts Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com> * Update src/renderer/src/lib/utils.ts Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com> * refactor: improve error handling and logging in dialog and knowledge task presenters; update text sanitization and localization * fix: #623 * feat: change function name * feat: add empty data display * feat: add click outside to close sidebar functionality * style(threads): optimize the operation logic of new sessions (#633) * style(threads): optimize the operation logic of new sessions * chore: format code * chore(ci): add code lint check (#634) * chore(ci): add code lint check * chore: remove linting steps from build workflow; add linting steps to PR check workflow * fix: resolve sidebar toggle button conflict (#637) * fix: Bugfix/gemini thinking (#639) * fix: gemini 
reasoning by config * feat: support gemini thinking * fix: user define model config first * fix: format * chore: ignore md for format * doc: remove empty line * fix: ai review * perf(ChatConfig): Set the TooltipProvider component to add a delay duration of 200& update the include configuration in the tsconfig.web.json file (#640) * feat: Add thinking budget support for Gemini 2.5 series models (#643) * chore: update 0.2.7 --------- Co-authored-by: hllshiro <40970081+hllshiro@users.noreply.github.com> Co-authored-by: ysli <sqsyli@qq.com> Co-authored-by: zhangmo8 <wegi866@gmail.com> Co-authored-by: dw9 <xweimvp@gmail.com> Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com> Co-authored-by: yyhhyyyyyy <yyhhyyyyyy8@gmail.com> Co-authored-by: 阿菜 Cai <1064425721@qq.com> Co-authored-by: 阿菜 Cai <jimmyrss1102@gmail.com> Co-authored-by: flingyp <flingyp@163.com>
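The "fix: gemini reasoning by config", "feat: support gemini thinking", and "Add thinking budget support for Gemini 2.5 series models" commits above can be illustrated with a minimal sketch. The payload shape (`generationConfig.thinkingConfig` with `thinkingBudget`/`includeThoughts`, and `thought`-flagged parts in streamed responses) follows the public Gemini API, but the surrounding names (`ModelThinkingConfig`, `buildGenerationConfig`, `splitParts`) are illustrative, not DeepChat's actual code:

```typescript
// Per-model toggle for Gemini "thinking", as the PR's model config exposes.
interface ModelThinkingConfig {
  enableThinking: boolean
  thinkingBudget?: number // tokens reserved for reasoning; 0 disables it
}

// One content part of a streamed Gemini chunk; parts carrying reasoning
// are marked with `thought: true` when includeThoughts is requested.
interface GeminiPart {
  text?: string
  thought?: boolean
}

interface GenerationConfig {
  thinkingConfig: { includeThoughts?: boolean; thinkingBudget?: number }
}

// Build the generationConfig portion of a generateContent request,
// honoring the user's thinking toggle and optional budget.
function buildGenerationConfig(cfg: ModelThinkingConfig): GenerationConfig {
  if (!cfg.enableThinking) {
    // A budget of 0 asks the model to skip thinking entirely.
    return { thinkingConfig: { thinkingBudget: 0 } }
  }
  const thinkingConfig: GenerationConfig['thinkingConfig'] = { includeThoughts: true }
  if (cfg.thinkingBudget !== undefined) {
    thinkingConfig.thinkingBudget = cfg.thinkingBudget
  }
  return { thinkingConfig }
}

// Split a streamed chunk's parts into reasoning text and answer text,
// mirroring a provider that yields separate reasoning/content events.
function splitParts(parts: GeminiPart[]): { reasoning: string; content: string } {
  let reasoning = ''
  let content = ''
  for (const part of parts) {
    if (!part.text) continue
    if (part.thought) reasoning += part.text
    else content += part.text
  }
  return { reasoning, content }
}
```

In a stream loop, `splitParts` would run per chunk, emitting a reasoning event when the `thought` flag is set and a normal content event otherwise; check the current Gemini API reference before relying on exact field names.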
Adds a setting that controls whether Gemini should think, and fixes a bug in user-defined model configuration.
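The "user define model config first" fix and the cache key mechanism mentioned in the walkthrough can be sketched as follows. All names here (`resolveModelConfig`, `cacheKey`, the `ModelConfig` fields) are hypothetical stand-ins for DeepChat's internals; the point is the precedence order (user config over provider defaults) and the provider-scoped cache key:

```typescript
// Minimal model configuration shape, for illustration only.
interface ModelConfig {
  maxTokens?: number
  temperature?: number
  reasoning?: boolean
}

const cache = new Map<string, ModelConfig>()

// Scoping the key by provider prevents two providers that expose the
// same model id from clobbering each other's configuration.
function cacheKey(providerId: string, modelId: string): string {
  return `${providerId}:${modelId}`
}

// Resolve a model's effective config: user-defined values take
// precedence over provider defaults, and the merged result is cached.
function resolveModelConfig(
  providerId: string,
  modelId: string,
  providerDefaults: ModelConfig,
  userConfig?: ModelConfig
): ModelConfig {
  const key = cacheKey(providerId, modelId)
  const cached = cache.get(key)
  if (cached) return cached
  // Object spread applies later sources last, so userConfig wins.
  const merged: ModelConfig = { ...providerDefaults, ...userConfig }
  cache.set(key, merged)
  return merged
}
```

The bug class this guards against is the reverse merge order, where provider defaults silently overwrite a user's customized values.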
Summary by CodeRabbit

- New Features
- Bug Fixes
- Chores