feat: add Vercel AI Gateway provider support #743
Walkthrough

Adds a new “Vercel AI Gateway” provider: registers it in default providers, implements VercelAIGatewayProvider extending the OpenAI-compatible provider, and wires factory logic to instantiate it based on apiType 'vercel-ai-gateway'. No other providers are modified.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor User
    participant Presenter as LLMProviderPresenter
    participant Provider as VercelAIGatewayProvider
    participant Base as OpenAICompatibleProvider
    participant API as Vercel AI Gateway API
    User->>Presenter: request (completions/summaries/generateText)
    Presenter->>Provider: create & invoke method
    Provider->>Base: openAICompletion(messages, modelId, params)
    Base->>API: HTTP request (baseUrl https://ai-gateway.vercel.sh/v1)
    API-->>Base: response
    Base-->>Provider: LLMResponse
    Provider-->>Presenter: LLMResponse
    Presenter-->>User: LLMResponse
```
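In code terms, the "create & invoke" step in the diagram reduces to a switch on apiType. The snippet below is only a sketch mirroring the factory case discussed later in this review; the real factory lives in llmProviderPresenter/index.ts and covers many more apiType values, and the import paths here are assumptions based on the file list below (the project may use path aliases instead).

```typescript
// Sketch of the dispatch step implied by the diagram; not the actual factory.
import { VercelAIGatewayProvider } from './providers/vercelAIGatewayProvider'
import type { LLM_PROVIDER } from '../../../shared/presenter'
import type { ConfigPresenter } from '../configPresenter'

function createProviderSketch(provider: LLM_PROVIDER, configPresenter: ConfigPresenter) {
  switch (provider.apiType) {
    case 'vercel-ai-gateway':
      // Instantiates the OpenAI-compatible subclass wired in this PR.
      return new VercelAIGatewayProvider(provider, configPresenter)
    default:
      throw new Error(`Unsupported apiType: ${provider.apiType}`)
  }
}
```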
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~15 minutes
Actionable comments posted: 0
🧹 Nitpick comments (4)
src/main/presenter/llmProviderPresenter/index.ts (1)
213-214: Factory wiring for 'vercel-ai-gateway' — LGTM; optional simplification

The new case cleanly instantiates the provider and matches the config's apiType.

Optional: since Vercel AI Gateway is OpenAI-compatible and your subclass doesn't add behavior, you could instantiate OpenAICompatibleProvider directly to reduce surface area. If you prefer keeping the subclass for future headers/behavior, the current approach is fine.

Optional diff (if you choose to simplify):

```diff
-      case 'vercel-ai-gateway':
-        return new VercelAIGatewayProvider(provider, this.configPresenter)
+      case 'vercel-ai-gateway':
+        return new OpenAICompatibleProvider(provider, this.configPresenter)
```

src/main/presenter/llmProviderPresenter/providers/vercelAIGatewayProvider.ts (3)
10-17: Redundant override: completions delegates 1:1 to base

This method is identical to the base implementation. Unless you plan to add custom headers/params soon, consider removing it and inheriting the base behavior to reduce duplication.

```diff
-  async completions(
-    messages: ChatMessage[],
-    modelId: string,
-    temperature?: number,
-    maxTokens?: number
-  ): Promise<LLMResponse> {
-    return this.openAICompletion(messages, modelId, temperature, maxTokens)
-  }
```
19-36: Summaries prompt is provider-specific and Chinese; consider centralizing/localizing
- If other providers share a common summary prompt, consider centralizing it (e.g., in base or a helper) so behavior is consistent across providers.
- Consider sourcing the language from app settings (ConfigPresenter locale) to match user language.
If you prefer to keep this provider-specific behavior, no functional issue — this is a UX consistency suggestion.
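If centralizing is preferred, one possible shape is sketched below. The helper name, prompt strings, and locale keys are hypothetical, not existing project code; the idea is simply to source the instruction from the user's configured locale so every provider shares the same summary behavior.

```typescript
// Hypothetical helper, not part of the current codebase.
// Builds a summary instruction in the user's configured language instead of
// hardcoding a Chinese prompt inside one provider.
const SUMMARY_PROMPTS: Record<string, string> = {
  'zh-CN': '请用简洁的语言总结以下对话的要点:',
  en: 'Summarize the key points of the following conversation concisely:'
}

export function buildSummaryPrompt(locale: string, conversationText: string): string {
  // Fall back to English when the locale has no dedicated prompt.
  const instruction = SUMMARY_PROMPTS[locale] ?? SUMMARY_PROMPTS.en
  return `${instruction}\n\n${conversationText}`
}
```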
38-55: Redundant override: generateText delegates 1:1 to base

As with completions, this mirrors base functionality. You can remove it and rely on inheritance to keep the provider lean.

```diff
-  async generateText(
-    prompt: string,
-    modelId: string,
-    temperature?: number,
-    maxTokens?: number
-  ): Promise<LLMResponse> {
-    return this.openAICompletion(
-      [
-        {
-          role: 'user',
-          content: prompt
-        }
-      ],
-      modelId,
-      temperature,
-      maxTokens
-    )
-  }
```
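Taken together with the earlier completions suggestion, the provider could shrink to roughly the sketch below. Import paths are assumptions based on the file list in this review (the project may use path aliases), and the summaries override is only referenced, not reproduced.

```typescript
// Sketch of the slimmed-down provider if both redundant overrides are dropped.
import { OpenAICompatibleProvider } from './openAICompatibleProvider'
import type { LLM_PROVIDER } from '../../../../shared/presenter'
import type { ConfigPresenter } from '../../configPresenter'

export class VercelAIGatewayProvider extends OpenAICompatibleProvider {
  constructor(provider: LLM_PROVIDER, configPresenter: ConfigPresenter) {
    super(provider, configPresenter)
  }

  // The summaries override (lines 19-36 of the current file) would remain as the
  // only provider-specific behavior; completions and generateText are inherited
  // unchanged from OpenAICompatibleProvider.
}
```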
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (3)
- src/main/presenter/configPresenter/providers.ts (1 hunks)
- src/main/presenter/llmProviderPresenter/index.ts (2 hunks)
- src/main/presenter/llmProviderPresenter/providers/vercelAIGatewayProvider.ts (1 hunks)
🧰 Additional context used
📓 Path-based instructions (11)
**/*.{ts,tsx,js,jsx,vue}
📄 CodeRabbit Inference Engine (CLAUDE.md)
Use English for logs and comments
Files:
- src/main/presenter/configPresenter/providers.ts
- src/main/presenter/llmProviderPresenter/providers/vercelAIGatewayProvider.ts
- src/main/presenter/llmProviderPresenter/index.ts
**/*.{ts,tsx}
📄 CodeRabbit Inference Engine (CLAUDE.md)
Strict type checking enabled for TypeScript
**/*.{ts,tsx}: Always use try-catch to handle potential errors
Provide meaningful error messages
Log detailed error information
Degrade gracefully
Logs should include timestamp, log level, error code, error description, stack trace (where applicable), and relevant context
Log levels should include ERROR, WARN, INFO, DEBUG
Do not swallow errors
Provide user-friendly error messages
Implement an error retry mechanism
Avoid logging sensitive information
Use structured logging
Set appropriate log levels
Files:
- src/main/presenter/configPresenter/providers.ts
- src/main/presenter/llmProviderPresenter/providers/vercelAIGatewayProvider.ts
- src/main/presenter/llmProviderPresenter/index.ts
src/main/**/*.ts
📄 CodeRabbit Inference Engine (CLAUDE.md)
Main to Renderer: Use EventBus to broadcast events via mainWindow.webContents.send()
Use Electron's built-in APIs for file system and native dialogs
Files:
- src/main/presenter/configPresenter/providers.ts
- src/main/presenter/llmProviderPresenter/providers/vercelAIGatewayProvider.ts
- src/main/presenter/llmProviderPresenter/index.ts
src/main/presenter/**/*.ts
📄 CodeRabbit Inference Engine (CLAUDE.md)
One presenter per functional domain
Files:
- src/main/presenter/configPresenter/providers.ts
- src/main/presenter/llmProviderPresenter/providers/vercelAIGatewayProvider.ts
- src/main/presenter/llmProviderPresenter/index.ts
src/main/presenter/configPresenter/**/*.ts
📄 CodeRabbit Inference Engine (CLAUDE.md)
Centralize configuration in configPresenter/
Files:
src/main/presenter/configPresenter/providers.ts
src/main/presenter/configPresenter/providers.ts
📄 CodeRabbit Inference Engine (CLAUDE.md)
Add provider configuration in configPresenter/providers.ts when adding a new LLM provider
Files:
src/main/presenter/configPresenter/providers.ts
**/*.{js,jsx,ts,tsx}
📄 CodeRabbit Inference Engine (.cursor/rules/development-setup.mdc)
**/*.{js,jsx,ts,tsx}: Use OxLint for code linting
Write logs and comments in English
Files:
- src/main/presenter/configPresenter/providers.ts
- src/main/presenter/llmProviderPresenter/providers/vercelAIGatewayProvider.ts
- src/main/presenter/llmProviderPresenter/index.ts
src/{main,renderer}/**/*.ts
📄 CodeRabbit Inference Engine (.cursor/rules/electron-best-practices.mdc)
src/{main,renderer}/**/*.ts: Use context isolation for improved security
Implement proper inter-process communication (IPC) patterns
Optimize application startup time with lazy loading
Implement proper error handling and logging for debugging
Files:
- src/main/presenter/configPresenter/providers.ts
- src/main/presenter/llmProviderPresenter/providers/vercelAIGatewayProvider.ts
- src/main/presenter/llmProviderPresenter/index.ts
src/main/**/*.{ts,js,tsx,jsx}
📄 CodeRabbit Inference Engine (.cursor/rules/project-structure.mdc)
Main process code goes in src/main
Files:
- src/main/presenter/configPresenter/providers.ts
- src/main/presenter/llmProviderPresenter/providers/vercelAIGatewayProvider.ts
- src/main/presenter/llmProviderPresenter/index.ts
src/main/presenter/llmProviderPresenter/providers/*.ts
📄 CodeRabbit Inference Engine (CLAUDE.md)
src/main/presenter/llmProviderPresenter/providers/*.ts: Create provider file in src/main/presenter/llmProviderPresenter/providers/ when adding a new LLM provider
Implement coreStream method following standardized event interface in LLM provider files
src/main/presenter/llmProviderPresenter/providers/*.ts: Each file in src/main/presenter/llmProviderPresenter/providers/*.ts should handle interaction with a specific LLM API, including request/response formatting, tool definition conversion, native/non-native tool call management, and standardizing output streams to a common event format.
Provider implementations must use a coreStream method that yields standardized stream events to decouple the main loop from provider-specific details.
The coreStream method in each Provider must perform a single streaming API request per conversation round and must not contain multi-round tool call loop logic.
Provider files should implement helper methods such as formatMessages, convertToProviderTools, parseFunctionCalls, and prepareFunctionCallPrompt as needed for provider-specific logic.
All provider implementations must parse provider-specific data chunks and yield standardized events for text, reasoning, tool calls, usage, errors, stop reasons, and image data.
When a provider does not support native function calling, it must prepare messages using prompt wrapping (e.g., prepareFunctionCallPrompt) before making the API call.
When a provider supports native function calling, MCP tools must be converted to the provider's format (e.g., using convertToProviderTools) and included in the API request.
Provider implementations should aggregate and yield usage events as part of the standardized stream.
Provider implementations should yield image data events in the standardized format when applicable.
Provider implementations should yield reasoning events in the standardized format when applicable.
Provider implementations should yield tool call events (`tool_call_star...
Files:
src/main/presenter/llmProviderPresenter/providers/vercelAIGatewayProvider.ts
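To make the coreStream pattern these guidelines describe concrete, here is a minimal sketch of an async generator that turns provider chunks into standardized events. The event shapes and chunk format are hypothetical stand-ins, not the project's actual standardized interface.

```typescript
// Hypothetical standardized stream events; the real project defines its own shapes.
type StreamEvent =
  | { type: 'text'; content: string }
  | { type: 'usage'; promptTokens: number; completionTokens: number }
  | { type: 'stop'; reason: string }

// Sketch of a coreStream-style generator: one streaming request per round,
// parsing provider-specific chunks into standardized events for the main loop.
async function* coreStreamSketch(
  chunks: AsyncIterable<{ delta?: string; usage?: { prompt: number; completion: number } }>
): AsyncGenerator<StreamEvent> {
  for await (const chunk of chunks) {
    if (chunk.delta) {
      yield { type: 'text', content: chunk.delta }
    }
    if (chunk.usage) {
      yield {
        type: 'usage',
        promptTokens: chunk.usage.prompt,
        completionTokens: chunk.usage.completion
      }
    }
  }
  // Signal the end of this round; no multi-round tool loop logic lives here.
  yield { type: 'stop', reason: 'complete' }
}
```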
src/main/presenter/llmProviderPresenter/index.ts
📄 CodeRabbit Inference Engine (.cursor/rules/llm-agent-loop.mdc)
src/main/presenter/llmProviderPresenter/index.ts: src/main/presenter/llmProviderPresenter/index.ts should manage the overall Agent loop, conversation history, tool execution via McpPresenter, and frontend communication via eventBus.
The main Agent loop in llmProviderPresenter/index.ts should handle multi-round LLM calls and tool usage, maintaining conversation state and controlling the loop with needContinueConversation and toolCallCount.
The main Agent loop should send standardized STREAM_EVENTS (RESPONSE, END, ERROR) to the frontend via eventBus.
The main Agent loop should buffer text content, handle tool call events, format tool results for the next LLM call, and manage conversation continuation logic.
Files:
src/main/presenter/llmProviderPresenter/index.ts
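As a rough illustration of the loop shape these rules describe, the sketch below shows the control flow with needContinueConversation and toolCallCount. All names are placeholder stand-ins, not the project's actual API.

```typescript
// Placeholder types standing in for the project's real interfaces.
type ToolCall = { name: string; args: unknown }
type RoundResult = { text: string; toolCalls: ToolCall[] }

async function agentLoopSketch(
  runRound: (history: string[]) => Promise<RoundResult>,
  executeTool: (call: ToolCall) => Promise<string>,
  maxToolCalls = 10
): Promise<string[]> {
  const history: string[] = []
  let needContinueConversation = true
  let toolCallCount = 0

  while (needContinueConversation && toolCallCount < maxToolCalls) {
    const round = await runRound(history)
    history.push(round.text)

    if (round.toolCalls.length === 0) {
      // No tool usage requested: this conversation round is finished.
      needContinueConversation = false
    } else {
      // Execute tools and feed results back for the next LLM call.
      for (const call of round.toolCalls) {
        toolCallCount++
        const result = await executeTool(call)
        history.push(`tool(${call.name}): ${result}`)
      }
    }
  }
  return history
}
```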
🧠 Learnings (10)
📚 Learning: 2025-07-21T01:45:33.790Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: CLAUDE.md:0-0
Timestamp: 2025-07-21T01:45:33.790Z
Learning: Applies to src/main/presenter/configPresenter/providers.ts : Add provider configuration in configPresenter/providers.ts when adding a new LLM provider
Applied to files:
- src/main/presenter/configPresenter/providers.ts
- src/main/presenter/llmProviderPresenter/providers/vercelAIGatewayProvider.ts
- src/main/presenter/llmProviderPresenter/index.ts
📚 Learning: 2025-07-21T01:45:33.790Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: CLAUDE.md:0-0
Timestamp: 2025-07-21T01:45:33.790Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Create provider file in src/main/presenter/llmProviderPresenter/providers/ when adding a new LLM provider
Applied to files:
- src/main/presenter/configPresenter/providers.ts
- src/main/presenter/llmProviderPresenter/providers/vercelAIGatewayProvider.ts
- src/main/presenter/llmProviderPresenter/index.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider files should implement helper methods such as `formatMessages`, `convertToProviderTools`, `parseFunctionCalls`, and `prepareFunctionCallPrompt` as needed for provider-specific logic.
Applied to files:
- src/main/presenter/llmProviderPresenter/providers/vercelAIGatewayProvider.ts
- src/main/presenter/llmProviderPresenter/index.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Each file in `src/main/presenter/llmProviderPresenter/providers/*.ts` should handle interaction with a specific LLM API, including request/response formatting, tool definition conversion, native/non-native tool call management, and standardizing output streams to a common event format.
Applied to files:
- src/main/presenter/llmProviderPresenter/providers/vercelAIGatewayProvider.ts
- src/main/presenter/llmProviderPresenter/index.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations should yield events asynchronously using the async generator pattern.
Applied to files:
- src/main/presenter/llmProviderPresenter/providers/vercelAIGatewayProvider.ts
- src/main/presenter/llmProviderPresenter/index.ts
📚 Learning: 2025-07-21T01:45:33.790Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: CLAUDE.md:0-0
Timestamp: 2025-07-21T01:45:33.790Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Implement coreStream method following standardized event interface in LLM provider files
Applied to files:
src/main/presenter/llmProviderPresenter/index.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations must use a `coreStream` method that yields standardized stream events to decouple the main loop from provider-specific details.
Applied to files:
src/main/presenter/llmProviderPresenter/index.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : `src/main/presenter/llmProviderPresenter/index.ts` should manage the overall Agent loop, conversation history, tool execution via `McpPresenter`, and frontend communication via `eventBus`.
Applied to files:
src/main/presenter/llmProviderPresenter/index.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : When a provider supports native function calling, MCP tools must be converted to the provider's format (e.g., using `convertToProviderTools`) and included in the API request.
Applied to files:
src/main/presenter/llmProviderPresenter/index.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : All provider implementations must parse provider-specific data chunks and yield standardized events for text, reasoning, tool calls, usage, errors, stop reasons, and image data.
Applied to files:
src/main/presenter/llmProviderPresenter/index.ts
🧬 Code Graph Analysis (2)
src/main/presenter/llmProviderPresenter/providers/vercelAIGatewayProvider.ts (3)
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts (1)
- OpenAICompatibleProvider (68-1356)
src/shared/presenter.d.ts (3)
- LLM_PROVIDER (494-514)
- ChatMessage (1252-1264)
- LLMResponse (824-841)
src/main/presenter/configPresenter/index.ts (1)
- ConfigPresenter (76-1197)
src/main/presenter/llmProviderPresenter/index.ts (1)
src/main/presenter/llmProviderPresenter/providers/vercelAIGatewayProvider.ts (1)
- VercelAIGatewayProvider (5-56)
🔇 Additional comments (5)
src/main/presenter/configPresenter/providers.ts (2)
186-200: Vercel AI Gateway provider entry added — LGTM

The new DEFAULT_PROVIDERS entry is consistent with existing providers (id, apiType, websites metadata). Naming and apiType align with the factory wiring.
186-200: Verify the default baseUrl/domain for Vercel AI Gateway

Vercel AI Gateway typically issues a project- or team-specific gateway domain. Hardcoding https://ai-gateway.vercel.sh/v1 as a default base URL may not work for users out-of-the-box. Consider leaving baseUrl empty by default or using a placeholder example to make it clear this must be replaced.

Possible adjustment:
- baseUrl: '' (force users to supply their project gateway)
- websites.defaultBaseUrl: a placeholder like https://{your-gateway-domain}/v1

I can update this if you want to standardize it with other providers that use instance-specific domains.
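For illustration, a sketch of how the entry could look with an empty default and a placeholder documentation URL. The field names mirror the ones discussed above; the exact shape of DEFAULT_PROVIDERS entries is assumed rather than copied from providers.ts.

```typescript
// Sketch only: mirrors the fields discussed in this comment, not the literal entry.
const vercelAIGatewayEntry = {
  id: 'vercel-ai-gateway',
  apiType: 'vercel-ai-gateway',
  // Leaving this empty (or using an obvious placeholder) forces users to paste
  // their own gateway URL instead of silently failing against a default.
  baseUrl: '',
  websites: {
    defaultBaseUrl: 'https://{your-gateway-domain}/v1'
  }
}
```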
src/main/presenter/llmProviderPresenter/index.ts (1)
44-44: Import of VercelAIGatewayProvider — LGTM

Import path and named export match the new provider file. No issues.
src/main/presenter/llmProviderPresenter/providers/vercelAIGatewayProvider.ts (2)
5-8: Provider class scaffolding — LGTM

Extending OpenAICompatibleProvider is the right call for an OpenAI-compatible gateway. Constructor chaining is correct.
1-56: Confirm streaming/tool-calls work end-to-end via the gateway

Since this provider inherits coreStream from OpenAICompatibleProvider, it should work unchanged. Vercel AI Gateway usually forwards SSE and tool_call payloads transparently, but gateways can add subtle behavior. Please smoke test:
- streaming text
- native function calling tool_calls
- usage aggregation
I can help write a quick script to exercise these flows if needed.
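If a quick manual check helps, a sketch like the one below would exercise streaming through the gateway's OpenAI-compatible endpoint. It assumes an API key in an AI_GATEWAY_API_KEY environment variable and uses an illustrative model id; adjust both to your setup.

```typescript
// Minimal streaming smoke test against the gateway's OpenAI-compatible endpoint.
// Run with: AI_GATEWAY_API_KEY=... npx tsx smokeTest.ts
async function smokeTest(): Promise<void> {
  const res = await fetch('https://ai-gateway.vercel.sh/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.AI_GATEWAY_API_KEY}`
    },
    body: JSON.stringify({
      model: 'openai/gpt-4o-mini', // illustrative model id
      stream: true,
      messages: [{ role: 'user', content: 'Say hello in one short sentence.' }]
    })
  })

  if (!res.ok || !res.body) {
    throw new Error(`Gateway request failed: ${res.status} ${res.statusText}`)
  }

  // Print raw SSE chunks; enough to confirm streaming works end-to-end.
  const reader = res.body.getReader()
  const decoder = new TextDecoder()
  while (true) {
    const { done, value } = await reader.read()
    if (done) break
    process.stdout.write(decoder.decode(value))
  }
}

smokeTest().catch((err) => {
  console.error('Smoke test failed:', err)
  process.exit(1)
})
```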