Refactor AI model routing and add ImageViewer portal rendering #201
Conversation
The ImageViewer modal was rendering inside message blocks instead of full-screen when clicking images in already-sent messages. This was caused by `contentVisibility: auto` on message containers creating a containing block for fixed-positioned elements. Using `createPortal` to render directly to `document.body` fixes this.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
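A minimal sketch of the fix described above, assuming a React tree and `react-dom`'s `createPortal`. The component and prop names here are illustrative, not the PR's actual code; only the technique (portal to `document.body`) comes from the commit message.

```typescript
import { createPortal } from "react-dom";

// Sketch: `contentVisibility: auto` implies paint containment, which makes the
// message container a containing block for position: fixed descendants, so the
// viewer was clipped to the message block. Rendering through a portal attaches
// the overlay to document.body, where fixed positioning resolves against the
// real viewport. `ImageViewerPortal` and its props are hypothetical names.
function ImageViewerPortal({ src, onClose }: { src: string; onClose: () => void }) {
  return createPortal(
    <div style={{ position: "fixed", inset: 0 }} onClick={onClose}>
      <img src={src} alt="" />
    </div>,
    document.body, // escape the message container's containing block
  );
}
```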
…t modes

- Rename agent-fallback-model to fallback-model for use in both modes
- Switch fallback from Google Gemini to moonshotai/kimi-k2.5
- Remove provider routing (vertex/ai-studio) from providers config
- Simplify buildProviderOptions by removing Google-specific options
- Add fallback verification logging with assistant message IDs
- Track fallback duration, success status, and part types

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
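A hedged sketch of what the simplified providers config might look like after this commit: every alias resolves to a single OpenRouter model ID, and one shared `fallback-model` entry (renamed from `agent-fallback-model`) serves both modes. Only `moonshotai/kimi-k2.5` and the rename are stated in the commit; the Gemini ID, object shape, and helper name are placeholders.

```typescript
// Hypothetical routing table after removing vertex/ai-studio provider routing.
const MODEL_ROUTES: Record<string, string> = {
  "chat-model": "google/gemini-2.5-pro", // placeholder primary route via OpenRouter
  "fallback-model": "moonshotai/kimi-k2.5", // shared fallback named in the commit
};

// Resolve an alias to its OpenRouter model ID, failing loudly on stale aliases
// (e.g. the removed "agent-fallback-model").
function resolveModel(alias: string): string {
  const id = MODEL_ROUTES[alias];
  if (id === undefined) throw new Error(`Unknown model alias: ${alias}`);
  return id;
}
```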
📝 Walkthrough

This PR consolidates AI model provider routing by removing provider-level constraints and standardizing all model mappings to OpenRouter with Google's Gemini backend. A new fallback model (Moonshot's Kimi K2.5) is introduced, and the direct gateway and Google SDK dependencies are removed. The chat handler gains expanded streaming lifecycle hooks for analytics and a fallback retry mechanism for incomplete streams. The ImageViewer modal is rendered via a portal to escape CSS containment.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant ChatHandler
    participant Stream
    participant Providers

    Client->>ChatHandler: initiate chat request
    ChatHandler->>Providers: prepare with model config
    ChatHandler->>Stream: start streaming
    loop Stream Processing
        Stream->>Stream: receive chunk
        Stream-->>ChatHandler: onChunk event (tool calls)
        ChatHandler->>ChatHandler: log analytics
    end
    Stream-->>ChatHandler: onFinish event
    ChatHandler->>ChatHandler: record usage & cost
    alt Incomplete Stream (step-start only)
        ChatHandler->>ChatHandler: detect incomplete
        ChatHandler->>Providers: retry with fallback-model
        Providers-->>Stream: retry stream
        Stream-->>ChatHandler: onFinish (fallback response)
        ChatHandler->>ChatHandler: save new messages
        ChatHandler->>ChatHandler: log fallback metadata
    else Complete Stream
        ChatHandler->>ChatHandler: finalize response
    end
    ChatHandler-->>Client: emit completion event
```
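The incomplete-stream branch of the diagram can be sketched as a small pure helper: a stream whose parts are all `step-start` (a part type named in the diagram) is treated as incomplete and retried once with the fallback model. Assumed names and signatures are illustrative, not the handler's real API.

```typescript
type StreamPart = { type: string };

// A stream that emitted only "step-start" parts never produced content.
function isIncompleteStream(parts: StreamPart[]): boolean {
  return parts.length > 0 && parts.every((p) => p.type === "step-start");
}

// Run the stream once with the primary model; if it comes back incomplete,
// retry once with "fallback-model" (moonshotai/kimi-k2.5 in this PR) and
// report whether the fallback path was taken, mirroring the logged metadata.
async function streamWithFallback(
  run: (modelAlias: string) => Promise<StreamPart[]>,
): Promise<{ parts: StreamPart[]; usedFallback: boolean }> {
  const parts = await run("chat-model");
  if (!isIncompleteStream(parts)) return { parts, usedFallback: false };
  const retried = await run("fallback-model");
  return { parts: retried, usedFallback: true };
}
```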
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 3 passed
Summary by CodeRabbit

Release Notes

- New Features: shared fallback model (moonshotai/kimi-k2.5) with an automatic retry for incomplete streams.
- Bug Fixes: images in already-sent messages now open the ImageViewer full-screen via portal rendering.
- Refactor: AI model routing consolidated onto OpenRouter; provider-specific routing and the Google SDK dependency removed.