Daily branch 2025 11 20 #109
Walkthrough

Replaces multiple AI provider mappings with new Gemini/Grok endpoints, adds a subscription-aware getMaxStepsForUser to control iteration limits, refines vision model selection between ask/agent modes, makes the resume section always appended at the end of system prompts, and guards regenerate/retry flows to avoid deleting or sending empty assistant messages.
Sequence Diagram

```mermaid
sequenceDiagram
    participant UI as Client
    participant Handler as chat-handler
    participant Processor as chat-processor
    participant Providers as ai/providers
    participant Prompt as system-prompt
    UI->>Handler: start/process chat (mode, subscription, messages...)
    Handler->>Processor: processChatMessages(mode, subscription, messages)
    Processor->>Processor: getMaxStepsForUser(mode, subscription)
    note right of Processor: Agent → 20<br/>Ask Free → 5<br/>Ask Pro/Team → 10<br/>Ask Ultra → 15
    Processor->>Processor: selectModel(mode, messageType)
    alt Ask + PDF
        Processor->>Providers: use ask-vision-model-for-pdfs
    else Ask + Media
        Processor->>Providers: use ask-vision-model
    else Agent + Media
        Processor->>Providers: use agent-vision-model
    end
    Processor->>Prompt: buildSystemPrompt(finishReason)
    note right of Prompt: Resume section now<br/>always appended at end
    Prompt-->>Processor: system prompt
    Processor-->>Handler: respond (enforcing max steps)
    Handler->>UI: deliver response
    rect rgba(200,230,201,0.3)
        note over Handler,Processor: Regenerate/Retry paths check last assistant message content<br/>and avoid deleting/sending empty assistant messages
    end
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Possibly related PRs
Pre-merge checks and finishing touches

❌ Failed checks (1 inconclusive)
✅ Passed checks (2 passed)
Actionable comments posted: 0
🧹 Nitpick comments (3)
lib/chat/chat-processor.ts (2)
6-31: Step-limit helper is consistent and centralizes the policy
getMaxStepsForUser cleanly captures the step policy:
- Agent: always 20 steps.
- Ask: 5 (free), 10 (pro/team), 15 (ultra).

This matches the usage in chat-handler.ts and makes future tuning easier.

One minor nit: the JSDoc line "Agent mode: Always 20 steps (for all paid users)" is slightly misleading, since the helper itself doesn't enforce "paid" (the free/agent block happens at a higher layer). You could drop the parenthetical to keep the comment strictly accurate.
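As a rough sketch, the policy this helper encodes could look like the following. The ChatMode/SubscriptionTier shapes are assumptions based on the types/chat.ts references in this review; the real implementation may differ.

```typescript
// Sketch of the step-limit policy described above (not the actual source).
// Tier names and the numeric limits come from this review's summary.
type ChatMode = "ask" | "agent";
type SubscriptionTier = "free" | "pro" | "team" | "ultra";

function getMaxStepsForUser(
  mode: ChatMode,
  subscription: SubscriptionTier,
): number {
  // Agent mode: always 20 steps (the free-tier agent gate lives in chat-handler)
  if (mode === "agent") return 20;
  // Ask mode: step budget scales with the subscription tier
  switch (subscription) {
    case "ultra":
      return 15;
    case "pro":
    case "team":
      return 10;
    default:
      return 5;
  }
}
```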
128-131: Team subscriptions and "paid" moderation behavior may be inconsistent

Here getModerationResult is called with subscription === "pro" || subscription === "ultra", so "team" subscribers are treated like "free" for the paid/uncensored flag, while elsewhere (e.g., rate limiting in chat-handler.ts) any non-free tier is treated as paid.

If "team" is meant to be a paid tier with the same moderation behavior as pro/ultra, consider updating this condition to include it; if not, a short comment explaining why "team" is excluded would avoid confusion.
lib/ai/providers.ts (1)
9-20: Provider mappings are consistent with model selection; consider typing the keys

The updated baseProviders entries cover all model names returned by selectModel ("ask-model", "ask-model-free", "ask-vision-model", "ask-vision-model-for-pdfs", "agent-model", "agent-vision-model") plus the existing title/summarization models, so there's no obvious risk of missing provider mappings.

To guard against future string mismatches between selectModel and baseProviders, you might eventually introduce a shared string union/type for model ids (or derive the return type of selectModel from keyof typeof baseProviders) so TypeScript enforces that they stay in sync.
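A minimal sketch of that suggestion, using hypothetical placeholder values (the real baseProviders maps to provider instances, and the actual entry list may differ):

```typescript
// Deriving a ModelId union from the provider map keeps selectModel and
// baseProviders in sync at compile time. Values here are placeholders.
const baseProviders = {
  "ask-model": "gemini-endpoint",
  "ask-model-free": "gemini-endpoint",
  "ask-vision-model": "gemini-endpoint",
  "ask-vision-model-for-pdfs": "gemini-endpoint",
  "agent-model": "grok-endpoint",
  "agent-vision-model": "grok-endpoint",
} as const;

// Any string not present as a key of baseProviders is a type error here
type ModelId = keyof typeof baseProviders;

// A selectModel typed this way cannot return an unmapped model id
function selectModel(mode: "ask" | "agent"): ModelId {
  return mode === "ask" ? "ask-model" : "agent-model";
}
```

With this shape, renaming a key in baseProviders without updating selectModel fails at compile time rather than at request time.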
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- lib/ai/providers.ts (1 hunks)
- lib/api/chat-handler.ts (2 hunks)
- lib/chat/chat-processor.ts (2 hunks)
- lib/system-prompt.ts (1 hunks)
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-09-06T13:31:08.519Z
Learnt from: RostyslavManko
Repo: hackerai-tech/hackerai PR: 21
File: app/api/chat/route.ts:121-128
Timestamp: 2025-09-06T13:31:08.519Z
Learning: The vision model (NEXT_PUBLIC_VISION_MODEL) used for processing media files also supports tool calling, so overriding to vision-model in Agent mode with media files does not break tool functionality.
Applied to files:
lib/chat/chat-processor.ts
🧬 Code graph analysis (2)
lib/chat/chat-processor.ts (1)
types/chat.ts (2)
ChatMode (6-6), SubscriptionTier (8-8)
lib/api/chat-handler.ts (1)
lib/chat/chat-processor.ts (1)
getMaxStepsForUser (11-31)
🔇 Additional comments (3)
lib/api/chat-handler.ts (1)
55-55: Dynamic step limit wiring is correct; double-check intended tier thresholds

The new getMaxStepsForUser import and its use in stopWhen: stepCountIs(getMaxStepsForUser(mode, subscription)) correctly centralize step limits and keep agent mode gated for non-free users via the earlier guard.

Note that compared to the previous mode === "ask" ? 5 : 20 behavior, Ask mode now gives:
- free: 5 steps
- pro/team: 10 steps
- ultra: 15 steps
If that increase for paid tiers is intentional, this looks good and keeps the logic in one place.
Also applies to: 351-351
lib/system-prompt.ts (1)
294-295: Resume context placement at end of prompt looks correct

Always appending getResumeSection(finishReason) and then filtering falsy entries ensures:
- Resume instructions are present when finishReason is "tool-calls" or "length".
- No extra noise when there is no resume context.
- The resume section consistently comes last, after personality and memory, which matches the described intent.
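A sketch of that assembly order, with stand-in section strings (the real builders in lib/system-prompt.ts are more involved, and the resume wording here is invented):

```typescript
// Sketch: append the resume section last, then filter falsy entries so an
// empty resume string adds no noise to the joined prompt.
type FinishReason = "stop" | "tool-calls" | "length";

const getResumeSection = (reason?: FinishReason): string =>
  reason === "tool-calls" || reason === "length"
    ? "Resume the interrupted response without repeating completed work."
    : "";

function buildSystemPrompt(finishReason?: FinishReason): string {
  const sections = [
    "Base instructions",
    "Personality",
    "Memory",
    // Always appended; filter(Boolean) drops it when empty
    getResumeSection(finishReason),
  ];
  return sections.filter(Boolean).join("\n\n");
}
```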
lib/chat/chat-processor.ts (1)
45-63: Model selection for media/PDFs aligns with new model keys

The updated selectModel logic:
- prefers "ask-vision-model-for-pdfs" for PDFs in ask mode,
- uses "ask-vision-model" for other media in ask mode,
- uses "agent-vision-model" for media in agent mode,
- otherwise falls back to "ask-model-free", "ask-model", or "agent-model" based on mode + subscription,

is coherent and matches the new model keys defined in lib/ai/providers.ts. The containsPdfFiles && mode === "ask" check being first ensures PDF-specific handling wins when both PDFs and other media are present.

Just ensure processMessageFiles sets containsMediaFiles/containsPdfFiles in a way that matches this intent (e.g., PDF-only cases in agent mode still route to the vision model if that's desired).
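The branch ordering described above can be sketched as follows. The signature is simplified and assumed; the real selectModel derives these flags from processMessageFiles and may take other inputs.

```typescript
// Sketch of the selection order: the PDF check runs first, so PDF handling
// wins when a message contains both PDFs and other media in ask mode.
function selectModel(
  mode: "ask" | "agent",
  subscription: "free" | "pro" | "team" | "ultra",
  containsMediaFiles: boolean,
  containsPdfFiles: boolean,
): string {
  if (containsPdfFiles && mode === "ask") return "ask-vision-model-for-pdfs";
  if (containsMediaFiles && mode === "ask") return "ask-vision-model";
  if (containsMediaFiles && mode === "agent") return "agent-vision-model";
  if (mode === "agent") return "agent-model";
  return subscription === "free" ? "ask-model-free" : "ask-model";
}
```

Note that under this ordering a PDF-only message in agent mode (media flag false) falls through to "agent-model", which is exactly the caveat flagged above.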
- Check if last assistant message has content before deleting from DB
- Only delete messages that were actually saved (have parts)
- Filter out empty assistant messages in temporary chat retry
- Fixes issue where clicking regenerate after an error would delete the previous valid message
Actionable comments posted: 2
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
app/hooks/useChatHandlers.ts (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
app/hooks/useChatHandlers.ts (1)
convex/messages.ts (1)
deleteLastAssistantMessage (443-551)
```diff
+// Check if the last assistant message has actual content
+// If it's empty or has no parts, it was never saved (error occurred)
+// In this case, we shouldn't delete anything from the database
+const hasContent = lastAssistant?.parts && lastAssistant.parts.length > 0;
+
 if (!temporaryChatsEnabled) {
-  // Delete last assistant message and update todos in a single transaction
-  await deleteLastAssistantMessage({ chatId, todos: cleanedTodos });
+  // Only delete if the last assistant message has content
+  // This prevents deleting previous valid messages when an error occurred
+  if (hasContent) {
+    await deleteLastAssistantMessage({ chatId, todos: cleanedTodos });
+  }
```
Potential bug: Empty assistant messages may still be saved to the database.
The guard assumes that if hasContent is false, the message was never saved to the database. However, handleStop (lines 266-276) calls saveAssistantMessage without checking whether parts is empty. This means an empty assistant message can be saved if an error occurs immediately after the stream starts.
When regenerating, if the empty message exists in the database but hasContent is false in the UI state, the deletion will be skipped. The backend will then fetch the empty assistant message from the database, which could cause incorrect regeneration context.
Consider one of these approaches:
Option 1: Always attempt deletion (simplest)
```diff
- // Check if the last assistant message has actual content
- // If it's empty or has no parts, it was never saved (error occurred)
- // In this case, we shouldn't delete anything from the database
- const hasContent = lastAssistant?.parts && lastAssistant.parts.length > 0;
-
  if (!temporaryChatsEnabled) {
-   // Only delete if the last assistant message has content
-   // This prevents deleting previous valid messages when an error occurred
-   if (hasContent) {
-     await deleteLastAssistantMessage({ chatId, todos: cleanedTodos });
-   }
+   // Always delete the last assistant message if it exists
+   // The mutation safely handles the case where it doesn't exist in DB
+   if (lastAssistant) {
+     await deleteLastAssistantMessage({ chatId, todos: cleanedTodos });
+   }
```

Option 2: Prevent saving empty messages in handleStop
Modify handleStop to check lastMessage.parts?.length > 0 before calling saveAssistantMessage.
Option 3: Delete when lastAssistant exists (regardless of content)
```diff
  if (!temporaryChatsEnabled) {
-   // Only delete if the last assistant message has content
-   // This prevents deleting previous valid messages when an error occurred
-   if (hasContent) {
+   // Delete the last assistant message if one exists in the UI state
+   // This includes empty messages that may have been saved during errors
+   if (lastAssistant) {
      await deleteLastAssistantMessage({ chatId, todos: cleanedTodos });
    }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-// Check if the last assistant message has actual content
-// If it's empty or has no parts, it was never saved (error occurred)
-// In this case, we shouldn't delete anything from the database
-const hasContent = lastAssistant?.parts && lastAssistant.parts.length > 0;
 if (!temporaryChatsEnabled) {
-  // Delete last assistant message and update todos in a single transaction
-  await deleteLastAssistantMessage({ chatId, todos: cleanedTodos });
-  // Only delete if the last assistant message has content
-  // This prevents deleting previous valid messages when an error occurred
-  if (hasContent) {
-    await deleteLastAssistantMessage({ chatId, todos: cleanedTodos });
-  }
+  // Always delete the last assistant message if it exists
+  // The mutation safely handles the case where it doesn't exist in DB
+  if (lastAssistant) {
+    await deleteLastAssistantMessage({ chatId, todos: cleanedTodos });
+  }
```
🤖 Prompt for AI Agents
In app/hooks/useChatHandlers.ts around lines 301-311 (and related handleStop at
~266-276), the UI assumes empty assistant parts mean the message was never
saved, but handleStop may save an empty assistant message; modify handleStop to
avoid persisting empty messages by checking lastMessage.parts?.length > 0 before
calling saveAssistantMessage, and only call saveAssistantMessage when there is
at least one non-empty part; this prevents orphan empty records and keeps the
existing deletion guard intact.
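Option 2's guard might be sketched like this, with a stand-in message shape (the real handleStop and saveAssistantMessage live inside the hook and are not shown here):

```typescript
// Sketch: only persist an assistant message when it has at least one part,
// so errors that interrupt the stream never create empty DB records.
interface UIMessage {
  role: "user" | "assistant";
  parts?: { type: string; text?: string }[];
}

function shouldSaveAssistantMessage(lastMessage?: UIMessage): boolean {
  // No message, or not an assistant message: nothing to save
  if (!lastMessage || lastMessage.role !== "assistant") return false;
  // Empty parts means the stream produced no content before failing
  return (lastMessage.parts?.length ?? 0) > 0;
}
```

handleStop would then call saveAssistantMessage only when this predicate returns true, which keeps the deletion guard's assumption (empty in UI ⇒ never saved) valid.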
```diff
+// For temporary chats, filter out empty assistant message if present (from error)
+// Check if last message is an empty assistant message
+const lastMessage = messages[messages.length - 1];
+const isLastMessageEmptyAssistant =
+  lastMessage?.role === "assistant" &&
+  (!lastMessage.parts || lastMessage.parts.length === 0);
+
+const messagesToSend = isLastMessageEmptyAssistant
+  ? messages.slice(0, -1)
+  : messages;
+
 regenerate({
   body: {
     mode: chatMode,
-    messages,
+    messages: messagesToSend,
```
🛠️ Refactor suggestion | 🟠 Major
Good implementation for temporary chats, but consider consistency with persisted chats.
The filtering logic for empty assistant messages in temporary chats is well-implemented with clear variable naming and proper conditional handling.
However, there's an inconsistency: persisted chats (lines 347-357) don't handle empty assistant messages at all. If an empty assistant message exists in the database (due to the issue flagged above), the backend will fetch it during retry, while temporary chats would filter it out. This creates divergent behavior between the two modes.
For consistency, consider deleting empty assistant messages in persisted chats as well:
```diff
 if (!temporaryChatsEnabled) {
+  // Check if last message is an empty assistant message from an error
+  const lastMessage = messages[messages.length - 1];
+  const isLastMessageEmptyAssistant =
+    lastMessage?.role === "assistant" &&
+    (!lastMessage.parts || lastMessage.parts.length === 0);
+
+  // Delete empty assistant message if present
+  if (isLastMessageEmptyAssistant) {
+    await deleteLastAssistantMessage({ chatId, todos: cleanedTodos });
+  }
+
   // For persisted chats, backend fetches from database - explicitly send no messages
   regenerate({
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In app/hooks/useChatHandlers.ts around lines 347-373, persisted chats (lines
~347-357) don't handle empty assistant messages while temporary chats (lines
359-373) filter them out; make persisted behavior consistent by detecting if the
last message is an assistant with no parts (lastMessage?.role === "assistant" &&
(!lastMessage.parts || lastMessage.parts.length === 0)), then remove it from the
messages array before calling regenerate and also delete that empty message from
the database (or mark it removed) within the persisted-chat code path so retries
won't re-fetch the empty assistant message.
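The empty-message filtering both code paths would then share can be sketched as follows (the message shape is a stand-in for the hook's actual types):

```typescript
// Sketch: drop a trailing empty assistant message (left behind by an error)
// before resending, so retries never include it in the context.
interface UIMessage {
  role: "user" | "assistant";
  parts?: unknown[];
}

function messagesForRetry(messages: UIMessage[]): UIMessage[] {
  const last = messages[messages.length - 1];
  const isEmptyAssistant =
    last !== undefined &&
    last.role === "assistant" &&
    (!last.parts || last.parts.length === 0);
  // Slice off only the trailing empty assistant message, if any
  return isEmptyAssistant ? messages.slice(0, -1) : messages;
}
```

Extracting a shared helper like this would keep the temporary and persisted paths from diverging again.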
Summary by CodeRabbit
New Features
Improvements
Bug Fixes