Add message edit and delete functionality #467
Conversation
- Implement `updateMessage`, `deleteMessage`, and `deleteTrailingMessages` in the database layer.
- Refactor AI generation logic in `app/actions.tsx` into a reusable `processChatWorkflow` function.
- Add `resubmit` and `deleteMessageAction` server actions.
- Enhance `UserMessage` component with inline editing and deletion UI.
- Ensure proper authentication using `getCurrentUserIdOnServer` and state management for trailing message deletion.

Co-authored-by: ngoiyaeric <115367894+ngoiyaeric@users.noreply.github.com>
Walkthrough

Adds per-message ids and createdAt timestamps; introduces resubmit and deleteMessage actions; centralizes chat processing into processChatWorkflow with consolidated error handling; expands the DB layer with update/delete/trailing-delete helpers; extends the getModel signature and updates callers; wires id through UI components for edit/delete flows.

Changes
Sequence Diagram

sequenceDiagram
participant User
participant UI as "UserMessage (client)"
participant Panel as "ChatPanel / Followup (client)"
participant Server as "app/actions"
participant DB as "lib/actions/chat-db"
User->>UI: Click Edit -> open editor
UI->>Panel: Save (truncate UI state up to message)
Panel->>Server: resubmit(messageId, content, mapProvider)
Server->>DB: deleteTrailingMessages(chatId, message.createdAt)
DB-->>Server: OK
Server->>DB: updateMessage(messageId, { content, createdAt })
DB-->>Server: updated message
Server->>Server: processChatWorkflow(re-run AI flow)
Server-->>Panel: updated AI/UI state
Panel->>UI: render updated conversation
User->>UI: Click Delete
UI->>Panel: confirm & call deleteMessageAction(messageId)
Panel->>Server: deleteMessageAction(messageId)
Server->>DB: deleteMessage(messageId)
DB-->>Server: deleted
Server->>DB: deleteTrailingMessages(chatId, message.createdAt)
DB-->>Server: OK
Server-->>Panel: updated AI/UI state
Panel->>UI: render truncated conversation
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs
Suggested labels
Poem
🚥 Pre-merge checks: ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Actionable comments posted: 10
🤖 Fix all issues with AI agents
In `@app/actions.tsx`:
- Around line 472-488: The loop over toolOutputs is using map for side effects
which is incorrect; replace the map call with forEach on toolOutputs and keep
the body that calls aiState.update (referencing aiState.update, toolOutputs, and
groupeId) so no value is expected or returned—this removes the unused array
creation and makes the intent clear.
- Around line 588-594: The code currently mutates the message object by doing
truncatedMessages[index].content = content which also mutates the original
messages from aiState.get(); instead create an immutable update by copying the
messages array and the specific message object before changing content — e.g.,
build a newMessages array (from messages.slice or messages.map), replace the
entry at index with a shallow-copied message object that has content set to the
new value, then call aiState.update({...aiState.get(), messages: newMessages});
update references: truncatedMessages, messages, index, and
aiState.update/aiState.get().
- Around line 645-667: Wrap the body of deleteMessageAction in a try/catch, call
deleteTrailingMessages and deleteMessage inside the try, and only call
aiState.done to truncate messages after deleteMessage resolves successfully; in
the catch return a failure indicator (e.g., false) and optionally log or rethrow
the error, while on success return true. Specifically, update
deleteMessageAction to return a boolean (or result object), use
getMutableAIState/aiState.get as before but move aiState.done into the success
path after await deleteMessage(messageId), and handle errors from
deleteTrailingMessages/deleteMessage to avoid updating client state when server
deletion fails.
- Around line 410-421: The parameter list for the function (aiState, uiStream,
isGenerating, isCollapsed, mapProvider, useSpecificAPI) currently uses `any` and
should be replaced with concrete types from the ai/rsc module and local types:
import and use the appropriate types (e.g., AiState, UIStream, MapProvider,
UseSpecificAPI flag or boolean) and replace `any` with those types; set
isGenerating/isCollapsed/skipTaskManager to boolean,
groupeId/currentSystemPrompt to string, messages to CoreMessage[] (already
present), and maxMessages to number; update the function signature in
app/actions.tsx to reference these imported types so the compiler enforces
correct shapes and remove all `any` usages for these parameters.
In `@components/user-message.tsx`:
- Around line 82-92: handleDelete currently mutates UI optimistically before
calling deleteMessageAction, causing UI/server inconsistency on failure; change
it to capture the prior messages (e.g., const prev = currentMessages snapshot),
then perform deleteMessageAction inside a try/catch and only update setMessages
on success, or if you prefer optimistic update keep the UI change but wrap
deleteMessageAction in try/catch and on failure call setMessages(prev) to roll
back and log/show the error; reference handleDelete, deleteMessageAction and
setMessages when making this change.
- Around line 67-80: handleSave currently truncates messages then awaits
resubmit(id, editContent, mapProvider) without error handling, which can leave
the UI inconsistent if resubmit fails or returns null; modify handleSave to (1)
capture and keep a copy of the original messages before calling
setMessages(truncate) so you can rollback on error, (2) introduce a loading flag
(e.g., setIsSaving) and set it true before the async call and false in finally,
(3) wrap the resubmit call in try/catch, and if the response is null or an
exception occurs restore the original messages and reset editing state (use
setMessages with the saved original and setIsEditing(true) or show an error),
and (4) only append the response to messages with
setMessages([...currentMessages, response]) when response is valid; reference
handleSave, resubmit, setMessages, setIsEditing, editContent, id, and
mapProvider.
In `@lib/actions/chat-db.ts`:
- Around line 232-247: The updateMessage function currently allows updates by
messageId only; add a userId parameter (e.g., updateMessage(messageId: string,
userId: string, updates: Partial<NewMessage>)) and include it in the WHERE
clause (e.g., where eq(messages.id, messageId) AND eq(messages.userId, userId))
so only the owner can update; ensure the function still returns the updated
Message or null when no row is returned and keep the existing error handling in
the catch block.
- Around line 254-265: The deleteMessage function currently deletes without
authorization; change its signature to accept a userId parameter and enforce
ownership by including an additional WHERE clause that matches messages.userId
(or the appropriate owner column) to the passed userId before deleting (similar
to updateMessage). Ensure the deletion query only runs when both eq(messages.id,
messageId) and eq(messages.userId, userId) are applied, and return false (or
handle) if no rows are affected; also update any callers to pass the userId.
- Around line 272-283: deleteTrailingMessages currently deletes without checking
that the requester owns the chat and it swallows errors; update
deleteTrailingMessages to accept a userId (or callerId) parameter, first query
the chats table (e.g., using the chats model / getChatById logic) to confirm the
chat's ownerId matches userId and throw an AuthorizationError if not, then
perform the delete on messages only if authorized; also remove console.error
swallowing and propagate errors (rethrow or return a failure result) so callers
can handle deletion failures instead of silently returning void. Ensure you
reference the existing deleteTrailingMessages function and the messages/chats
models when making these changes.
In `@lib/actions/chat.ts`:
- Around line 168-192: Add server-side authorization to the wrapper functions by
calling getCurrentUserIdOnServer() at the start of updateMessage, deleteMessage,
and deleteTrailingMessages and verifying the current user is the owner of the
target resource before calling dbUpdateMessage, dbDeleteMessage, or
dbDeleteTrailingMessages; perform a minimal lookup (e.g., fetch message/chat
owner via a helper like dbGetMessageById or dbGetChatById) and if the owner !==
currentUserId return null/false/void (or throw an authorization error) instead
of performing the DB operation. Ensure the ownership check runs in updateMessage
(using messageId), deleteMessage (using messageId), and deleteTrailingMessages
(using chatId) so these wrappers enforce access control regardless of callers.
📜 Review details
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
⛔ Files ignored due to path filters (1)
server.log is excluded by !**/*.log
📒 Files selected for processing (4)
app/actions.tsx
components/user-message.tsx
lib/actions/chat-db.ts
lib/actions/chat.ts
🧰 Additional context used
🧬 Code graph analysis (2)
lib/actions/chat-db.ts (3)
- lib/db/schema.ts (3): messages (26-37), chat (69-78), chats (43-43)
- lib/actions/chat.ts (3): updateMessage (168-175), deleteMessage (177-184), deleteTrailingMessages (186-192)
- lib/db/index.ts (1): db (25-25)

lib/actions/chat.ts (1)
- lib/actions/chat-db.ts (3): updateMessage (232-247), deleteMessage (254-265), deleteTrailingMessages (272-283)
🪛 Biome (2.3.13)
app/actions.tsx
[error] 473-473: This callback passed to map() iterable method should always return a value.
Add a return with a value to this callback.
(lint/suspicious/useIterableCallbackReturn)
🔇 Additional comments (7)

lib/actions/chat.ts (1)
- 13-15: LGTM! The imports are correctly added for the new database functions.

components/user-message.tsx (2)
- 1-22: LGTM! The `'use client'` directive is correctly added, and the imports are appropriate for the new editing/deleting functionality with proper UI components.
- 108-161: LGTM! The editing and deletion UI is well-implemented:
  - Hover-to-reveal pattern for non-intrusive action buttons
  - Proper confirmation dialog for the destructive delete action with a clear warning about subsequent messages
  - Conditional rendering based on `id` presence ensures actions are only available when the message can be identified

app/actions.tsx (3)
- 19-25: LGTM! The imports for the new chat management functions are correctly added.
- 700-705: LGTM! The new actions `resubmit` and `deleteMessageAction` are correctly exported alongside existing actions.
- 816-822: LGTM! The `id` prop is correctly passed to `UserMessage`, enabling the edit/delete functionality in the UI.

lib/actions/chat-db.ts (1)
- 122-132: The upsert logic is correct. The schema confirms that `createdAt` is defined as `timestamp('created_at', ...)` at line 32 of lib/db/schema.ts, so `sql`EXCLUDED.created_at`` properly references the actual PostgreSQL column name.
✏️ Tip: You can disable this entire section by setting review_details to false in your review settings.
app/actions.tsx
Outdated
if (toolOutputs.length > 0) {
  toolOutputs.map(output => {
    aiState.update({
      ...aiState.get(),
      messages: [
        ...aiState.get().messages,
        {
          id: groupeId,
          role: 'tool',
          content: JSON.stringify(output.result),
          name: output.toolName,
          type: 'tool'
        }
      ]
    })
  }
  })
}
Use forEach instead of map for side effects.
The map() callback doesn't return a value, which is a code smell. Since this loop is used purely for side effects (updating aiState), use forEach() instead.
🔧 Proposed fix
if (toolOutputs.length > 0) {
- toolOutputs.map(output => {
+ toolOutputs.forEach(output => {
aiState.update({
...aiState.get(),
messages: [
...aiState.get().messages,
{
id: groupeId,
role: 'tool',
content: JSON.stringify(output.result),
name: output.toolName,
type: 'tool'
}
]
})
})
}
🧰 Tools
🪛 Biome (2.3.13)
[error] 473-473: This callback passed to map() iterable method should always return a value.
Add a return with a value to this callback.
(lint/suspicious/useIterableCallbackReturn)
async function deleteMessageAction(messageId: string) {
  'use server'

  const aiState = getMutableAIState<typeof AI>()
  const messages = aiState.get().messages
  const index = messages.findIndex(m => m.id === messageId)

  if (index !== -1) {
    const messageToDelete = messages[index]
    const chatId = aiState.get().chatId

    if (messageToDelete.createdAt) {
      await deleteTrailingMessages(chatId, new Date(messageToDelete.createdAt))
    }
    await deleteMessage(messageId)

    const truncatedMessages = messages.slice(0, index)
    aiState.done({
      ...aiState.get(),
      messages: truncatedMessages
    })
  }
}
Add return value and error handling.
The function doesn't return a success/failure indicator, making it impossible for the client to handle errors properly. If deleteMessage() fails, the AI state is still updated, leading to server/client inconsistency.
🛡️ Proposed fix
-async function deleteMessageAction(messageId: string) {
+async function deleteMessageAction(messageId: string): Promise<{ success: boolean }> {
'use server'
const aiState = getMutableAIState<typeof AI>()
const messages = aiState.get().messages
const index = messages.findIndex(m => m.id === messageId)
if (index !== -1) {
const messageToDelete = messages[index]
const chatId = aiState.get().chatId
if (messageToDelete.createdAt) {
await deleteTrailingMessages(chatId, new Date(messageToDelete.createdAt))
}
- await deleteMessage(messageId)
+ const deleted = await deleteMessage(messageId)
+ if (!deleted) {
+ return { success: false }
+ }
const truncatedMessages = messages.slice(0, index)
aiState.done({
...aiState.get(),
messages: truncatedMessages
})
+ return { success: true }
}
+ return { success: false }
}
const handleSave = async () => {
  if (!id || !editContent.trim()) return

  setIsEditing(false)

  // Truncate UI state
  setMessages(currentMessages => {
    const index = currentMessages.findIndex(m => m.id === id)
    return currentMessages.slice(0, index + 1)
  })

  const response = await resubmit(id, editContent, mapProvider)
  setMessages(currentMessages => [...currentMessages, response])
}
Missing error handling for failed resubmit.
If resubmit() fails or returns a null component, the UI will be in an inconsistent state: trailing messages are already removed from UI state, but no new response is appended. Consider:
- Storing the original messages before truncation for rollback
- Handling the case where `response` is null/invalid
- Adding a loading state during the async operation
🛡️ Proposed fix with error handling
const handleSave = async () => {
if (!id || !editContent.trim()) return
setIsEditing(false)
+ // Store original state for potential rollback
+ let originalMessages: typeof currentMessages;
+ setMessages(currentMessages => {
+ originalMessages = currentMessages;
+ const index = currentMessages.findIndex(m => m.id === id)
+ return currentMessages.slice(0, index + 1)
+ })
- // Truncate UI state
- setMessages(currentMessages => {
- const index = currentMessages.findIndex(m => m.id === id)
- return currentMessages.slice(0, index + 1)
- })
- const response = await resubmit(id, editContent, mapProvider)
- setMessages(currentMessages => [...currentMessages, response])
+ try {
+ const response = await resubmit(id, editContent, mapProvider)
+ if (response?.component) {
+ setMessages(currentMessages => [...currentMessages, response])
+ } else {
+ // Rollback on failure
+ setMessages(originalMessages!)
+ }
+ } catch (error) {
+ console.error('Failed to resubmit message:', error)
+ setMessages(originalMessages!)
+ }
}
const handleDelete = async () => {
  if (!id) return

  // Truncate UI state
  setMessages(currentMessages => {
    const index = currentMessages.findIndex(m => m.id === id)
    return currentMessages.slice(0, index)
  })

  await deleteMessageAction(id)
}
Missing error handling for failed delete.
Similar to handleSave, if deleteMessageAction() fails, the UI state will be inconsistent with the server state. The messages are removed from UI before the server confirms deletion.
🛡️ Proposed fix
const handleDelete = async () => {
if (!id) return
+ let originalMessages: typeof currentMessages;
+ setMessages(currentMessages => {
+ originalMessages = currentMessages;
+ const index = currentMessages.findIndex(m => m.id === id)
+ return currentMessages.slice(0, index)
+ })
- // Truncate UI state
- setMessages(currentMessages => {
- const index = currentMessages.findIndex(m => m.id === id)
- return currentMessages.slice(0, index)
- })
- await deleteMessageAction(id)
+ try {
+ await deleteMessageAction(id)
+ } catch (error) {
+ console.error('Failed to delete message:', error)
+ setMessages(originalMessages!)
+ }
}
export async function updateMessage(
  messageId: string,
  updates: Partial<NewMessage>
): Promise<Message | null> {
  try {
    const result = await db
      .update(messages)
      .set(updates)
      .where(eq(messages.id, messageId))
      .returning();
    return result[0] || null;
  } catch (error) {
    console.error('Error updating message:', error);
    return null;
  }
}
Missing authorization check before updating message.
The updateMessage function modifies a message by ID without verifying that the requesting user owns the message. This could allow unauthorized modifications if the messageId is guessable or leaked.
Consider adding a userId parameter and including it in the WHERE clause for defense in depth:
🛡️ Proposed fix to add authorization
export async function updateMessage(
messageId: string,
- updates: Partial<NewMessage>
+ updates: Partial<NewMessage>,
+ userId?: string
): Promise<Message | null> {
try {
+ const whereClause = userId
+ ? and(eq(messages.id, messageId), eq(messages.userId, userId))
+ : eq(messages.id, messageId);
const result = await db
.update(messages)
.set(updates)
- .where(eq(messages.id, messageId))
+ .where(whereClause)
.returning();
    return result[0] || null;
export async function deleteMessage(messageId: string): Promise<boolean> {
  try {
    const result = await db
      .delete(messages)
      .where(eq(messages.id, messageId))
      .returning({ id: messages.id });
    return result.length > 0;
  } catch (error) {
    console.error('Error deleting message:', error);
    return false;
  }
}
Missing authorization check before deleting message.
Similar to updateMessage, this function deletes a message without verifying ownership. Add a userId parameter for authorization.
🛡️ Proposed fix to add authorization
-export async function deleteMessage(messageId: string): Promise<boolean> {
+export async function deleteMessage(messageId: string, userId?: string): Promise<boolean> {
try {
+ const whereClause = userId
+ ? and(eq(messages.id, messageId), eq(messages.userId, userId))
+ : eq(messages.id, messageId);
const result = await db
.delete(messages)
- .where(eq(messages.id, messageId))
+ .where(whereClause)
.returning({ id: messages.id });
    return result.length > 0;
export async function deleteTrailingMessages(
  chatId: string,
  createdAt: Date
): Promise<void> {
  try {
    await db
      .delete(messages)
      .where(and(eq(messages.chatId, chatId), gt(messages.createdAt, createdAt)));
  } catch (error) {
    console.error('Error deleting trailing messages:', error);
  }
}
Missing authorization and silent error handling.
- No verification that the user owns the chat before deleting messages.
- Errors are logged but silently swallowed (the function returns `void`), making it difficult for callers to know if deletion succeeded.
🛡️ Proposed fix
export async function deleteTrailingMessages(
chatId: string,
- createdAt: Date
-): Promise<void> {
+ createdAt: Date,
+ userId?: string
+): Promise<boolean> {
try {
+ // Optionally verify chat ownership first if userId provided
+ if (userId) {
+ const chat = await db.select({ id: chats.id }).from(chats)
+ .where(and(eq(chats.id, chatId), eq(chats.userId, userId))).limit(1);
+ if (!chat.length) return false;
+ }
await db
.delete(messages)
.where(and(eq(messages.chatId, chatId), gt(messages.createdAt, createdAt)));
+ return true;
} catch (error) {
console.error('Error deleting trailing messages:', error);
+ return false;
}
}
export async function updateMessage(messageId: string, content: string): Promise<DrizzleMessage | null> {
  try {
    return await dbUpdateMessage(messageId, { content });
  } catch (error) {
    console.error(`Error updating message ${messageId}:`, error);
    return null;
  }
}

export async function deleteMessage(messageId: string): Promise<boolean> {
  try {
    return await dbDeleteMessage(messageId);
  } catch (error) {
    console.error(`Error deleting message ${messageId}:`, error);
    return false;
  }
}

export async function deleteTrailingMessages(chatId: string, createdAt: Date): Promise<void> {
  try {
    await dbDeleteTrailingMessages(chatId, createdAt);
  } catch (error) {
    console.error(`Error deleting trailing messages for chat ${chatId}:`, error);
  }
}
🧹 Nitpick | 🔵 Trivial
Consider adding authorization at the wrapper layer.
These wrappers are an ideal place to add user authorization since this is a 'use server' file with access to getCurrentUserIdOnServer(). Currently, authorization depends entirely on the calling code (app/actions.tsx) operating on user-scoped AI state.
The implementation is functionally correct, but adding authorization here would provide defense in depth.
♻️ Example authorization pattern
export async function updateMessage(messageId: string, content: string): Promise<DrizzleMessage | null> {
try {
+ const userId = await getCurrentUserIdOnServer();
+ if (!userId) {
+ console.error('updateMessage: User not authenticated');
+ return null;
+ }
- return await dbUpdateMessage(messageId, { content });
+ return await dbUpdateMessage(messageId, { content }, userId);
  } catch (error) {
Key issues to address: (1) submit()/resubmit() don’t await processChatWorkflow, which risks race conditions and swallowed errors; (2) the new message mutation endpoints lack server-side authorization/ownership checks; (3) saveChat() upsert updates createdAt, which can break your trailing-deletion logic based on timestamps. There are also smaller maintainability concerns around pervasive any, in-place mutation in resubmit, and UI truncation when findIndex returns -1.
Additional notes (2)

- Readability | app/actions.tsx:472-487
  `toolOutputs.map(...)` is being used for side effects (no returned array is used). This is a small correctness/clarity footgun: readers may assume the result is used, and it can hide accidental returns.
  Also, this code calls aiState.get() repeatedly inside the loop; if aiState.get() is expensive or if updates are concurrent, it can lead to subtle ordering issues.
- Maintainability | lib/actions/chat-db.ts:119-132
  `saveChat()` now does `onConflictDoUpdate` and sets `createdAt` from the excluded row. This effectively allows a re-save to rewrite a message's creation time, which can break chronological ordering and any logic relying on `createdAt` for trailing deletion (your new feature does exactly that).
  Given that your trailing deletion is based on `createdAt`, updating `createdAt` during upsert can cause incorrect deletion ranges (a sketch of one way to preserve it follows below).
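For illustration, a minimal sketch of an upsert that keeps `created_at` stable on re-save, assuming the Drizzle `onConflictDoUpdate` API and the `messages`/`NewMessage` schema referenced in this PR; the import paths and the column list are assumptions, not the project's actual code:

import { sql } from 'drizzle-orm'
import { db } from '@/lib/db'                               // assumed path
import { messages, type NewMessage } from '@/lib/db/schema' // assumed path

// Sketch: on conflict, refresh mutable columns only; created_at keeps its original value
export async function saveMessagesPreservingCreatedAt(rows: NewMessage[]) {
  await db
    .insert(messages)
    .values(rows)
    .onConflictDoUpdate({
      target: messages.id,
      set: {
        content: sql`EXCLUDED.content` // deliberately omit created_at from the update set
      }
    })
}

Leaving `created_at` out of the conflict update keeps the timestamp-based trailing deletion consistent across re-saves.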
Summary of changes

This PR adds message edit + per-message delete capabilities across the stack.

Server actions / AI workflow (app/actions.tsx)
- Refactors the generation flow into a shared `processChatWorkflow()` used by both `submit()` and the new `resubmit()`.
- Adds server actions:
  - `resubmit(messageId, content, mapProvider)` to update a message, delete subsequent messages, and regenerate an AI response.
  - `deleteMessageAction(messageId)` to delete a message and all subsequent messages.
- Replaces the hardcoded user id with `getCurrentUserIdOnServer()`, falling back to `'anonymous'`.
- Passes `id` into `<UserMessage />` so the client UI can trigger edits/deletes.

UI (components/user-message.tsx)
- Adds `'use client'` and introduces edit/delete controls (hover-revealed).
- Implements inline editing via `react-textarea-autosize` and deletion confirmation via `AlertDialog`.
- Truncates UI state before calling server actions to align the UI with regenerated history.

DB / actions (lib/actions/chat-db.ts, lib/actions/chat.ts)
- Updates `saveChat()` to use `onConflictDoUpdate` on `messages.id` to avoid PK conflicts.
- Adds granular message operations: `updateMessage()`, `deleteMessage()`, `deleteTrailingMessages(chatId, createdAt)`.
- Exposes these through `lib/actions/chat.ts`.

Cleanup
- Deletes `server.log` from the repo.
processChatWorkflow({
  aiState,
  uiStream,
  isGenerating,
  isCollapsed,
  messages,
  groupeId,
  currentSystemPrompt,
  mapProvider,
  useSpecificAPI,
  maxMessages,
  skipTaskManager: skip
})
submit() now calls processChatWorkflow(...) but does not await it. That means submit() returns immediately while the workflow continues mutating aiState/uiStream in the background. This is risky:
- errors inside `processChatWorkflow` won't be surfaced to the caller
- the returned `uiStream.value` may be observed before the first `uiStream.update(<Spinner />)` happens (racey initial render)
- execution ordering differs from `resubmit()`, where you likely also want a deterministic workflow start
If the intent is fire-and-forget, it should be explicit and guarded; otherwise, awaiting is the safer default.
Suggestion
Change the call sites to await processChatWorkflow(...) (both in submit() and resubmit()), and wrap the workflow in a try/finally to ensure isGenerating.done(false)/uiStream.done() always run even on failure.
await processChatWorkflow({ /* ... */ })

Optionally, return early after an inquire to keep submit() consistent.
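A rough sketch of what the awaited call with a try/finally guard could look like, using the parameter names quoted in this thread; whether `processChatWorkflow` already closes these streams internally is an assumption to verify before adopting it:

try {
  // Await so failures surface to submit()/resubmit() instead of being swallowed
  await processChatWorkflow({
    aiState,
    uiStream,
    isGenerating,
    isCollapsed,
    messages,
    groupeId,
    currentSystemPrompt,
    mapProvider,
    useSpecificAPI,
    maxMessages,
    skipTaskManager: skip
  })
} finally {
  // Make sure the streams are always released, even when the workflow throws
  isGenerating.done(false)
  uiStream.done()
}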
async function processChatWorkflow({
  aiState,
  uiStream,
  isGenerating,
  isCollapsed,
  messages,
  groupeId,
  currentSystemPrompt,
  mapProvider,
  useSpecificAPI,
  maxMessages,
  skipTaskManager = false
}: {
  aiState: any
  uiStream: any
  isGenerating: any
  isCollapsed: any
  messages: CoreMessage[]
  groupeId: string
  currentSystemPrompt: string
  mapProvider: any
  useSpecificAPI: boolean
  maxMessages: number
  skipTaskManager?: boolean
}) {
processChatWorkflow uses multiple any-typed parameters (aiState, uiStream, isGenerating, isCollapsed, mapProvider) and also uses let action: any. This is type-valid but unsafe: it makes it easy to call the workflow incorrectly and silently break streaming/done semantics.
Given this function is now the core shared path for chat generation, it should have a real parameter type (even a minimal interface) to prevent accidental misuse.
Suggestion
Introduce lightweight types/interfaces for the streaming primitives and the workflow params instead of any, e.g.
type StreamableUI = { update(node: React.ReactNode): void; append(node: React.ReactNode): void; done(): void; value: React.ReactNode }
type StreamableValue<T> = { done(v?: T): void; value: T }
type WorkflowParams = {
aiState: { get(): AIState; update(next: AIState): void; done(next: AIState): void }
uiStream: StreamableUI
isGenerating: StreamableValue<boolean>
isCollapsed: StreamableValue<boolean>
messages: CoreMessage[]
// ...
}

This keeps behavior the same but makes incorrect calls compile-time failures.
const truncatedMessages = messages.slice(0, index + 1)
truncatedMessages[index].content = content

aiState.update({
  ...aiState.get(),
  messages: truncatedMessages
})
resubmit() mutates state objects in-place (truncatedMessages[index].content = content). If aiState consumers assume immutability (common in React-ish state flows), in-place mutation can cause stale renders or unexpected shared references.
You already create a sliced array, but the message object within it is still shared with the original array entry.
Suggestion
Avoid in-place mutation by cloning the edited message object when building truncatedMessages.
const truncatedMessages = messages.slice(0, index + 1)
truncatedMessages[index] = { ...truncatedMessages[index], content }
async function deleteMessageAction(messageId: string) {
  'use server'

  const aiState = getMutableAIState<typeof AI>()
  const messages = aiState.get().messages
  const index = messages.findIndex(m => m.id === messageId)

  if (index !== -1) {
    const messageToDelete = messages[index]
    const chatId = aiState.get().chatId

    if (messageToDelete.createdAt) {
      await deleteTrailingMessages(chatId, new Date(messageToDelete.createdAt))
    }
    await deleteMessage(messageId)

    const truncatedMessages = messages.slice(0, index)
    aiState.done({
      ...aiState.get(),
      messages: truncatedMessages
    })
  }
deleteMessageAction updates state via aiState.done(...) rather than aiState.update(...). If done finalizes the stream/session, this may prevent subsequent actions in the same session (or cause unexpected UI behavior), especially since deleting a message is not necessarily “end of conversation”.
This is a semantic difference from resubmit() where you use aiState.update(...) and keep streaming alive.
Suggestion
Prefer aiState.update(...) here unless the intended behavior is to finalize the chat session. If you do want to finalize, add a comment explaining why.
aiState.update({ ...aiState.get(), messages: truncatedMessages })
// Truncate UI state
setMessages(currentMessages => {
  const index = currentMessages.findIndex(m => m.id === id)
  return currentMessages.slice(0, index + 1)
})

const response = await resubmit(id, editContent, mapProvider)
setMessages(currentMessages => [...currentMessages, response])
}
Client-side truncation logic can behave unexpectedly when id is not found: findIndex returns -1, so slice(0, index + 1) becomes slice(0, 0) and wipes the UI.
This can happen if id is undefined (already guarded) or if the UI state diverges (e.g., hydration mismatch, duplicate IDs, or user clicks edit on a message not present in current UI state).
Suggestion
Guard against index === -1 before truncating.
setMessages(current => {
const index = current.findIndex(m => m.id === id)
return index === -1 ? current : current.slice(0, index + 1)
})

Apply the same pattern in handleDelete.
const handleSave = async () => {
  if (!id || !editContent.trim()) return

  setIsEditing(false)

  // Truncate UI state
  setMessages(currentMessages => {
    const index = currentMessages.findIndex(m => m.id === id)
    return currentMessages.slice(0, index + 1)
  })

  const response = await resubmit(id, editContent, mapProvider)
  setMessages(currentMessages => [...currentMessages, response])
}
handleSave does not handle server-action failure. Right now you optimistically truncate UI state, then append whatever resubmit returns. If resubmit throws or returns { component: null }, the UI can end up missing the original messages with no recovery path.
Given resubmit can early-return component: null when message id is not found server-side, the client should handle that explicitly.
Suggestion
Wrap server calls in try/catch and restore UI state (or show an error) if resubmit fails / returns null component.
const prev = /* capture currentMessages */
try {
const response = await resubmit(id, editContent, mapProvider)
if (!response?.component) throw new Error('Resubmit failed')
setMessages(curr => [...curr, response])
} catch {
setMessages(prev)
setIsEditing(true)
}
export async function updateMessage(
  messageId: string,
  updates: Partial<NewMessage>
): Promise<Message | null> {
  try {
    const result = await db
      .update(messages)
      .set(updates)
      .where(eq(messages.id, messageId))
      .returning();
    return result[0] || null;
  } catch (error) {
    console.error('Error updating message:', error);
    return null;
  }
}

/**
 * Deletes a specific message.
 * @param messageId - The ID of the message to delete.
 * @returns True if deletion was successful, false otherwise.
 */
export async function deleteMessage(messageId: string): Promise<boolean> {
  try {
    const result = await db
      .delete(messages)
      .where(eq(messages.id, messageId))
      .returning({ id: messages.id });
    return result.length > 0;
  } catch (error) {
    console.error('Error deleting message:', error);
    return false;
  }
}

/**
 * Deletes all messages in a chat created after a certain message.
 * @param chatId - The ID of the chat.
 * @param createdAt - The timestamp after which messages should be deleted.
 */
export async function deleteTrailingMessages(
  chatId: string,
  createdAt: Date
): Promise<void> {
  try {
    await db
      .delete(messages)
      .where(and(eq(messages.chatId, chatId), gt(messages.createdAt, createdAt)));
  } catch (error) {
    console.error('Error deleting trailing messages:', error);
  }
}
updateMessage / deleteMessage / deleteTrailingMessages operate solely by message/chat identifiers with no user scoping/authorization checks. In this PR, client UI can call server actions with a message id; without server-side ownership enforcement, any authenticated (or even anonymous, depending on your auth model) user could potentially target another user's message if they can guess/obtain IDs.
You already import getCurrentUserIdOnServer in this file but do not use it for these new functions.
Suggestion
Enforce ownership in the DB layer (or at least in the server actions) by including userId in the WHERE clause.
Example (DB layer):
const userId = await getCurrentUserIdOnServer()
if (!userId) throw new Error('Not authenticated')
await db
.update(messages)
.set(updates)
  .where(and(eq(messages.id, messageId), eq(messages.userId, userId)))

Similarly for deletes and trailing deletes: restrict by messages.userId (and verify the chat belongs to the user).
@coderabbitai I don't see the UI functions for the message edit on chat

@coderabbitai can you get me all the prompts for AI agents in one response
How to Access Edit/Delete Functions

The Edit and Delete buttons appear when you hover over a user message. They use a hover-to-reveal effect, so they stay invisible until the message is hovered.

What to Look For:

Editing Flow:

Deleting Flow:

Verification

The implementation in components/user-message.tsx is present in this PR.

Try hovering over one of your messages in the chat to see if the icons appear. If they still don't show up, there might be a CSS or styling issue we need to investigate further.
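For reference, a stripped-down sketch of the hover-to-reveal pattern described above, using plain Tailwind utility classes; the handlers and class names are illustrative rather than the component's exact markup (adding the focus-within variants also addresses the keyboard-access note raised later in this review):

// Illustrative only: actions stay invisible until the message is hovered or focused
export function UserMessageActions(props: {
  onEdit: () => void
  onCopy: () => void
  onDelete: () => void
}) {
  return (
    <div className="group relative">
      {/* message bubble renders here */}
      <div className="flex gap-1 opacity-0 transition-opacity group-hover:opacity-100 group-focus-within:opacity-100">
        <button type="button" onClick={props.onEdit} aria-label="Edit message">Edit</button>
        <button type="button" onClick={props.onCopy} aria-label="Copy message">Copy</button>
        <button type="button" onClick={props.onDelete} aria-label="Delete message">Delete</button>
      </div>
    </div>
  )
}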
@coderabbitai the resolution search does not work anymore
1. Task Manager Agent
   File:
2. Inquire Agent
   File:
   Note: The system prompt is currently a placeholder.
3. Researcher Agent (Main Research Agent)
   File:
4. Writer Agent
   File:
5. Resolution Search Agent (Satellite Image Analysis)
   File:
6. Query Suggestor Agent (Related Queries)
   File:
7. Suggest Actions Agent (Map-Based Suggestions)
   File:

Additional Notes:
The UI functions for message editing have been integrated into the UserMessage component.
Here are the system prompts for the AI agents as found in the codebase:

1. Researcher Agent (
- Implement `updateMessage`, `deleteMessage`, and `deleteTrailingMessages` in the database layer.
- Refactor AI generation logic in `app/actions.tsx` into a reusable `processChatWorkflow` function.
- Add `resubmit` and `deleteMessageAction` server actions.
- Enhance `UserMessage` component with inline editing and deletion UI.
- Ensure proper authentication using `getCurrentUserIdOnServer` and state management for trailing message deletion.

Co-authored-by: ngoiyaeric <115367894+ngoiyaeric@users.noreply.github.com>
The Problem

In lib/utils/index.ts, getModel() accepts a requireVision flag but never uses it:

export async function getModel(requireVision: boolean = false) {
  // ... the function never checks or uses requireVision!
  // It just returns whatever model is configured
}

Meanwhile, the resolution search caller passes the flag:

return streamObject({
  model: await getModel(hasImage), // Passes true when image exists
  // ...
})

Since the flag is ignored, resolution search can end up with a model that cannot handle images.
The Fix

Update the getModel() function so it honors requireVision:

export async function getModel(requireVision: boolean = false) {
const selectedModel = await getSelectedModel();
const xaiApiKey = process.env.XAI_API_KEY;
const gemini3ProApiKey = process.env.GEMINI_3_PRO_API_KEY;
const awsAccessKeyId = process.env.AWS_ACCESS_KEY_ID;
const awsSecretAccessKey = process.env.AWS_SECRET_ACCESS_KEY;
const awsRegion = process.env.AWS_REGION;
const bedrockModelId = process.env.BEDROCK_MODEL_ID || 'anthropic.claude-3-5-sonnet-20241022-v2:0';
const openaiApiKey = process.env.OPENAI_API_KEY;
if (selectedModel) {
switch (selectedModel) {
case 'Grok 4.2':
if (xaiApiKey) {
const xai = createXai({
apiKey: xaiApiKey,
baseURL: 'https://api.x.ai/v1',
});
try {
// Use vision-capable model when required
const modelName = requireVision ? 'grok-vision-beta' : 'grok-4-fast-non-reasoning';
return xai(modelName);
} catch (error) {
console.error('Selected model "Grok 4.2" is configured but failed to initialize.', error);
throw new Error('Failed to initialize selected model.');
}
} else {
console.error('User selected "Grok 4.2" but XAI_API_KEY is not set.');
throw new Error('Selected model is not configured.');
}
case 'Gemini 3':
if (gemini3ProApiKey) {
const google = createGoogleGenerativeAI({
apiKey: gemini3ProApiKey,
});
try {
// Gemini 3 Pro supports vision by default
return google('gemini-3-pro-preview');
} catch (error) {
console.error('Selected model "Gemini 3" is configured but failed to initialize.', error);
throw new Error('Failed to initialize selected model.');
}
} else {
console.error('User selected "Gemini 3" but GEMINI_3_PRO_API_KEY is not set.');
throw new Error('Selected model is not configured.');
}
case 'GPT-5.1':
if (openaiApiKey) {
const openai = createOpenAI({
apiKey: openaiApiKey,
});
// GPT-4o supports vision by default
return openai('gpt-4o');
} else {
console.error('User selected "GPT-5.1" but OPENAI_API_KEY is not set.');
throw new Error('Selected model is not configured.');
}
}
}
// Default behavior with vision support
if (requireVision) {
// When vision is required, prefer models that explicitly support it
// Try Gemini first for vision (excellent multimodal support)
if (gemini3ProApiKey) {
const google = createGoogleGenerativeAI({
apiKey: gemini3ProApiKey,
});
try {
return google('gemini-3-pro-preview');
} catch (error) {
console.warn('Gemini 3 Pro API unavailable for vision, falling back:', error);
}
}
// Try GPT-4o (strong vision capabilities)
if (openaiApiKey) {
const openai = createOpenAI({
apiKey: openaiApiKey,
});
return openai('gpt-4o');
}
// Try Claude via Bedrock (good vision support)
if (awsAccessKeyId && awsSecretAccessKey) {
const bedrock = createAmazonBedrock({
bedrockOptions: {
region: awsRegion,
credentials: {
accessKeyId: awsAccessKeyId,
secretAccessKey: awsSecretAccessKey,
},
},
});
const model = bedrock(bedrockModelId, {
additionalModelRequestFields: { top_k: 350 },
});
return model;
}
throw new Error('No vision-capable model is configured. Please set GEMINI_3_PRO_API_KEY, OPENAI_API_KEY, or AWS credentials.');
}
// Default behavior for text-only: Grok -> Gemini -> Bedrock -> OpenAI
if (xaiApiKey) {
const xai = createXai({
apiKey: xaiApiKey,
baseURL: 'https://api.x.ai/v1',
});
try {
return xai('grok-4-fast-non-reasoning');
} catch (error) {
console.warn('xAI API unavailable, falling back to next provider:');
}
}
if (gemini3ProApiKey) {
const google = createGoogleGenerativeAI({
apiKey: gemini3ProApiKey,
});
try {
return google('gemini-3-pro-preview');
} catch (error) {
console.warn('Gemini 3 Pro API unavailable, falling back to next provider:', error);
}
}
if (awsAccessKeyId && awsSecretAccessKey) {
const bedrock = createAmazonBedrock({
bedrockOptions: {
region: awsRegion,
credentials: {
accessKeyId: awsAccessKeyId,
secretAccessKey: awsSecretAccessKey,
},
},
});
const model = bedrock(bedrockModelId, {
additionalModelRequestFields: { top_k: 350 },
});
return model;
}
const openai = createOpenAI({
apiKey: openaiApiKey,
});
return openai('gpt-4o');
}

Key Changes

Testing

After applying this fix:

Would you like me to create an issue to track this fix, or would you prefer to apply it directly in this PR?
- Populate `createdAt` for all new messages in `AIState` to fix trailing deletion in active sessions.
- Preserve multimodal content (images) during message editing in `resubmit`.
- Update `UserMessage` layout to match the provided screenshot:
  - Move icons (Edit, Copy, Delete) below the message bubble.
  - Use `Pencil` icon and add `Copy` icon with clipboard functionality.
  - Add bubble-like styling to user messages.
- Fix Resolution Search by ensuring vision-capable models are used in `getModel`.

Co-authored-by: ngoiyaeric <115367894+ngoiyaeric@users.noreply.github.com>
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
lib/utils/index.ts (1)
101-115: ⚠️ Potential issue | 🟡 Minor

Bedrock fallback ignores the `requireVision` parameter.

When AWS Bedrock is used as a fallback, the `requireVision` parameter is not checked. If the caller requires vision capabilities and the configured `bedrockModelId` doesn't support vision, this could cause runtime failures for resolution search or other vision-dependent features.

Consider either:

- Adding a vision-capable model check/fallback for Bedrock
- Skipping Bedrock in the fallback chain when `requireVision` is true and continuing to OpenAI

🛡️ Proposed fix to skip Bedrock when vision is required

```diff
 if (awsAccessKeyId && awsSecretAccessKey) {
+  // Skip Bedrock fallback if vision is required, as vision support depends on model configuration
+  if (requireVision) {
+    console.warn('Vision required but Bedrock vision capability is not guaranteed, falling back to OpenAI');
+  } else {
   const bedrock = createAmazonBedrock({
     bedrockOptions: {
       region: awsRegion,
       credentials: {
         accessKeyId: awsAccessKeyId,
         secretAccessKey: awsSecretAccessKey,
       },
     },
   });
   const model = bedrock(bedrockModelId, {
     additionalModelRequestFields: { top_k: 350 },
   });
   return model;
+  }
 }
```
🤖 Fix all issues with AI agents
In `@app/actions.tsx`:
- Around line 605-617: The code is mutating editedMessageInState (and nested
textPart.text) which also mutates the original messages array; instead, perform
an immutable update by creating a shallow copy of the message object at
truncatedMessages[index] (e.g., newMessage = { ...editedMessageInState }) and if
content is an array, create a new content array with the modified text part (map
or slice+replace) or if content is a string, set newMessage.content = content;
then replace truncatedMessages[index] with newMessage before continuing so the
original messages/aiState are not mutated (referencing editedMessageInState,
truncatedMessages, and messages).
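A minimal sketch of the immutable update described above, assuming the multimodal content shape is an array of parts with `{ type: 'text', text }` entries (the exact shape in the repo may differ):

```ts
// Sketch: replace the edited message without mutating aiState's original array.
const editedMessageInState = truncatedMessages[index]

const newMessage = {
  ...editedMessageInState,
  content: Array.isArray(editedMessageInState.content)
    ? editedMessageInState.content.map(part =>
        part.type === 'text' ? { ...part, text: content } : part // keep image parts untouched
      )
    : content
}

// Build a new array instead of assigning in place, so the original messages are preserved.
const nextMessages = [
  ...truncatedMessages.slice(0, index),
  newMessage,
  ...truncatedMessages.slice(index + 1)
]
```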
In `@components/user-message.tsx`:
- Around line 147-155: The action buttons are hidden via hover-only styles,
which blocks keyboard users; update the opacity classes so focus also reveals
them: where the buttons render (the block guarded by isEditing and id), add
focus-related utility classes alongside group-hover:opacity-100 (e.g., include
group-focus-within:opacity-100 and focus-within:opacity-100 on the container) so
tab/focus on the buttons (handleEdit, copyToClipboard) makes the controls
visible; ensure the container still has the "group" role and that the Button
elements remain native buttons (no tabindex tweaks needed).
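A sketch of the focus-friendly variant described above; the Tailwind utilities are the point, while the buttons and icons mirror what this PR already renders:

```tsx
{/* Sketch: action row stays visible for keyboard users via focus-within, not just hover */}
<div className="flex gap-1 opacity-0 transition-opacity group-hover:opacity-100 group-focus-within:opacity-100 focus-within:opacity-100">
  <Button variant="ghost" size="icon" onClick={handleEdit} aria-label="Edit message">
    <Pencil className="h-4 w-4" />
  </Button>
  <Button variant="ghost" size="icon" onClick={copyToClipboard} aria-label="Copy message">
    <Copy className="h-4 w-4" />
  </Button>
</div>
```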
In `@lib/utils/index.ts`:
- Line 55: Replace the older Gemini model id used for vision with the correct
Gemini 3 vision variant: wherever the code returns google(requireVision ?
'gemini-1.5-pro' : 'gemini-3-pro-preview') (i.e., the return that calls
google(...) with a ternary on requireVision), change the true branch to
'gemini-3-pro-image-preview' so the call becomes google(requireVision ?
'gemini-3-pro-image-preview' : 'gemini-3-pro-preview'); update both occurrences
of this pattern in the file.
📜 Review details
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (3)
- app/actions.tsx
- components/user-message.tsx
- lib/utils/index.ts
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2026-01-13T13:26:30.086Z
Learnt from: ngoiyaeric
Repo: QueueLab/QCX PR: 0
File: :0-0
Timestamp: 2026-01-13T13:26:30.086Z
Learning: Workers that rely on ai.generateObject (geojsonParser, map-command-generator, feedback-analyzer) must not use the X.AI 'grok-4-fast-non-reasoning' model because X.AI’s chat/completions rejects JSON Schema structured-output arguments; prefer OpenAI gpt-4o for structured outputs.
Applied to files:
lib/utils/index.ts
📚 Learning: 2026-01-17T06:14:51.070Z
Learnt from: ngoiyaeric
Repo: QueueLab/QCX PR: 0
File: :0-0
Timestamp: 2026-01-17T06:14:51.070Z
Learning: For structured output tasks using ai.generateObject (resolution-search, geojsonParser, map-command-generator, feedback-analyzer), prefer OpenAI gpt-4o. While xAI's grok-4-1-fast-reasoning technically supports structured outputs and vision, OpenAI has proven more reliable integration with the AI SDK's generateObject function and Zod schemas in production.
Applied to files:
lib/utils/index.ts
🧬 Code graph analysis (2)
components/user-message.tsx (7)
- app/actions.tsx (1): AI (735-816)
- lib/store/settings.ts (1): useSettingsStore (10-13)
- components/ui/textarea.tsx (1): Textarea (24-24)
- components/ui/button.tsx (1): Button (56-56)
- components/chat-share.tsx (1): ChatShare (25-114)
- components/ui/alert-dialog.tsx (8): AlertDialog (130-130), AlertDialogTrigger (133-133), AlertDialogContent (134-134), AlertDialogHeader (135-135), AlertDialogTitle (137-137), AlertDialogDescription (138-138), AlertDialogFooter (136-136), AlertDialogAction (139-139)
- components/sidebar/chat-history-client.tsx (1): ChatHistoryClientProps (24-26)

app/actions.tsx (6)
- lib/auth/get-current-user.ts (1): getCurrentUserIdOnServer (112-127)
- lib/actions/chat.ts (4): getSystemPrompt (268-283), deleteTrailingMessages (186-192), updateMessage (168-175), deleteMessage (177-184)
- lib/agents/task-manager.tsx (1): taskManager (6-56)
- lib/db/schema.ts (1): messages (26-37)
- lib/types/index.ts (1): AIMessage (59-77)
- lib/agents/writer.tsx (1): writer (7-51)
🪛 Biome (2.3.13)
app/actions.tsx
[error] 489-489: This callback passed to map() iterable method should always return a value.
Add a return with a value to this callback.
(lint/suspicious/useIterableCallbackReturn)
🔇 Additional comments (9)
app/actions.tsx (4)
19-25: LGTM! The new imports for `updateMessage`, `deleteMessage`, and `deleteTrailingMessages` are correctly added to support the new message editing and deletion functionality.

383-386: LGTM! Good improvement using dynamic import of `getCurrentUserIdOnServer` to properly authenticate users instead of hardcoding 'anonymous'. The fallback to 'anonymous' provides graceful degradation.

736-741: LGTM! The new `resubmit` and `deleteMessageAction` actions are correctly exported in the AI actions object, making them available to client components via `useActions`.

851-857: LGTM! Correctly passing the message `id` to the `UserMessage` component enables per-message edit/delete functionality.

components/user-message.tsx (4)

1-23: LGTM! The `'use client'` directive is correctly added, and the imports for `useActions`, `useUIState`, UI components, and `AlertDialog` are appropriate for the new edit/delete functionality.

29-34: LGTM! Adding the optional `id` prop to `UserMessageProps` is the correct approach for enabling per-message operations while maintaining backward compatibility.

117-135: LGTM! The inline editing UI with `Textarea`, Save/Cancel buttons, and proper state management provides a good user experience. The `autoFocus` attribute correctly focuses the textarea when editing begins.

157-180: LGTM! The `AlertDialog` implementation for delete confirmation is well done. The warning about deleting subsequent messages is appropriate and the destructive styling clearly indicates the action's severity.

lib/utils/index.ts (1)

40-40: The model identifiers `grok-vision-beta` and `grok-beta` are valid and currently supported by xAI's API as of January 2026. They correctly map to vision and text capabilities respectively, so no changes are needed.
lib/utils/index.ts (Outdated)

```diff
   });
   try {
-    return google('gemini-3-pro-preview');
+    return google(requireVision ? 'gemini-1.5-pro' : 'gemini-3-pro-preview');
```
🧩 Analysis chain
🌐 Web query:
Google Gemini 3 Pro API model identifier 2025
💡 Result:
The Gemini 3 Pro API model identifier (model code / model ID) released in November 2025 is:
gemini-3-pro-preview[1][2]
Related Gemini 3 Pro variant:
gemini-3-pro-image-preview[1]
Sources: [1] Google Gemini API “Gemini models” docs, [2] Google Cloud Vertex AI “Gemini 3 Pro” model page.
🏁 Script executed:
find lib -name "index.ts" -type f | head -5
Repository: QueueLab/QCX
Length of output: 109
🏁 Script executed:
cat -n lib/utils/index.ts | sed -n '45,105p'
Repository: QueueLab/QCX
Length of output: 2529
Use gemini-3-pro-image-preview for Gemini 3 vision requests instead of falling back to gemini-1.5-pro.
The code uses gemini-1.5-pro when vision is required, but gemini-3-pro-image-preview is the proper vision model for the Gemini 3 line. When a user selects "Gemini 3", they should receive a Gemini 3 model variant in both vision and non-vision cases, not fall back to an older generation.
Change at lines 55 and 95:
return google(requireVision ? 'gemini-3-pro-image-preview' : 'gemini-3-pro-preview');
🤖 Prompt for AI Agents
In `@lib/utils/index.ts` at line 55, Replace the older Gemini model id used for
vision with the correct Gemini 3 vision variant: wherever the code returns
google(requireVision ? 'gemini-1.5-pro' : 'gemini-3-pro-preview') (i.e., the
return that calls google(...) with a ternary on requireVision), change the true
branch to 'gemini-3-pro-image-preview' so the call becomes google(requireVision
? 'gemini-3-pro-image-preview' : 'gemini-3-pro-preview'); update both
occurrences of this pattern in the file.
…eration

- Added Edit (Pencil), Copy, and Delete (Trash) icons to UserMessage component.
- Implemented inline message editing with optimistic UI state truncation.
- Added 'resubmit' server action to handle message edits and trigger AI re-generation.
- Added 'deleteMessageAction' to remove messages and trailing history.
- Refactored AI generation logic into 'processChatWorkflow' for shared use.
- Implemented trailing message deletion in the database layer using timestamps.
- Fixed 'resolution search' vision model selection logic.
- Ensured message ID consistency between client and server.
- Updated user message styling to white bubble with border.
- Added safety break in AI streaming loop to prevent empty response hangs.
- Moving icons below message bubble for improved visibility.
- Passing ID to UserMessage in ChatPanel to ensure icons appear immediately.

Co-authored-by: ngoiyaeric <115367894+ngoiyaeric@users.noreply.github.com>
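The trailing-deletion item above is timestamp-based; a minimal sketch of such a helper, assuming Drizzle ORM and `chatId`/`createdAt` columns on the messages table (the real implementation lives in `lib/actions/chat-db.ts` and may differ):

```ts
import { and, eq, gt } from 'drizzle-orm'
import { db } from '@/lib/db'              // assumption: shared Drizzle client
import { messages } from '@/lib/db/schema' // messages table referenced elsewhere in this review

export async function deleteTrailingMessagesSketch(chatId: string, after: Date) {
  // Delete every message in the chat created strictly after the edited/deleted message.
  await db
    .delete(messages)
    .where(and(eq(messages.chatId, chatId), gt(messages.createdAt, after)))
}
```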
@coderabbitai the system does not return tokens anywhere
Tip: For best results, initiate chat on the files or code changes.

🧩 Analysis chain
🏁 Script executed: Length of output: 2210
Current State

❌ No token tracking implemented:

Where Token Data Is Available

The AI SDK responses typically include usage information like:

```ts
{
  usage: {
    promptTokens: 150,
    completionTokens: 85,
    totalTokens: 235
  }
}
```

This data is available in:

What Would Need to Be Implemented

To add token tracking, you would need:

Next Steps

Would you like me to:

Let me know your preference!

🧠 Learnings used
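If token tracking were added, one low-touch option is to read the usage object the SDK reports when a generation finishes; a sketch assuming `model`, `messages`, and `chatId` are in scope, and `recordTokenUsage` is a hypothetical persistence helper:

```ts
import { streamText } from 'ai'

const result = await streamText({
  model,
  messages,
  onFinish: async ({ usage }) => {
    // usage carries promptTokens / completionTokens / totalTokens as shown above
    await recordTokenUsage({
      chatId,
      promptTokens: usage.promptTokens,
      completionTokens: usage.completionTokens,
      totalTokens: usage.totalTokens
    })
  }
})
```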
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
app/actions.tsx (1)
604-709:⚠️ Potential issue | 🟠 MajorMissing error handling for database operations in
resubmit.The function calls
deleteTrailingMessagesandupdateMessagewithout try/catch. If either fails,processChatWorkflowstill executes with potentially inconsistent state, and the client has no indication of the failure.🛡️ Proposed fix
async function resubmit( messageId: string, content: string, mapProvider: 'mapbox' | 'google' = 'mapbox' -) { +): Promise<{ + id: string + isGenerating: StreamableValue<boolean> + component: React.ReactNode + isCollapsed: StreamableValue<boolean> +} | { error: string }> { 'use server' const aiState = getMutableAIState<typeof AI>() const uiStream = createStreamableUI() const isGenerating = createStreamableValue(true) const isCollapsed = createStreamableValue(false) const messages = aiState.get().messages const index = messages.findIndex(m => m.id === messageId) if (index === -1) { isGenerating.done(false) uiStream.done() return { - id: nanoid(), - isGenerating: isGenerating.value, - component: null, - isCollapsed: isCollapsed.value + error: 'Message not found' } } - const editedMessage = messages[index] - const chatId = aiState.get().chatId - - if (editedMessage.createdAt) { - await deleteTrailingMessages(chatId, new Date(editedMessage.createdAt)) + try { + const editedMessage = messages[index] + const chatId = aiState.get().chatId + // ... rest of the logic wrapped in try + } catch (error) { + console.error('Error in resubmit:', error) + isGenerating.done(false) + uiStream.done() + return { error: 'Failed to update message' } }
🤖 Fix all issues with AI agents
In `@app/actions.tsx`:
- Around line 849-851: The function getUIStateFromAIState is declared async but
contains no await; either remove the async keyword from getUIStateFromAIState
and make its return type Promise<UIState> (or plain UIState if the signature
allows) and update any callers that currently await it (e.g., the call site that
does await getUIStateFromAIState) to drop the unnecessary await, or if you
intend to add async work later, keep async and add a brief comment above
getUIStateFromAIState clarifying it’s intentionally async for future async
operations; ensure the function signature and all callers (places invoking
getUIStateFromAIState) are updated consistently.
📜 Review details
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (1)
app/actions.tsx
- Added Edit, Copy, and Delete functionality to `UserMessage`.
- Implemented `resubmit` and `deleteMessageAction` server actions.
- Centralized generation logic in `processChatWorkflow` for robustness.
- Fixed Next.js 15 build error: `onGetUIState` and `onSetAIState` are now explicitly exported named async functions to satisfy Server Action requirements.
- Made `getUIStateFromAIState` local to avoid Server Action sync function restriction.
- Resolved React Hook linting warnings in `chat-panel.tsx` and `mapbox-map.tsx`.

Co-authored-by: ngoiyaeric <115367894+ngoiyaeric@users.noreply.github.com>
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
components/map/mapbox-map.tsx (1)
355-466: ⚠️ Potential issue | 🟠 Major

Prevent map teardown on camera updates.

The init effect includes `mapData.cameraState`, `position?.latitude`, and `position?.longitude` in its dependency array. Since `captureMapCenter` is registered on the 'moveend' event and calls `setMapData` to update `mapData.cameraState`, every map movement triggers a state update that causes this effect to re-run. This executes the cleanup function (which removes the map) and recreates it on every moveend, causing flicker and lost state.

Remove these volatile dependencies: the init effect should only run once on mount. Position updates are already handled by a separate effect (line 524-528) and don't require the init effect to re-run.
🔧 Suggested fix
```diff
 }, [
   setMap,
   setIsMapLoaded,
   captureMapCenter,
   handleUserInteraction,
-  stopRotation,
-  mapData.cameraState,
-  position?.latitude,
-  position?.longitude
+  stopRotation
 ])
```
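For contrast, the position-driven camera move normally lives in its own small effect, roughly like the sketch below (the actual effect at lines 524-528 may differ):

```tsx
// Sketch: react to position changes without recreating the map instance.
useEffect(() => {
  if (!map || position?.latitude == null || position?.longitude == null) return
  map.flyTo({ center: [position.longitude, position.latitude] })
}, [map, position?.latitude, position?.longitude])
```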
📜 Review details
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (3)
- app/actions.tsx
- components/chat-panel.tsx
- components/map/mapbox-map.tsx
🧰 Additional context used
🧬 Code graph analysis (2)
components/chat-panel.tsx (2)
- lib/schema/related.tsx (1): PartialRelated (13-13)
- components/user-message.tsx (1): UserMessage (36-199)

components/map/mapbox-map.tsx (4)
- components/map/map-3d.tsx (2): prevData (35-45), map3DElement (60-77)
- components/map/map-query-handler.tsx (2): prevData (39-48), prevData (52-56)
- lib/hooks/use-map-3d-camera-events.ts (2): useMap3DCameraEvents (6-23), map3DElement (12-22)
- components/map/google-map.tsx (1): mapData (54-70)
🔇 Additional comments (5)
app/actions.tsx (2)
758-797: Passing `id` into `UserMessage` aligns UI with per-message actions. Nice wiring for edit/delete flows.

923-991: `onGetUIState`/`onSetAIState` persistence hooks look consistent. The guard for unauthenticated users and derived title logic are sensible.

components/chat-panel.tsx (3)

39-45: Memoized `setSuggestions` keeps parent callbacks in sync. Looks good.

110-121: Single message id used for optimistic UI + FormData. Great for consistent edit/delete behaviors across UI and server.

138-162: Debounce dependency update is correct. Ensures suggestion updates stay consistent when the callback changes.
Update LICENSE
- Added Edit, Copy, and Delete functionality to `UserMessage`.
- Implemented `resubmit` and `deleteMessageAction` server actions.
- Centralized generation logic in `processChatWorkflow` for robustness.
- Fixed Next.js 15 build error: `onGetUIState` and `onSetAIState` are now inline functions with `'use server'` to satisfy Server Action requirements.
- Made `getUIStateFromAIState` local to avoid Server Action sync function restriction.
- Resolved React Hook linting warnings in `chat-panel.tsx` and `mapbox-map.tsx`.

Co-authored-by: ngoiyaeric <115367894+ngoiyaeric@users.noreply.github.com>
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
app/actions.tsx (2)
227-233: ⚠️ Potential issue | 🟠 Major

Guard `userInput` before calling `.toLowerCase()`.

Line 232 can throw when the submission is file-only (no text input). Add a null-safe guard or move this branch after the empty-input early return.

🔧 Suggested fix

```diff
-  if (userInput.toLowerCase().trim() === 'what is a planet computer?' || userInput.toLowerCase().trim() === 'what is qcx-terra?') {
+  const normalizedInput = userInput?.toLowerCase().trim()
+  if (
+    normalizedInput === 'what is a planet computer?' ||
+    normalizedInput === 'what is qcx-terra?'
+  ) {
```
234-236: ⚠️ Potential issue | 🟡 Minor

Fix QCX‑Terra pricing link formatting.

Line 236 has a space between `]` and `(`, so the markdown link won't render.

🔧 Suggested fix

```diff
-      : `QCX-Terra is a model garden of pixel level precision geospatial foundational models for efficient land feature predictions from satellite imagery. Available for our Pro and Enterprise customers. [QCX Pricing] (https://www.queue.cx/#pricing)`;
+      : `QCX-Terra is a model garden of pixel level precision geospatial foundational models for efficient land feature predictions from satellite imagery. Available for our Pro and Enterprise customers. [QCX Pricing](https://www.queue.cx/#pricing)`;
```
📜 Review details
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (1)
app/actions.tsx
🧰 Additional context used
🧬 Code graph analysis (1)
app/actions.tsx (5)
- lib/auth/get-current-user.ts (1): getCurrentUserIdOnServer (112-127)
- lib/actions/chat.ts (5): getSystemPrompt (268-283), deleteTrailingMessages (186-192), updateMessage (168-175), deleteMessage (177-184), saveChat (104-139)
- lib/types/index.ts (2): AIMessage (59-77), Chat (49-57)
- lib/db/schema.ts (1): messages (26-37)
- lib/actions/chat-db.ts (6): deleteTrailingMessages (272-283), updateMessage (232-247), deleteMessage (254-265), Chat (9-9), saveChat (83-136), msg (117-121)
- Added Edit, Copy, and Delete functionality to user messages in the chat interface.
- Implemented `resubmit` and `deleteMessageAction` server actions with cascading deletion of trailing messages.
- Centralized generation logic in `processChatWorkflow` for reuse and robustness.
- Ensured Next.js 15 compatibility in `app/actions.tsx` by:
- Removing `'use server'` from the top of the file to allow sync exports.
- Adding inline `'use server'` to actions and `createAI` lifecycle methods.
- Matching the `main` branch's structure for the AI provider.
- Resolved React Hook linting warnings in `chat-panel.tsx` and `mapbox-map.tsx`.
- Verified the fix with a successful production build.
Co-authored-by: ngoiyaeric <115367894+ngoiyaeric@users.noreply.github.com>
@coderabbitai there are no tokens being generated in the system and there's a consistent refresh logo loader
Tip: For best results, initiate chat on the files or code changes.

🧩 Analysis chain

🏁 Script executed: Length of output: 7742

The Problem

Looking at the code:

When an error occurs:

🏁 Script executed: Length of output: 1288
🏁 Script executed: Length of output: 6341

Let me check the resolution_search path and the submit function structure:

🏁 Script executed: Length of output: 8638

What's Happening

Your query "How does climate change affect our experience?" should trigger:

Since you see no tokens and infinite loader, the generation is failing at step 2 or 3.

Immediate Debugging Steps

Please check your browser console (F12 → Console tab) and share what errors you see. Look for:

Also, let me verify your environment setup:

Questions:

The most common cause is missing or invalid API keys for the selected model.

🏁 Script executed: Length of output: 4477

🎯 FOUND THE ROOT CAUSE!

🏁 Script executed: Length of output: 1249

🎯 CONFIRMED: Missing API Keys

🏁 Script executed: Length of output: 4380

🎯 ROOT CAUSE CONFIRMED: Missing AI API Keys

❌ Current State

DATABASE_URL="postgresql://..."
NEXT_PUBLIC_MAPBOX_ACCESS_TOKEN="pk.eyJ1..."

✅ Required Setup

You need at least ONE of these AI provider API keys:

Option 1: XAI (Grok) - Recommended

XAI_API_KEY=your_xai_key_here

Get your key: https://console.x.ai/

Option 2: Gemini 3 Pro

GEMINI_3_PRO_API_KEY=your_gemini_key_here

Get your key: https://aistudio.google.com/apikey

Option 3: OpenAI GPT-4o

OPENAI_API_KEY=your_openai_key_here

Get your key: https://platform.openai.com/api-keys

Also Required for Full Functionality:

# For web search
TAVILY_API_KEY=your_tavily_key_here

# For Redis (chat history, settings)
UPSTASH_REDIS_REST_URL=your_upstash_url_here
UPSTASH_REDIS_REST_TOKEN=your_upstash_token_here

# For Supabase (authentication, database)
NEXT_PUBLIC_SUPABASE_URL=your_supabase_url_here
NEXT_PUBLIC_SUPABASE_ANON_KEY=your_supabase_anon_key_here
SUPABASE_SERVICE_ROLE_KEY=your_supabase_service_role_key_here

🚀 Setup Instructions

🔍 Priority Order

The system tries providers in this order (when no model is selected):

Set at least XAI_API_KEY to get started! After adding the keys, the system should:

Let me know once you've added the API keys and I can help verify everything works!

🧠 Learnings used
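A small startup guard can surface this misconfiguration immediately instead of an infinite loader; a sketch (helper name and call site are assumptions, env var names are the ones listed above):

```ts
// Sketch: fail fast when no AI provider key is configured.
export function assertAiProviderConfigured() {
  const hasProvider =
    !!process.env.XAI_API_KEY ||
    !!process.env.GEMINI_3_PRO_API_KEY ||
    !!process.env.OPENAI_API_KEY ||
    (!!process.env.AWS_ACCESS_KEY_ID && !!process.env.AWS_SECRET_ACCESS_KEY)

  if (!hasProvider) {
    throw new Error(
      'No AI provider configured. Set XAI_API_KEY, GEMINI_3_PRO_API_KEY, OPENAI_API_KEY, or AWS credentials.'
    )
  }
}
```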
I've addressed the issue where no tokens were being generated and the loader would persist. This was caused by a combination of incorrect model selection for vision-based queries (resolution search) and a refactoring that introduced some Next.js 15 compatibility issues with Server Actions. I have:
- Added Edit, Copy, and Delete functionality to user messages.
- Implemented `resubmit` and `deleteMessageAction` server actions.
- Centralized generation logic in `processChatWorkflow` for robustness.
- Fixed Next.js 15 build and prerender errors in `app/actions.tsx` by using inline `'use server'` and matching `main` branch structure.
- Corrected model selection in `getModel` and `resolutionSearch` to support vision correctly.
- Added fallback message when AI generation yields no tokens.
- Fixed React Hook dependency warnings.

Co-authored-by: ngoiyaeric <115367894+ngoiyaeric@users.noreply.github.com>
@coderabbitai there are still no tokens being generated.
Tip: For best results, initiate chat on the files or code changes.

🧩 Analysis chain

🏁 Script executed: Length of output: 10272
🏁 Script executed: Length of output: 5967

🔴 Current Problem

Your environment still only contains:

DATABASE_URL="postgresql://..."
NEXT_PUBLIC_MAPBOX_ACCESS_TOKEN=pk.eyJ1...

Missing: All AI provider API keys (OpenAI, XAI, Gemini, etc.)
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
lib/utils/index.ts (2)
101-120: ⚠️ Potential issue | 🟡 Minor

Bedrock and OpenAI fallbacks ignore the `requireVision` parameter.

When falling back to Bedrock or OpenAI, the `requireVision` flag is not used, which may cause vision workflows to fail if they reach these providers. Consider adding vision-capable model selection or logging a warning when vision is required but unsupported.

🛡️ Proposed fix with warning

```diff
 if (awsAccessKeyId && awsSecretAccessKey) {
+  if (requireVision) {
+    console.warn('Bedrock fallback does not support vision models; results may be degraded.');
+  }
   const bedrock = createAmazonBedrock({
     bedrockOptions: {
       region: awsRegion,
       credentials: {
         accessKeyId: awsAccessKeyId,
         secretAccessKey: awsSecretAccessKey,
       },
     },
   });
   const model = bedrock(bedrockModelId, {
     additionalModelRequestFields: { top_k: 350 },
   });
   return model;
 }
+if (requireVision) {
+  console.warn('OpenAI fallback using gpt-4o which supports vision.');
+}
 const openai = createOpenAI({
   apiKey: openaiApiKey,
 });
-return openai('gpt-4o');
+return openai(requireVision ? 'gpt-4o' : 'gpt-4o'); // gpt-4o supports vision
```
64-73: 🧹 Nitpick | 🔵 Trivial

GPT-5.1 case doesn't explicitly handle `requireVision`.

The GPT-5.1 branch always returns `'gpt-4o'` regardless of the `requireVision` flag. While `gpt-4o` does support vision, explicitly documenting or handling this would improve code clarity and consistency with other branches.

lib/agents/resolution-search.tsx (1)
73-78: ⚠️ Potential issue | 🟠 Major

Structured output compatibility issue with Grok models in fallback chain.

This agent uses `streamObject` with a Zod schema for structured JSON output. The call to `getModel(false, hasImage)` uses the default fallback chain (Grok → Gemini → Bedrock → OpenAI), which attempts Grok first if `XAI_API_KEY` is set. Per learnings, xAI's Grok models have unreliable integration with the AI SDK's structured output functions, and this agent is specifically identified as requiring OpenAI gpt-4o for structured output tasks.

Consider:

- Using `getModel(true, hasImage)` to force Gemini 3 (which defaults to gemini-1.5-pro and handles structured outputs better), or
- Implementing model selection logic to prefer OpenAI gpt-4o for all structured output agents (resolution-search, suggest, task-manager, inquire, query-suggestor).

Note: This pattern affects multiple agents in the codebase that use `streamObject`/`generateObject`.
🤖 Fix all issues with AI agents
In `@app/actions.tsx`:
- Around line 653-658: The call to updateMessage is passing a raw string but
updateMessage(messageId: string, updates: Partial<NewMessage>) expects an
updates object; change the second argument to an object with a content property
(e.g., { content: ... }) using the same conditional JSON.stringify logic on
editedMessageInState.content so the signature matches updateMessage and the
content is stored on the NewMessage shape.
📜 Review details
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (3)
- app/actions.tsx
- lib/agents/resolution-search.tsx
- lib/utils/index.ts
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2026-01-13T13:26:30.086Z
Learnt from: ngoiyaeric
Repo: QueueLab/QCX PR: 0
File: :0-0
Timestamp: 2026-01-13T13:26:30.086Z
Learning: Workers that rely on ai.generateObject (geojsonParser, map-command-generator, feedback-analyzer) must not use the X.AI 'grok-4-fast-non-reasoning' model because X.AI’s chat/completions rejects JSON Schema structured-output arguments; prefer OpenAI gpt-4o for structured outputs.
Applied to files:
lib/utils/index.ts
📚 Learning: 2026-01-17T06:14:51.070Z
Learnt from: ngoiyaeric
Repo: QueueLab/QCX PR: 0
File: :0-0
Timestamp: 2026-01-17T06:14:51.070Z
Learning: For structured output tasks using ai.generateObject (resolution-search, geojsonParser, map-command-generator, feedback-analyzer), prefer OpenAI gpt-4o. While xAI's grok-4-1-fast-reasoning technically supports structured outputs and vision, OpenAI has proven more reliable integration with the AI SDK's generateObject function and Zod schemas in production.
Applied to files:
lib/utils/index.ts
🧬 Code graph analysis (3)
lib/utils/index.ts (1)
- lib/actions/users.ts (1): getSelectedModel (126-139)

app/actions.tsx (4)
- lib/auth/get-current-user.ts (1): getCurrentUserIdOnServer (112-127)
- lib/actions/chat.ts (4): getSystemPrompt (268-283), deleteTrailingMessages (186-192), updateMessage (168-175), deleteMessage (177-184)
- lib/types/index.ts (2): AIMessage (59-77), Chat (49-57)
- lib/db/schema.ts (1): messages (26-37)

lib/agents/resolution-search.tsx (1)
- lib/utils/index.ts (1): getModel (20-121)
🔇 Additional comments (4)
app/actions.tsx (4)
19-25: LGTM! The new imports for `updateMessage`, `deleteMessage`, and `deleteTrailingMessages` are correctly added to support the new message management functionality.

769-769: LGTM! The `async` keyword was correctly removed from `getUIStateFromAIState` since it contains no `await` calls, addressing the previous review feedback.

934-940: LGTM! The new `resubmit` and `deleteMessageAction` functions are correctly exported in the AI actions object, enabling the message edit/delete functionality from the UI.

1005-1013: > Likely an incorrect or invalid review comment.
```ts
await updateMessage(
  messageId,
  typeof editedMessageInState.content === 'object'
    ? JSON.stringify(editedMessageInState.content)
    : editedMessageInState.content
)
```
Incorrect updateMessage call signature.
Per the relevant code snippet from lib/actions/chat.ts, updateMessage expects (messageId: string, updates: Partial<NewMessage>). The current call passes a raw string as the second argument instead of an object with a content property.
🐛 Proposed fix
await updateMessage(
messageId,
- typeof editedMessageInState.content === 'object'
- ? JSON.stringify(editedMessageInState.content)
- : editedMessageInState.content
+ {
+ content: typeof editedMessageInState.content === 'object'
+ ? JSON.stringify(editedMessageInState.content)
+ : editedMessageInState.content
+ }
)🤖 Prompt for AI Agents
In `@app/actions.tsx` around lines 653 - 658, The call to updateMessage is passing
a raw string but updateMessage(messageId: string, updates: Partial<NewMessage>)
expects an updates object; change the second argument to an object with a
content property (e.g., { content: ... }) using the same conditional
JSON.stringify logic on editedMessageInState.content so the signature matches
updateMessage and the content is stored on the NewMessage shape.
…ring and streaming logic
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
lib/agents/researcher.tsx (1)
88-92: 🧹 Nitpick | 🔵 Trivial

Remove the unused `answerSection` variable and its now-unnecessary imports.

This JSX block is assigned but never referenced in the function. The `uiStream.update()` calls that previously consumed it were removed, leaving the variable as dead code. This also makes the `Section` and `BotMessage` imports (lines 10–11) unused.

♻️ Proposed cleanup

Remove the unused variable and imports:

```diff
-import { Section } from '@/components/section'
-import { BotMessage } from '@/components/message'

-  const answerSection = (
-    <Section title="response">
-      <BotMessage content={streamText.value} />
-    </Section>
-  )
```

app/actions.tsx (2)

542-549: ⚠️ Potential issue | 🟠 Major

`streamText.done()` can be called twice, which may throw.

When `useSpecificAPI` is false, `streamText.done()` is called at line 543. If the researcher also returned an empty answer (e.g., it broke out of the loop at line 519 with no tool outputs and no answer), then line 548 calls `streamText.done(answer)` again. Calling `.done()` on an already-finalized streamable value will throw a runtime error.

🔧 Proposed fix: guard the fallback

```diff
 } else {
-  streamText.done()
+  if (answer.length > 0) {
+    streamText.done()
+  }
 }

 if (answer.length === 0 && !errorOccurred) {
   answer = "I'm sorry, I couldn't generate a response. Please try again."
   streamText.done(answer)
 }
```

Alternatively, restructure so `streamText.done()` at line 543 is only called when `answer.length > 0`:

```diff
 } else {
-  streamText.done()
-}
-
-if (answer.length === 0 && !errorOccurred) {
-  answer = "I'm sorry, I couldn't generate a response. Please try again."
+  if (answer.length === 0 && !errorOccurred) {
+    answer = "I'm sorry, I couldn't generate a response. Please try again."
+  }
   streamText.done(answer)
 }
```

546-600: ⚠️ Potential issue | 🔴 Critical

`aiState.done()` is never called when `errorOccurred && answer.length === 0` (no exception thrown).

Trace the flow: if the researcher sets `errorOccurred = true` without throwing an exception and produces no answer, line 551 (`!errorOccurred || answer.length > 0`) evaluates to `false`, skipping the `aiState.done()` call at line 561. The `catch` block doesn't fire (no exception), and the `finally` block doesn't call `aiState.done()`. This leaves the mutable AI state unfinalised, which can cause the RSC framework to hang or produce undefined behavior on the client.

🔧 Proposed fix: finalize aiState in the error-with-no-answer path

```diff
   if (!errorOccurred || answer.length > 0) {
     // ... existing success/partial-success path ...
+  } else {
+    // Error occurred with no answer: finalize state as-is
+    aiState.done(aiState.get())
   }
 } catch (error) {
```
🤖 Fix all issues with AI agents
In `@app/actions.tsx`:
- Around line 450-467: The early-return branch after await inquire(uiStream,
messages) is redundantly finalizing streamables; remove the calls to
isGenerating.done(false) and uiStream.done() just before the return so only
isCollapsed.done(false) remains there, and let the existing finally block (which
calls isGenerating.done(false) and uiStream.done()) handle closing those two;
update the inquire branch around the inquire(...) call that sets
aiState.done(...) to only call isCollapsed.done(false) before returning.
- Around line 717-731: The code leaves aiState in a mutable/pending state when
findIndex returns -1; ensure aiState.done(...) is called in that branch so the
RSC state is finalized. Update the logic around the index check in the handler
that uses aiState, messages, and messageId (the block that currently calls
deleteTrailingMessages and deleteMessage) to call aiState.done with the
unmodified aiState.get() (or an explicitly identical state) when index === -1
(or move the aiState.done call into a finally/ensuring path) so the mutable
state is always finalized regardless of whether a message was found.
- Around line 981-989: The constructed sentinel assistant message added to
updatedMessages lacks the createdAt field whereas other AIMessage instances
include createdAt: new Date(); update the object literal inside the
updatedMessages array (the message with id: nanoid(), role: 'assistant',
content: `end`, type: 'end') to include createdAt: new Date() so AIMessage
consistency is preserved and callers that assume createdAt always exists won’t
break.
- Around line 603-607: The resubmit function lacks validation for empty content;
add an early guard in resubmit (use content.trim()) to detect empty or
whitespace-only input and short-circuit before updating the message or calling
processChatWorkflow: if empty, avoid calling updateMessage/processChatWorkflow
(or return early and optionally surface a user-facing error/notification) so
messageId is not overwritten with blank content and no blank user message is
processed.
In `@lib/agents/researcher.tsx`:
- Line 108: Remove the redundant type assertion on the model property by
updating the assignment that currently reads model: (await
getModel(useSpecificModel, hasImage)) as LanguageModel to just await
getModel(useSpecificModel, hasImage). Locate the model property in
lib/agents/researcher.tsx where getModel is called and remove the "as
LanguageModel" cast so the returned provider instance is used with its native
typing; ensure imports/usage of getModel remain unchanged.
📜 Review details
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (2)
- app/actions.tsx
- lib/agents/researcher.tsx
🧰 Additional context used
🧬 Code graph analysis (1)
lib/agents/researcher.tsx (1)
lib/utils/index.ts (1)
getModel(20-121)
🔇 Additional comments (4)
lib/agents/researcher.tsx (1)
127-134: Type casts on stream deltas are reasonable. The `as ToolCallPart` and `as ToolResultPart` casts align with the discriminated union narrowing: when `delta.type` is `'tool-call'` or `'tool-result'`, the payload fields match the respective Part interfaces. These casts are safe for the current AI SDK shape.

app/actions.tsx (3)

795-805: Good: `id` prop now passed to `UserMessage`. This correctly enables the per-message edit/delete functionality in the UI component.

930-948: LGTM: `onGetUIState` and action registrations. The new `resubmit` and `deleteMessageAction` are correctly registered, and `onGetUIState` properly reconstructs UI state from AI state with the authentication-guarded `onSetAIState` persistence hook.

949-1010: LGTM: `onSetAIState` persistence hook is well-structured. Proper authentication guard, backward-compatible title extraction, and conditional persistence (only when a response exists) are all solid design choices.
```diff
 if (action.object.next === 'inquire') {
   const inquiry = await inquire(uiStream, messages)
-  uiStream.done()
-  isGenerating.done()
-  isCollapsed.done(false)
   aiState.done({
     ...aiState.get(),
     messages: [
       ...aiState.get().messages,
       {
         id: nanoid(),
         role: 'assistant',
-        content: `inquiry: ${inquiry?.question}`
+        content: `inquiry: ${inquiry?.question}`,
+        createdAt: new Date()
       }
     ]
   })
+  isGenerating.done(false)
+  isCollapsed.done(false)
+  uiStream.done()
   return
```
Double .done() on streamables in the inquire early-return path.
When the task manager returns 'inquire', lines 464–466 call isGenerating.done(false), isCollapsed.done(false), and uiStream.done(). After the return at line 467, the finally block (lines 597–599) calls isGenerating.done(false) and uiStream.done() again. Calling .done() on an already-finalized streamable will throw.
🔧 Proposed fix — remove redundant `.done()` calls before the early return
Let the finally block handle closing the streamables, so the early-return path doesn't close them first:
aiState.done({
...aiState.get(),
messages: [
...aiState.get().messages,
{
id: nanoid(),
role: 'assistant',
content: `inquiry: ${inquiry?.question}`,
createdAt: new Date()
}
]
})
- isGenerating.done(false)
- isCollapsed.done(false)
- uiStream.done()
+ isCollapsed.done(false)
return
}This way isGenerating.done(false) and uiStream.done() are only called once, in the finally block. isCollapsed.done(false) is called here because the finally block doesn't handle it, and the non-inquire path calls isCollapsed.done(true) at line 470.
Also applies to: 597-600
🤖 Prompt for AI Agents
In `@app/actions.tsx` around lines 450 - 467, The early-return branch after await
inquire(uiStream, messages) is redundantly finalizing streamables; remove the
calls to isGenerating.done(false) and uiStream.done() just before the return so
only isCollapsed.done(false) remains there, and let the existing finally block
(which calls isGenerating.done(false) and uiStream.done()) handle closing those
two; update the inquire branch around the inquire(...) call that sets
aiState.done(...) to only call isCollapsed.done(false) before returning.
```ts
async function resubmit(
  messageId: string,
  content: string,
  mapProvider: 'mapbox' | 'google' = 'mapbox'
) {
```
No validation for empty content in resubmit.
If content is an empty string, the message will be updated to empty content and processChatWorkflow will be invoked with an effectively blank user message. Consider adding an early guard.
🛡️ Proposed fix
async function resubmit(
messageId: string,
content: string,
mapProvider: 'mapbox' | 'google' = 'mapbox'
) {
'use server'
+ if (!content.trim()) {
+ return {
+ id: nanoid(),
+ isGenerating: createStreamableValue(false).value,
+ component: null,
+ isCollapsed: createStreamableValue(false).value
+ }
+ }
+
const aiState = getMutableAIState<typeof AI>()🤖 Prompt for AI Agents
In `@app/actions.tsx` around lines 603 - 607, The resubmit function lacks
validation for empty content; add an early guard in resubmit (use
content.trim()) to detect empty or whitespace-only input and short-circuit
before updating the message or calling processChatWorkflow: if empty, avoid
calling updateMessage/processChatWorkflow (or return early and optionally
surface a user-facing error/notification) so messageId is not overwritten with
blank content and no blank user message is processed.
```ts
if (index !== -1) {
  const messageToDelete = messages[index]
  const chatId = aiState.get().chatId

  if (messageToDelete.createdAt) {
    await deleteTrailingMessages(chatId, new Date(messageToDelete.createdAt))
  }
  await deleteMessage(messageId)

  const truncatedMessages = messages.slice(0, index)
  aiState.done({
    ...aiState.get(),
    messages: truncatedMessages
  })
}
```
aiState is never finalized when the message is not found (index === -1).
If findIndex returns -1, the function exits without calling aiState.done(). Since getMutableAIState was called, the AI state is left in a mutable/pending state, which can cause the RSC framework to hang or behave unpredictably for the client.
🔧 Proposed fix
async function deleteMessageAction(messageId: string) {
'use server'
const aiState = getMutableAIState<typeof AI>()
const messages = aiState.get().messages
const index = messages.findIndex(m => m.id === messageId)
if (index !== -1) {
const messageToDelete = messages[index]
const chatId = aiState.get().chatId
if (messageToDelete.createdAt) {
await deleteTrailingMessages(chatId, new Date(messageToDelete.createdAt))
}
await deleteMessage(messageId)
const truncatedMessages = messages.slice(0, index)
aiState.done({
...aiState.get(),
messages: truncatedMessages
})
+ } else {
+ aiState.done(aiState.get())
}
}🤖 Prompt for AI Agents
In `@app/actions.tsx` around lines 717 - 731, The code leaves aiState in a
mutable/pending state when findIndex returns -1; ensure aiState.done(...) is
called in that branch so the RSC state is finalized. Update the logic around the
index check in the handler that uses aiState, messages, and messageId (the block
that currently calls deleteTrailingMessages and deleteMessage) to call
aiState.done with the unmodified aiState.get() (or an explicitly identical
state) when index === -1 (or move the aiState.done call into a finally/ensuring
path) so the mutable state is always finalized regardless of whether a message
was found.
app/actions.tsx
Outdated
```ts
const updatedMessages: AIMessage[] = [
  ...messages,
  {
    id: nanoid(),
    role: 'assistant',
    content: `end`,
    type: 'end'
  }
]
```
'end' message is missing createdAt, unlike all other messages in this PR.
Every other message construction in this file now includes createdAt: new Date(). The end sentinel message omits it, which is inconsistent and could break code that assumes createdAt is always present.
🔧 Proposed fix
const updatedMessages: AIMessage[] = [
...messages,
{
id: nanoid(),
role: 'assistant',
content: `end`,
- type: 'end'
+ type: 'end',
+ createdAt: new Date()
}
 ]
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
const updatedMessages: AIMessage[] = [
  ...messages,
  {
    id: nanoid(),
    role: 'assistant',
    content: `end`,
    type: 'end',
    createdAt: new Date()
  }
]
```
🤖 Prompt for AI Agents
In `@app/actions.tsx` around lines 981 - 989, The constructed sentinel assistant
message added to updatedMessages lacks the createdAt field whereas other
AIMessage instances include createdAt: new Date(); update the object literal
inside the updatedMessages array (the message with id: nanoid(), role:
'assistant', content: `end`, type: 'end') to include createdAt: new Date() so
AIMessage consistency is preserved and callers that assume createdAt always
exists won’t break.
```diff
 const result = await nonexperimental_streamText({
-  model: (await getModel(hasImage)) as LanguageModel,
+  model: (await getModel(useSpecificModel, hasImage)) as LanguageModel,
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
echo "=== Finding getModel function definition ==="
rg -n 'export.*function getModel|export.*const getModel' --type=ts --type=tsx
echo ""
echo "=== Finding LanguageModel type definition ==="
rg -n 'type LanguageModel|interface LanguageModel' --type=ts --type=tsx
echo ""
echo "=== All calls to getModel with context ==="
rg -B2 -A2 'getModel\(' --type=ts --type=tsx | head -80
echo ""
echo "=== Check if getModel has a direct return type annotation ==="
rg -A10 'async function getModel|export.*getModel.*=' --type=ts --type=tsx | head -50
Repository: QueueLab/QCX
Length of output: 383
🏁 Script executed:
#!/bin/bash
echo "=== Finding all TypeScript/TSX files ==="
fd -e ts -e tsx | head -20
echo ""
echo "=== Search for getModel definition (no type restriction) ==="
rg -n 'export.*function getModel|export.*const getModel|^function getModel' -A5
echo ""
echo "=== Search for LanguageModel type ==="
rg -n 'type LanguageModel|interface LanguageModel'
echo ""
echo "=== Search for getModel calls ==="
rg -n 'getModel\(' -B1 -A1
echo ""
echo "=== Check the specific file in question ==="
cat -n lib/agents/researcher.tsx | sed -n '100,120p'
Repository: QueueLab/QCX
Length of output: 4676
🏁 Script executed:
#!/bin/bash
echo "=== Full getModel implementation ==="
sed -n '20,80p' lib/utils/index.ts
echo ""
echo "=== Check imports in researcher.tsx ==="
head -30 lib/agents/researcher.tsx
echo ""
echo "=== Check imports in resolution-search.tsx (no cast version) ==="
head -20 lib/agents/resolution-search.tsx
echo ""
echo "=== Check what streamObject and nonexperimental_streamText expect ==="
rg -n 'streamObject|nonexperimental_streamText' -B1 -A3 | head -60
Repository: QueueLab/QCX
Length of output: 8541
🏁 Script executed:
#!/bin/bash
echo "=== Verify return types from Vercel AI SDK provider calls ==="
cat lib/utils/index.ts | grep -A5 "return.*(" | head -30
echo ""
echo "=== Check if LanguageModel is a union type or what streamObject accepts ==="
rg -n "streamObject" lib/actions/suggest.ts -B2 -A10 | head -25
echo ""
echo "=== Why does resolution-search use streamObject without cast? ==="
cat lib/agents/resolution-search.tsx | sed -n '70,80p'
Repository: QueueLab/QCX
Length of output: 2477
🏁 Script executed:
#!/bin/bash
echo "=== Check the complete getModel return statements to verify all paths return compatible types ==="
sed -n '20,150p' lib/utils/index.ts | grep -A2 "return"
echo ""
echo "=== Verify resolution-search.tsx full context around line 74 ==="
sed -n '60,85p' lib/agents/resolution-search.tsx
echo ""
echo "=== Check TypeScript error if cast is removed from researcher.tsx ==="
echo "Checking what Vercel AI SDK types are expected..."
rg -n "model:" lib/agents -A1 | head -40
Repository: QueueLab/QCX
Length of output: 3370
Remove the unnecessary as LanguageModel cast—the return type is already compatible.
The getModel function returns provider instances from Vercel AI SDK (xai(), google(), openai()), which are natively typed as LanguageModel. The cast is redundant, as evidenced by lib/agents/resolution-search.tsx:74 which uses streamObject without the cast and works correctly. Consistent with the codebase pattern, remove the type assertion:
model: await getModel(useSpecificModel, hasImage),
🤖 Prompt for AI Agents
In `@lib/agents/researcher.tsx` at line 108, Remove the redundant type assertion
on the model property by updating the assignment that currently reads model:
(await getModel(useSpecificModel, hasImage)) as LanguageModel to just await
getModel(useSpecificModel, hasImage). Locate the model property in
lib/agents/researcher.tsx where getModel is called and remove the "as
LanguageModel" cast so the returned provider instance is used with its native
typing; ensure imports/usage of getModel remain unchanged.
…roper stream updates
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
app/actions.tsx (1)
561-568: ⚠️ Potential issue | 🔴 Critical

Double `.done()` on `streamText` when researcher returns an empty answer.

When `useSpecificAPI` is `false` and the researcher loop produces an empty `answer` without error: Line 562 calls `streamText.done()`, then Line 567 calls `streamText.done(answer)` again. Calling `.done()` on an already-finalized streamable will throw a runtime error.

🐛 Proposed fix: guard the fallback

```diff
 } else {
-  streamText.done()
+  if (answer.length > 0) {
+    streamText.done()
+  }
 }

 if (answer.length === 0 && !errorOccurred) {
   answer = "I'm sorry, I couldn't generate a response. Please try again."
   streamText.done(answer)
 }
```

Alternatively, consolidate into a single done path:

```diff
 } else {
-  streamText.done()
-}
-
-if (answer.length === 0 && !errorOccurred) {
-  answer = "I'm sorry, I couldn't generate a response. Please try again."
+  if (answer.length === 0 && !errorOccurred) {
+    answer = "I'm sorry, I couldn't generate a response. Please try again."
+  }
   streamText.done(answer)
 }
```

lib/agents/resolution-search.tsx (1)

73-78: ⚠️ Potential issue | 🟠 Major

`streamObject` with a schema may fail if X.AI is selected as the default provider.

`getModel(false, hasImage)` uses the default provider chain (XAI → Gemini → Bedrock → OpenAI). If `XAI_API_KEY` is configured, an X.AI model will be selected first. However, this function uses `streamObject` with a Zod schema (structured output), and X.AI's API rejects JSON Schema structured-output arguments, causing a runtime failure.

Consider either forcing a structured-output-compatible model (e.g., `getModel(true, hasImage)` with a Gemini/OpenAI specific model) or adding a `requireStructuredOutput` parameter to `getModel` to skip X.AI in the provider chain.

Based on learnings: workers that rely on ai.generateObject must not use the X.AI model because X.AI's chat/completions rejects JSON Schema structured-output arguments; prefer OpenAI gpt-4o for structured outputs.

lib/utils/index.ts (2)

115-118: ⚠️ Potential issue | 🟠 Major

Final OpenAI fallback has no API-key guard, unlike every other provider.

All other providers check for the presence of their API key before attempting initialization. This fallback creates an OpenAI client with a potentially `undefined` API key, which will fail at request time with an opaque error rather than a clear configuration message. This was identified in PR comments as a contributor to the "infinite loader" issue.

Proposed fix

```diff
+if (openaiApiKey) {
   const openai = createOpenAI({
     apiKey: openaiApiKey,
   });
   return openai('gpt-4o');
+}
+
+throw new Error(
+  'No AI provider is configured. Set at least one of: XAI_API_KEY, GEMINI_3_PRO_API_KEY, AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY, or OPENAI_API_KEY.'
+);
```

62-67: ⚠️ Potential issue | 🟠 Major

Model label "GPT-5.1" incorrectly uses `gpt-4o`; it should use `gpt-5.1`.

Users selecting "GPT-5.1" will receive `gpt-4o` instead. GPT-5.1 is a valid OpenAI model (identifier: `gpt-5.1`), so the model ID on line 67 must be updated from `openai('gpt-4o')` to `openai('gpt-5.1')`.
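The smallest version of that change, as a sketch (the `requireVision` fallback to gpt-4o is an assumption, not part of the reviewer's suggestion):

```ts
import { createOpenAI } from '@ai-sdk/openai'

// Sketch: honor the "GPT-5.1" label with the matching model id.
function resolveOpenAiModel(openaiApiKey: string, requireVision: boolean) {
  const openai = createOpenAI({ apiKey: openaiApiKey })
  return openai(requireVision ? 'gpt-4o' : 'gpt-5.1')
}
```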
🤖 Fix all issues with AI agents
In `@app/actions.tsx`:
- Around line 840-848: The declarations in the switch branch for case 'related'
(relatedQueries, created via createStreamableValue and used to render
<SearchRelated>) must be scoped to that branch to avoid cross-clause access;
wrap the case 'related' body in its own block { ... } and move the JSON.parse,
relatedQueries.done call, and the return { id, component: <SearchRelated
relatedQueries={relatedQueries.value} /> } inside that block so relatedQueries
is block-scoped and cannot be referenced from other switch cases.
In `@components/followup-panel.tsx`:
- Around line 22-26: Replace the predictable Date.now().toString() id generation
in components/followup-panel.tsx with nanoid() to avoid collisions: add an
import for nanoid and change the id assignment used when building the
userMessage (the id variable used for the userMessage object and passed into
<UserMessage />) so it calls nanoid() instead of Date.now().toString().
In `@components/map/mapbox-map.tsx`:
- Around line 470-479: The effect that creates the Mapbox map should not include
mapData.cameraState, position?.latitude, or position?.longitude in its
dependency array because captureMapCenter (registered on moveend) updates
mapData.cameraState and position changes are handled by the separate flyTo
effect; remove those three dependencies from the useEffect dependency array (the
one referencing setMap, setIsMapLoaded, captureMapCenter, handleUserInteraction,
stopRotation, etc.) so the map is only created/destroyed on mount/unmount and
not on camera/position updates, and if your linter complains add an
eslint-disable-next-line comment with a short explanation that these values are
intentionally read only on mount.
In `@lib/utils/index.ts`:
- Around line 18-19: In getModel, the SPECIFIC_API_MODEL env value can be an
identifier that won't match the display-label switch cases in getModel; update
the switch inside getModel (the branch that handles selectedModel when
useSpecificAPI is true) to add a default case that logs a clear warning
including the actual selectedModel and suggests it may be misconfigured so
operators see the mismatch; keep existing behavior after logging (e.g., fall
through to provider cascade or explicitly map known identifiers) and reference
selectedModel and the getModel function when making the change.
- Line 38: The "Grok 4.2" case currently returns deprecated model IDs via
xai(requireVision ? 'grok-vision-beta' : 'grok-beta'); update those identifiers
to current xAI models by replacing 'grok-vision-beta' with 'grok-2-vision-1212'
and 'grok-beta' with 'grok-4-0709' in the xai(...) calls (both the main case
handling "Grok 4.2" and the fallback path that uses the same expression) so the
xai(...) invocations reference supported model names.
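As a sketch, the Grok branch after that substitution would read as below; the model ids come from the prompt above and should be verified against current xAI documentation before adopting:

```ts
import { createXai } from '@ai-sdk/xai'

function resolveGrokModel(xaiApiKey: string, requireVision: boolean) {
  const xai = createXai({ apiKey: xaiApiKey, baseURL: 'https://api.x.ai/v1' })
  // Ids suggested by the review prompt, replacing the deprecated grok-vision-beta / grok-beta.
  return xai(requireVision ? 'grok-2-vision-1212' : 'grok-4-0709')
}
```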
📜 Review details
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (9)
- app/actions.tsx
- components/chat-panel.tsx
- components/followup-panel.tsx
- components/header-search-button.tsx
- components/map/mapbox-map.tsx
- components/search-related.tsx
- lib/agents/researcher.tsx
- lib/agents/resolution-search.tsx
- lib/utils/index.ts
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2026-01-13T13:26:30.086Z
Learnt from: ngoiyaeric
Repo: QueueLab/QCX PR: 0
File: :0-0
Timestamp: 2026-01-13T13:26:30.086Z
Learning: Workers that rely on ai.generateObject (geojsonParser, map-command-generator, feedback-analyzer) must not use the X.AI 'grok-4-fast-non-reasoning' model because X.AI’s chat/completions rejects JSON Schema structured-output arguments; prefer OpenAI gpt-4o for structured outputs.
Applied to files:
lib/utils/index.ts, lib/agents/researcher.tsx
📚 Learning: 2026-01-17T06:14:51.070Z
Learnt from: ngoiyaeric
Repo: QueueLab/QCX PR: 0
File: :0-0
Timestamp: 2026-01-17T06:14:51.070Z
Learning: For structured output tasks using ai.generateObject (resolution-search, geojsonParser, map-command-generator, feedback-analyzer), prefer OpenAI gpt-4o. While xAI's grok-4-1-fast-reasoning technically supports structured outputs and vision, OpenAI has proven more reliable integration with the AI SDK's generateObject function and Zod schemas in production.
Applied to files:
lib/utils/index.ts
🧬 Code graph analysis (8)
components/chat-panel.tsx (3)
- lib/schema/related.tsx (1): PartialRelated (13-13)
- components/user-message.tsx (1): UserMessage (36-199)
- components/chat.tsx (1): Chat (29-216)
lib/utils/index.ts (1)
- lib/actions/users.ts (1): getSelectedModel (126-139)
components/header-search-button.tsx (2)
- components/user-message.tsx (1): UserMessage (36-199)
- app/page.tsx (1): Page (9-18)
components/search-related.tsx (1)
- components/user-message.tsx (1): UserMessage (36-199)
components/map/mapbox-map.tsx (4)
- components/map/map-3d.tsx (2): prevData (35-45), p (29-49)
- components/map/google-map.tsx (1): mapData (54-70)
- components/map/map-query-handler.tsx (2): prevData (39-48), useMapData (30-82)
- components/map/map-data-context.tsx (1): CameraState (5-13)
app/actions.tsx (5)
- lib/auth/get-current-user.ts (1): getCurrentUserIdOnServer (112-127)
- lib/actions/chat.ts (4): getSystemPrompt (268-283), deleteTrailingMessages (186-192), updateMessage (168-175), deleteMessage (177-184)
- lib/agents/researcher.tsx (1): researcher (82-159)
- lib/types/index.ts (1): AIMessage (59-77)
- lib/actions/chat-db.ts (3): deleteTrailingMessages (272-283), updateMessage (232-247), deleteMessage (254-265)
lib/agents/resolution-search.tsx (1)
- lib/utils/index.ts (1): getModel (18-119)
components/followup-panel.tsx (3)
- components/user-message.tsx (1): UserMessage (36-199)
- lib/actions/chat.ts (1): updateDrawingContext (165-205)
- components/sidebar/chat-history-client.tsx (1): ChatHistoryClientProps (26-28)
🪛 Biome (2.3.13)
app/actions.tsx
[error] 841-843: Other switch clauses can erroneously access this declaration.
Wrap the declaration in a block to restrict its access to the switch clause.
The declaration is defined in this switch clause:
Safe fix: Wrap the declaration in a block.
(lint/correctness/noSwitchDeclarations)
🔇 Additional comments (5)
components/search-related.tsx (1)
32-36: LGTM — consistent id generation and propagation.
The nanoid() id is correctly generated and passed to both the message object and the UserMessage component, consistent with the pattern used in chat-panel.tsx and header-search-button.tsx.
components/chat-panel.tsx (1)
37-43: LGTM — id generation and suggestions callback are well-structured.
The nanoid() id is consistently generated and propagated to the message object, the UserMessage component, and the server via FormData. The useCallback wrapper for setSuggestions correctly synchronizes local and parent state.
Also applies to: 108-118
components/header-search-button.tsx (1)
51-58: LGTM — consistent id wiring for resolution search messages.
The nanoid() id is correctly generated and passed to both the message object and the UserMessage component, aligning with the pattern across other components.
lib/agents/researcher.tsx (1)
133-140: LGTM — tool delta casts are appropriate within discriminated switch cases.
The as ToolCallPart and as ToolResultPart casts at Lines 134 and 139 narrow the delta type within the correct case branches of the fullStream discriminated union, which is a standard pattern with the AI SDK.
app/actions.tsx (1)
893-904: The synchronous implementation of getUIStateFromAIState is correct and required by the API.
The Vercel AI SDK RSC createAI API explicitly requires onGetUIState to be synchronous with signature () => UIState. The function at line 784 is properly typed and implemented as a synchronous arrow function (aiState: AIState): UIState, making it compatible with the onGetUIState parameter at line 903.
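As a point of reference, a minimal sketch of how the new actions might be registered with createAI; the state shapes, action signatures, and exact import path depend on the AI SDK version and on what app/actions.tsx already defines, so treat every detail here as an assumption.

```ts
// Sketch only: placeholder shapes stand in for the real AIState/UIState and actions in app/actions.tsx.
import { createAI } from 'ai/rsc';

type AIState = { chatId: string; messages: unknown[] };
type UIState = { id: string; component: unknown }[];

declare const submit: (formData?: FormData) => Promise<UIState[number]>;
declare const resubmit: (id: string, content: string) => Promise<UIState[number]>;
declare const deleteMessageAction: (id: string) => Promise<void>;

export const AI = createAI({
  actions: { submit, resubmit, deleteMessageAction },
  initialUIState: [] as UIState,
  initialAIState: { chatId: '', messages: [] } as AIState
});
```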
✏️ Tip: You can disable this entire section by setting review_details to false in your review settings.
  case 'related':
-   const relatedQueries = createStreamableValue<RelatedQueries>({
-     items: []
-   })
+   const relatedQueries = createStreamableValue(
+     JSON.parse(content as string)
+   )
+   relatedQueries.done(JSON.parse(content as string))
    return {
      id,
-     component: (
-       <Section title="Related" separator={true}>
-         <SearchRelated relatedQueries={relatedQueries.value} />
-       </Section>
-     )
+     component: <SearchRelated relatedQueries={relatedQueries.value} />
    }
Wrap case 'related' declarations in a block to prevent cross-clause access.
The relatedQueries variable declared at Line 841 inside a switch case can be erroneously accessed from other clauses. Biome flags this as noSwitchDeclarations.
🔧 Proposed fix
- case 'related':
- const relatedQueries = createStreamableValue(
- JSON.parse(content as string)
- )
- relatedQueries.done(JSON.parse(content as string))
- return {
- id,
- component: <SearchRelated relatedQueries={relatedQueries.value} />
- }
+ case 'related': {
+ const relatedQueries = createStreamableValue(
+ JSON.parse(content as string)
+ )
+ relatedQueries.done(JSON.parse(content as string))
+ return {
+ id,
+ component: <SearchRelated relatedQueries={relatedQueries.value} />
+ }
+ }📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
case 'related': {
  const relatedQueries = createStreamableValue(
    JSON.parse(content as string)
  )
  relatedQueries.done(JSON.parse(content as string))
  return {
    id,
    component: <SearchRelated relatedQueries={relatedQueries.value} />
  }
}
🧰 Tools
🪛 Biome (2.3.13)
[error] 841-843: Other switch clauses can erroneously access this declaration.
Wrap the declaration in a block to restrict its access to the switch clause.
The declaration is defined in this switch clause:
Safe fix: Wrap the declaration in a block.
(lint/correctness/noSwitchDeclarations)
🤖 Prompt for AI Agents
In `@app/actions.tsx` around lines 840 - 848, The declarations in the switch
branch for case 'related' (relatedQueries, created via createStreamableValue and
used to render <SearchRelated>) must be scoped to that branch to avoid
cross-clause access; wrap the case 'related' body in its own block { ... } and
move the JSON.parse, relatedQueries.done call, and the return { id, component:
<SearchRelated relatedQueries={relatedQueries.value} /> } inside that block so
relatedQueries is block-scoped and cannot be referenced from other switch cases.
+ const id = Date.now().toString()
  const userMessage = {
-   id: Date.now(),
+   id,
    isGenerating: false,
-   component: <UserMessage content={input} />
+   component: <UserMessage id={id} content={input} />
Inconsistent id generation — use nanoid() instead of Date.now().toString().
Every other component (chat-panel.tsx, search-related.tsx, header-search-button.tsx) uses nanoid() for message ids. Date.now().toString() can produce collisions under rapid submissions and yields predictable, non-unique values. Since the id is used for edit/delete lookups, a collision would target the wrong message.
🔧 Proposed fix
Add the import:
import { ArrowRight } from 'lucide-react'
import { useMapData } from './map/map-data-context'
+import { nanoid } from 'nanoid'
Then replace the id generation:
- const id = Date.now().toString()
+ const id = nanoid()
🤖 Prompt for AI Agents
In `@components/followup-panel.tsx` around lines 22 - 26, Replace the predictable
Date.now().toString() id generation in components/followup-panel.tsx with
nanoid() to avoid collisions: add an import for nanoid and change the id
assignment used when building the userMessage (the id variable used for the
userMessage object and passed into <UserMessage />) so it calls nanoid() instead
of Date.now().toString().
}, [
  setMap,
  setIsMapLoaded,
  captureMapCenter,
  handleUserInteraction,
  stopRotation,
  mapData.cameraState,
  position?.latitude,
  position?.longitude
])
Critical: mapData.cameraState in this dependency array creates an infinite destroy/recreate loop.
This useEffect creates the Mapbox map instance, and its cleanup (line 462) sets map.current = null. When any dependency changes, cleanup destroys the map and the effect re-runs, passing the !map.current guard at line 374.
The problem: captureMapCenter (registered on moveend at line 409) updates mapData.cameraState via setMapData. So the cycle is:
- Map initializes → moveend fires → captureMapCenter updates cameraState
- cameraState reference changes → this effect re-runs
- Cleanup destroys the map → effect recreates the map → moveend fires → ∞ loop
Similarly, position?.latitude / position?.longitude will cause a full map teardown and rebuild on every position change, even though there's already a dedicated useEffect at lines 548–552 that handles position updates via flyTo.
Remove mapData.cameraState, position?.latitude, and position?.longitude from this dependency array. The initial values from these are already read inside the effect body — they only need to be read once at mount time, which is the correct behavior for a map initialization effect. If the linter complains, suppress it with an eslint-disable comment explaining the intent.
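For context, the dedicated position effect referred to above typically looks like the following sketch; the ref and prop names are assumptions, and the real effect lives around lines 548-552 of mapbox-map.tsx.

```tsx
// Sketch only: re-centers the existing map instance instead of recreating it.
useEffect(() => {
  if (!map.current || position?.latitude == null || position?.longitude == null) return;
  map.current.flyTo({
    center: [position.longitude, position.latitude], // Mapbox GL expects [lng, lat]
    essential: true
  });
}, [position?.latitude, position?.longitude]);
```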
Proposed fix
}, [
setMap,
setIsMapLoaded,
captureMapCenter,
handleUserInteraction,
stopRotation,
- mapData.cameraState,
- position?.latitude,
- position?.longitude
+ // Note: mapData.cameraState and position are intentionally excluded.
+ // They are read once during initialization; subsequent updates are
+ // handled by dedicated useEffects (lines 548-552, 555-567).
+ // eslint-disable-next-line react-hooks/exhaustive-deps
])
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
}, [
  setMap,
  setIsMapLoaded,
  captureMapCenter,
  handleUserInteraction,
  stopRotation,
  // Note: mapData.cameraState and position are intentionally excluded.
  // They are read once during initialization; subsequent updates are
  // handled by dedicated useEffects (lines 548-552, 555-567).
  // eslint-disable-next-line react-hooks/exhaustive-deps
])
🤖 Prompt for AI Agents
In `@components/map/mapbox-map.tsx` around lines 470 - 479, The effect that
creates the Mapbox map should not include mapData.cameraState,
position?.latitude, or position?.longitude in its dependency array because
captureMapCenter (registered on moveend) updates mapData.cameraState and
position changes are handled by the separate flyTo effect; remove those three
dependencies from the useEffect dependency array (the one referencing setMap,
setIsMapLoaded, captureMapCenter, handleUserInteraction, stopRotation, etc.) so
the map is only created/destroyed on mount/unmount and not on camera/position
updates, and if your linter complains add an eslint-disable-next-line comment
with a short explanation that these values are intentionally read only on mount.
export async function getModel(useSpecificAPI: boolean = false, requireVision: boolean = false) {
  const selectedModel = useSpecificAPI ? (process.env.SPECIFIC_API_MODEL || 'Gemini 3') : await getSelectedModel();
SPECIFIC_API_MODEL silently falls through if its value doesn't match a display label.
The switch cases use display labels ('Grok 4.2', 'Gemini 3', 'GPT-5.1'), but SPECIFIC_API_MODEL is an environment variable that an operator might reasonably set to a model identifier (e.g. gemini-3-pro-preview). A mismatch silently skips all explicit cases and falls into the default provider cascade, which may pick a completely different provider/model with no warning.
Consider adding a default case inside the switch that logs a warning when selectedModel doesn't match any known label, so misconfigurations surface immediately.
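One possible shape for that normalization, sketched here with hypothetical identifier keys; the real label set is whatever getModel's switch already handles.

```ts
// Sketch only: normalizes SPECIFIC_API_MODEL before the switch; the identifier keys are assumptions.
const KNOWN_LABELS = ['Grok 4.2', 'Gemini 3', 'GPT-5.1'];
const IDENTIFIER_TO_LABEL: Record<string, string> = {
  'gemini-3-pro-preview': 'Gemini 3',
  'gpt-5.1': 'GPT-5.1',
  'grok-4-0709': 'Grok 4.2'
};

function normalizeSelectedModel(raw: string): string {
  const mapped = IDENTIFIER_TO_LABEL[raw.toLowerCase()] ?? raw;
  if (!KNOWN_LABELS.includes(mapped)) {
    console.warn(`Unknown SPECIFIC_API_MODEL "${raw}"; falling back to the default provider cascade.`);
  }
  return mapped;
}
```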
Proposed fix
case 'GPT-5.1':
// ...existing code...
+ default:
+ console.warn(`Unknown selectedModel "${selectedModel}", falling back to default provider cascade.`);
+ break;
}
}
🤖 Prompt for AI Agents
In `@lib/utils/index.ts` around lines 18 - 19, In getModel, the SPECIFIC_API_MODEL
env value can be an identifier that won't match the display-label switch cases
in getModel; update the switch inside getModel (the branch that handles
selectedModel when useSpecificAPI is true) to add a default case that logs a
clear warning including the actual selectedModel and suggests it may be
misconfigured so operators see the mismatch; keep existing behavior after
logging (e.g., fall through to provider cascade or explicitly map known
identifiers) and reference selectedModel and the getModel function when making
the change.
  });
  try {
-   return xai('grok-4-fast-non-reasoning');
+   return xai(requireVision ? 'grok-vision-beta' : 'grok-beta');
🧩 Analysis chain
🌐 Web query:
xAI Grok API model identifiers 2025 2026 documentation
💡 Result:
xAI Grok API model identifiers you can pass as the model field (as documented across 2025–2026 xAI docs):
Text / reasoning (chat)
- Grok 4 (flagship)
- Canonical model name:
grok-4-0709
- Aliases:
grok-4, grok-4-latest [2]
- Canonical model name:
- Grok 3
- Model name:
grok-3 - Aliases:
grok-3-latest,grok-3-beta,grok-3-fast,grok-3-fast-latest,grok-3-fast-beta[1]
- Model name:
- Grok 4 Fast / Grok 4.1 Fast (2M-context “fast” family, as listed on x.ai)
grok-4-fast-reasoning,grok-4-fast-non-reasoninggrok-4-1-fast-reasoning,grok-4-1-fast-non-reasoning[3]- Docs/examples also reference family aliases like
grok-4-1-fast/grok-4-fastfor tooling guides. [4]
Coding
grok-code-fast-1[3]
Vision (image understanding → text output)
grok-2-vision-1212(also listed on x.ai as an available model) [3]- Older/deprecated:
grok-vision-beta [3]
Image generation (text → images)
grok-2-image-1212 [3]
Older / deprecated
grok-2-1212 [3]
grok-beta (deprecated/older beta naming) [3]
Where to verify for your account/team: xAI notes models can be deprecated/obsolete over time; the authoritative “enabled for you” list is in the xAI Console and can also be listed via the Management API. [5]
Sources:
[1] xAI Docs — Grok 3 model page (details/aliases) https://docs.x.ai/docs/models/grok-3-fast
[2] xAI Docs — Grok 4 model page (details/aliases) https://docs.x.ai/docs/models/grok-4
[3] xAI — API models & pricing list (shows available model identifiers) https://x.ai/api/
[4] xAI Docs — Tools overview (examples using grok-4-1-fast, grok-4-fast) https://docs.x.ai/docs/guides/tools/overview
[5] xAI Docs — Migrating to New Models / deprecation guidance https://docs.x.ai/docs/key-information/migrating-to-new-models
🏁 Script executed:
# Examine the context around line 38 and the fallback on line 82 mentioned in the comment
cat -n lib/utils/index.ts | head -90
Repository: QueueLab/QCX
Length of output: 4034
Update deprecated xAI model identifiers — grok-vision-beta and grok-beta are no longer supported.
The case label "Grok 4.2" uses deprecated xAI model identifiers that xAI has sunset. Current xAI models are grok-2-vision-1212 (for vision), grok-4-0709 (for text), or other current variants. Both line 38 and line 82 (fallback path) use the same deprecated identifiers and must be updated to avoid API failures.
🤖 Prompt for AI Agents
In `@lib/utils/index.ts` at line 38, The "Grok 4.2" case currently returns
deprecated model IDs via xai(requireVision ? 'grok-vision-beta' : 'grok-beta');
update those identifiers to current xAI models by replacing 'grok-vision-beta'
with 'grok-2-vision-1212' and 'grok-beta' with 'grok-4-0709' in the xai(...)
calls (both the main case handling "Grok 4.2" and the fallback path that uses
the same expression) so the xai(...) invocations reference supported model
names.


User description
This PR implements the requested message editing functionality and adds individual message deletion.
Key changes:
- lib/actions/chat-db.ts and lib/actions/chat.ts: saveChat was updated to handle upserts using onConflictDoUpdate to prevent primary key violations during re-saves.
- resubmit and deleteMessageAction in app/actions.tsx: resubmitting a message updates its content in the DB, deletes all subsequent messages in that chat, and triggers a new AI response. Deleting a message removes it and all trailing messages.
- The UserMessage component now includes "Edit" and "Delete" icons visible on hover. It supports in-place editing using react-textarea-autosize and provides a confirmation dialog for deletions using AlertDialog.
- submit() was extracted into processChatWorkflow() to be shared with resubmit() (a compressed sketch follows below).
- Authentication uses getCurrentUserIdOnServer(), and the current mapProvider is passed from the client to ensure consistency.
Verified the implementation with bun run build and fixed several type errors and a missing "use client" directive.
PR created automatically by Jules for task 11829601418741662983 started by @ngoiyaeric
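To make the edit flow concrete, a compressed sketch using the helper names mentioned above; the signatures are assumptions, and the real implementation streams UI through processChatWorkflow with full error handling.

```ts
// Sketch only: condensed resubmit flow; the real version lives in app/actions.tsx.
async function resubmitSketch(messageId: string, content: string, mapProvider: string) {
  const userId = await getCurrentUserIdOnServer();
  if (!userId) throw new Error('Not authenticated');

  // 1. Persist the edited content.
  const updated = await updateMessage(messageId, { content });

  // 2. Drop every message created after the edited one.
  await deleteTrailingMessages(updated.chatId, updated.createdAt);

  // 3. Re-run the shared AI workflow on the truncated history.
  return processChatWorkflow({ chatId: updated.chatId, userId, mapProvider });
}
```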
PR Type
Enhancement
Description
Implement message editing with inline textarea and save functionality
Add message deletion with confirmation dialog and trailing message cleanup
Refactor AI workflow into reusable processChatWorkflow function
Add database operations for updating, deleting, and managing messages
Enhance UI with edit/delete icons visible on hover with proper state management
Diagram Walkthrough
File Walkthrough
chat-db.ts
Database layer message operations implementation
lib/actions/chat-db.ts
- gt operator import from drizzle-orm for timestamp comparisons
- onConflictDoUpdate in saveChat to handle message upserts and prevent primary key violations
- updateMessage function to update message content by ID
- deleteMessage function to delete a specific message
- deleteTrailingMessages function to delete all messages created after a given timestamp (sketched below)
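A minimal drizzle-orm sketch of the trailing-delete described above; the table import paths and column names are assumptions.

```ts
// Sketch only: deletes every message in a chat created strictly after a timestamp.
import { and, eq, gt } from 'drizzle-orm';
import { db } from '@/lib/db';
import { messages } from '@/lib/db/schema';

export async function deleteTrailingMessagesSketch(chatId: string, after: Date) {
  await db
    .delete(messages)
    .where(and(eq(messages.chatId, chatId), gt(messages.createdAt, after)));
}
```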
chat.ts
Server action wrappers for message operations
lib/actions/chat.ts
- updateMessage, deleteMessage, deleteTrailingMessages
- updateMessage wrapper function that calls database layer with error handling
- deleteMessage wrapper function with error handling
- deleteTrailingMessages wrapper function with error handling
actions.tsx
Server actions for message editing and deletion workflow
app/actions.tsx
- submit refactored into reusable processChatWorkflow function
- resubmit server action to handle message editing with trailing message deletion
- deleteMessageAction server action to handle message deletion
- getCurrentUserIdOnServer() call for authentication
- submit updated to pass mapProvider to the workflow and use the new processChatWorkflow function
- resubmit and deleteMessageAction registered in the AI actions configuration
- getUIStateFromAIState updated to pass the message id prop to the UserMessage component
user-message.tsx
User message component with edit and delete UI
components/user-message.tsx
- handleEdit to initialize editing state with current message content
- handleSave to submit edited message via resubmit action
- handleDelete to trigger message deletion via deleteMessageAction, including trailing messages (a minimal sketch follows)
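For orientation, a client-side sketch of how those handlers might be wired; prop names, toasts, and the AlertDialog plumbing in components/user-message.tsx are the project's own, so everything here is an assumption.

```tsx
'use client'
// Sketch only: minimal edit/delete wiring; not the actual UserMessage implementation.
import { useState } from 'react'
import { useActions } from 'ai/rsc'

export function UserMessageSketch({ id, content }: { id: string; content: string }) {
  const { resubmit, deleteMessageAction } = useActions()
  const [isEditing, setIsEditing] = useState(false)
  const [draft, setDraft] = useState(content)

  const handleSave = async () => {
    setIsEditing(false)
    // Updates the message, deletes trailing messages, and streams a fresh AI response.
    await resubmit(id, draft)
  }

  const handleDelete = async () => {
    // A confirmation dialog would normally gate this call.
    await deleteMessageAction(id)
  }

  if (isEditing) {
    return (
      <div>
        <textarea value={draft} onChange={e => setDraft(e.target.value)} />
        <button onClick={handleSave}>Save</button>
      </div>
    )
  }

  return (
    <div>
      <span>{content}</span>
      <button onClick={() => setIsEditing(true)}>Edit</button>
      <button onClick={handleDelete}>Delete</button>
    </div>
  )
}
```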
Summary by CodeRabbit
New Features
Bug Fixes / Improvements
Chores