Daily branch 2025 09 19 #34
Conversation
- convex/chats.deleteChat: treat missing or unauthorized as no-op; avoid throwing in catch
- ChatItem: guard duplicate clicks; treat 'Chat not found' as success
- SidebarUserNav: guard duplicate delete-all; proceed even on errors
Walkthrough

Adds serialized delete guards and centralized cleanup in the UI; implements resumable chat streaming with a Redis-backed context and a new stream GET route; introduces DataStreamProvider and an auto-resume client flow; deduplicates messages and softens some backend errors; toggles Next.js image optimization; and updates schema/DB actions to track active_stream_id.

Changes
Sequence Diagram(s)

sequenceDiagram
%% Resumable streaming high-level flow
autonumber
actor Client
participant API as POST /api/chat
participant StreamCtx as ResumableStreamContext (Redis)
participant DB as Convex (chats)
Client->>API: start/continue chat request
API->>StreamCtx: getStreamContext()
alt StreamCtx available
API->>StreamCtx: create resumable stream -> streamId
API->>DB: setActiveStreamId(chatId, streamId)
StreamCtx-->>Client: streaming body (resumable wrapped SSE)
else StreamCtx missing
API-->>Client: normal SSE stream
end
Note right of DB: on finish -> clear active_stream_id
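For orientation, here is a minimal sketch of the server-side context helper implied by this flow. It assumes the resumable-stream package's createResumableStreamContext API and a REDIS_URL-gated singleton; the helper name mirrors the getStreamContext() referenced later in this review, but the code is not copied from the repository:

```typescript
// Hypothetical helper: lazily create a Redis-backed resumable stream context.
// Returns null when REDIS_URL is unset so callers can fall back to plain SSE.
import { after } from "next/server";
import {
  createResumableStreamContext,
  type ResumableStreamContext,
} from "resumable-stream";

let streamContext: ResumableStreamContext | null = null;

export function getStreamContext(): ResumableStreamContext | null {
  if (!process.env.REDIS_URL) return null;
  if (!streamContext) {
    streamContext = createResumableStreamContext({ waitUntil: after });
  }
  return streamContext;
}
```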
sequenceDiagram
%% Auto-resume client behavior
autonumber
actor Browser
participant Chat as Chat component
participant DataProv as DataStreamProvider
participant Hook as useAutoResume
participant API as GET /api/chat/[id]/stream
Browser->>Chat: mount (autoResume=true)
Chat->>Hook: init with resumeStream
Hook->>API: call resumeStream() once
API-->>DataProv: SSE/resumable emits data-appendMessage
DataProv->>Hook: first data-appendMessage observed
Hook->>Chat: append message to UI, setIsAutoResuming(false)
sequenceDiagram
%% Deletion flow (UI)
autonumber
actor User
participant UI as ChatItem / SidebarUserNav
participant Convex as convex/chats.deleteChat
participant Router as Router/Window
User->>UI: click delete
UI->>UI: if isDeleting then return
UI->>UI: isDeleting = true
UI->>Convex: deleteChat(...)
alt success or not-found
UI->>Router: navigate to "/" or reinit chat if active
else error
UI->>UI: console.error (swallowed)
UI->>Router: navigate to "/"
end
UI->>UI: isDeleting = false (finally)
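A minimal sketch of the serialized-delete guard shown in this diagram; the deleteChat call signature and the post-delete callback are assumptions for illustration, not the repository's exact API:

```tsx
// Guard a delete action against duplicate clicks and always clean up in finally.
import { useRef, useState } from "react";

export const useGuardedDelete = (
  deleteChat: (chatId: string) => Promise<void>,
  onSettled: () => void, // e.g. navigate to "/" or reinitialize the active chat
) => {
  const [isDeleting, setIsDeleting] = useState(false);
  const inFlight = useRef(false);

  const handleDelete = async (chatId: string) => {
    if (inFlight.current) return; // ignore duplicate clicks while a delete runs
    inFlight.current = true;
    setIsDeleting(true);
    try {
      await deleteChat(chatId);
    } catch (error) {
      // Swallow errors so a missing chat behaves as a client-side no-op.
      console.error("Failed to delete chat:", error);
    } finally {
      inFlight.current = false;
      setIsDeleting(false);
      onSettled();
    }
  };

  return { isDeleting, handleDelete };
};
```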
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs
Poem
Pre-merge checks and finishing touches

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
✨ Finishing touches
🧪 Generate unit tests
📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📒 Files selected for processing (2)
🚧 Files skipped from review as they are similar to previous changes (2)
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
convex/messages.ts (2)
318-323: Avoid .filter() in Convex queries; use an index for role. Per guidelines, replace filter with a compound index on (chat_id, role) and query via withIndex.
Apply:
```diff
- const lastAssistantMessage = await ctx.db
-   .query("messages")
-   .withIndex("by_chat_id", (q) => q.eq("chat_id", args.chatId))
-   .filter((q) => q.eq(q.field("role"), "assistant"))
-   .order("desc")
-   .first();
+ const lastAssistantMessage = await ctx.db
+   .query("messages")
+   .withIndex("by_chat_id_and_role", (q) =>
+     q.eq("chat_id", args.chatId).eq("role", "assistant"),
+   )
+   .order("desc")
+   .first();
```

And in convex/schema.ts define the index:

```ts
// messages table
index("by_chat_id_and_role", ["chat_id", "role", "_creationTime"]);
```
664-671: Add a compound index (chat_id, _creationTime) or change the query — current by_chat_id lacks _creationTime. convex/schema.ts shows .index("by_chat_id", ["id"]) and .index("by_chat_id", ["chat_id"]) (lines 27, 50); neither includes _creationTime, so the q.gt("_creationTime") range in convex/messages.ts (lines 664–671) cannot use that index. Define e.g. index("by_chat_id_and_created", ["chat_id","_creationTime"]) and update .withIndex(...) to use it (or remove the range). Also resolve the duplicate "by_chat_id" index name.
🧹 Nitpick comments (10)
convex/messages.ts (5)
76-87: verifyChatOwnership never returns false; dead check. verifyChatOwnership throws on not found/unauthorized and otherwise returns true. The if (!chatExists) branch is unreachable.
Apply:
```diff
- const chatExists: boolean = await ctx.runQuery(
-   internal.messages.verifyChatOwnership,
-   {
-     chatId: args.chatId,
-     userId: args.userId,
-   },
- );
-
- if (!chatExists) {
-   throw new Error("Chat not found");
- }
+ await ctx.runQuery(internal.messages.verifyChatOwnership, {
+   chatId: args.chatId,
+   userId: args.userId,
+ });
```
268-281: Same dead check pattern here. Drop the redundant chatExists boolean and false-branch.
Apply:
```diff
- const chatExists: boolean = await ctx.runQuery(
-   internal.messages.verifyChatOwnership,
-   {
-     chatId: args.chatId,
-     userId: user.subject,
-   },
- );
-
- if (!chatExists) {
-   throw new Error("Chat not found");
- }
+ await ctx.runQuery(internal.messages.verifyChatOwnership, {
+   chatId: args.chatId,
+   userId: user.subject,
+ });
```
343-346: Same dead check pattern here. Remove the unused chatExists boolean and check.
```diff
- const chatExists: boolean = await ctx.runQuery(
-   internal.messages.verifyChatOwnership,
-   {
-     chatId: args.chatId,
-     userId: user.subject,
-   },
- );
-
- if (!chatExists) {
-   throw new Error("Chat not found");
- }
+ await ctx.runQuery(internal.messages.verifyChatOwnership, {
+   chatId: args.chatId,
+   userId: user.subject,
+ });
```
482-498: Search builds full result sets in memory; cap and avoid N+1. .collect() over search indexes can be large. Cap with take() and avoid per-message chat lookups.

```diff
- const messageResults = await ctx.db
+ const SEARCH_CAP = 200;
+ const messageResults = await ctx.db
    .query("messages")
    .withSearchIndex("search_content", (q) =>
      q.search("content", args.searchQuery).eq("user_id", user.subject),
    )
-   .collect();
+   .take(SEARCH_CAP);
```

Also drop the per-message chat query in Lines 516-521, use msg.update_time, and omit chat_title (optional field) to remove the N+1.
124-127: Preserve error semantics when wrapping. Re-throwing a generic Error loses ConvexError codes. Either rethrow the original or wrap with ConvexError including original data.

```diff
- console.error("Failed to save message:", error);
- throw new Error("Failed to save message");
+ console.error("Failed to save message:", error);
+ throw error;
```

convex/chats.ts (1)
345-347: Don't swallow authorization errors. Treat not found as no-op, but rethrow Unauthorized to surface misuse.

```diff
- } catch (error) {
-   console.error("Failed to delete chat:", error);
-   // Avoid surfacing errors to the client; treat as a no-op
-   return null;
+ } catch (error) {
+   console.error("Failed to delete chat:", error);
+   if (error instanceof Error && error.message.includes("Unauthorized")) {
+     throw error;
+   }
+   // Swallow other errors to behave idempotently client-side
+   return null;
```

app/components/ChatItem.tsx (2)
84-97: Redundant not-found handling (server now returns null). deleteChat now swallows not-found; this catch is unlikely to trigger. Safe to keep, but can be simplified.
221-223: Disable the Delete item while deleting (a11y/UX). Reflect isDeleting in the control to signal progress.

```diff
- <DropdownMenuItem
-   onClick={handleDelete}
-   className="text-destructive focus:text-destructive"
- >
+ <DropdownMenuItem
+   onClick={handleDelete}
+   disabled={isDeleting}
+   aria-disabled={isDeleting}
+   className="text-destructive focus:text-destructive"
+ >
```

app/components/SidebarUserNav.tsx (1)
118-123: Prefer router navigation over full reload. Use Next's router for smoother UX; keep the finally cleanup.

```diff
- setShowDeleteDialog(false);
- window.location.href = "/";
+ setShowDeleteDialog(false);
+ // Optional: use next/navigation for client-side nav
+ // const router = useRouter();
+ // router.replace("/");
+ window.location.href = "/"; // fallback
```

next.config.ts (1)
5-5: Global image optimization disabled — confirm production intent and tighten host allowlist
- next.config.ts — line 5: unoptimized: true (global). next.config.ts — line 27: remotePatterns contains hostname: "**" (wildcard).
- No per-component unoptimized props found; images importing next/image are in: components/messages/citation-display.tsx, app/components/FilePartRenderer.tsx, app/components/ImageViewer.tsx.
- Actions: confirm you intend to disable Next/Image optimization in production; ensure a CDN/image proxy fronts assets to avoid perf regressions; replace the ** wildcard with explicit hostnames (or remove it) to prevent accidental hotlinking.
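A minimal sketch of what a tightened configuration could look like; the hostnames below are placeholders, not the project's actual image origins:

```typescript
// next.config.ts -- illustrative allowlist; swap the placeholder hosts for the
// real image sources before adopting.
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  images: {
    // Keep optimization on unless a CDN in front already transforms images.
    unoptimized: false,
    remotePatterns: [
      { protocol: "https", hostname: "images.example.com" },
      { protocol: "https", hostname: "*.convex.cloud" },
    ],
  },
};

export default nextConfig;
```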
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
app/components/ChatItem.tsx (2 hunks)
app/components/SidebarUserNav.tsx (1 hunks)
components/messages/citation-display.tsx (0 hunks)
convex/chats.ts (2 hunks)
convex/messages.ts (2 hunks)
next.config.ts (1 hunks)
💤 Files with no reviewable changes (1)
- components/messages/citation-display.tsx
🧰 Additional context used
📓 Path-based instructions (2)
**/*.{ts,tsx}
📄 CodeRabbit inference engine (.cursor/rules/convex_rules.mdc)
**/*.{ts,tsx}: Use Id helper type from ./_generated/dataModel to type document IDs (e.g., Id<'users'>) instead of string
When defining Record types, specify key and value types matching validators (e.g., Record<Id<'users'>, string>)
Be strict with types for document IDs; prefer Id<'table'> over string in function args and variables
Use as const for string literals in discriminated unions
Declare arrays with explicit generic type: const arr: Array = [...]
Declare records with explicit generic types: const record: Record<KeyType, ValueType> = {...}
Files:
app/components/ChatItem.tsx
convex/messages.ts
next.config.ts
app/components/SidebarUserNav.tsx
convex/chats.ts
convex/**/*.ts
📄 CodeRabbit inference engine (.cursor/rules/convex_rules.mdc)
convex/**/*.ts: Always use the new Convex function syntax (query/mutation/action objects with args/returns/handler) when defining Convex functions
When a function returns null, include returns: v.null() and return null explicitly
Use internalQuery/internalMutation/internalAction for private functions callable only by other Convex functions; do not expose sensitive logic via public query/mutation/action
Use query/mutation/action only for public API functions
Do not try to register functions via the api or internal objects
Always include argument and return validators for all Convex functions (query/internalQuery/mutation/internalMutation/action/internalAction)
In JS implementations, functions without an explicit return value implicitly return null
Use ctx.runQuery from queries/mutations/actions to call a query
Use ctx.runMutation from mutations/actions to call a mutation
Use ctx.runAction from actions to call an action
Only call an action from another action when crossing runtimes (e.g., V8 to Node); otherwise extract shared helper code
Minimize calls from actions to queries/mutations to avoid race conditions from splitting transactions
Pass FunctionReference values (from api/internal) to ctx.runQuery/ctx.runMutation/ctx.runAction; do not pass function implementations
When calling a function in the same file via ctx.run*, add an explicit return type annotation at the call site to avoid TS circularity
Use the generated api object for public functions and internal object for internal functions from convex/_generated/api.ts
Respect file-based routing for function references: e.g., convex/example.ts export f -> api.example.f; nested paths map to dot-separated namespaces
For paginated queries use paginationOptsValidator in args and .paginate(args.paginationOpts) on a query
v.bigint() is deprecated; use v.int64() for signed 64-bit integers
Use v.record(keys, values) for record-like data; v.map() and v.set() are not supported
For full-text search, use withSearchIndex("ind...
Files:
convex/messages.ts
convex/chats.ts
🧬 Code graph analysis (1)
app/components/SidebarUserNav.tsx (1)
convex/chats.ts (1)
deleteAllChats(407-475)
🔇 Additional comments (4)
convex/messages.ts (1)
216-230: LGTM: not-found handled quietly; unexpected errors logged. This matches the PR goal to treat missing chats as no-ops while keeping signal for other failures.
convex/chats.ts (1)
295-295: Idempotent delete on missing chat — good. Returning null on not found aligns UI behavior with a no-op delete.
app/components/ChatItem.tsx (1)
67-75: LGTM: deletion serialized with in-flight guard. Prevents double-submit and keeps state consistent.
app/components/SidebarUserNav.tsx (1)
113-115: LGTM: guard against overlapping "delete all" actions. Early return + isDeleting flag is correct.
Actionable comments posted: 12
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
lib/actions/index.ts (1)
40-45: Guard against empty message arrays and missing parts. Directly indexing truncatedMessages[0] can throw when the array is empty.
Apply:

```diff
- const firstMessage = truncatedMessages[0];
- const textContent = firstMessage.parts
-   .filter((part: { type: string; text?: string }) => part.type === "text")
-   .map((part: { type: string; text?: string }) => part.text || "")
-   .join(" ");
+ const firstMessage = truncatedMessages[0];
+ if (!firstMessage) {
+   return "New chat";
+ }
+ const textContent =
+   (firstMessage.parts ?? [])
+     .filter((part: { type: string; text?: string }) => part.type === "text")
+     .map((part: { type: string; text?: string }) => part.text || "")
+     .join(" ")
+     .trim() || "New chat";
```

app/components/ChatInput.tsx (1)
118-133: Hotkey enablement doesn't react to hideStop changes. Add hideStop to the deps so the binding toggles correctly when the prop changes.

```diff
  useHotkeys(
    "ctrl+c",
    (e) => {
      e.preventDefault();
      onStop();
    },
    {
      enabled: isGenerating && !hideStop,
      enableOnFormTags: true,
      enableOnContentEditable: true,
      preventDefault: true,
      description: "Stop AI generation",
    },
-   [isGenerating, onStop],
+   [isGenerating, hideStop, onStop],
  );
```
🧹 Nitpick comments (16)
lib/actions/index.ts (2)
58-61: Tighten schema: trim and bound length. Pre-trim and add basic length bounds to reduce garbage titles.

```diff
- schema: z.object({
-   title: z.string().describe("The generated title (3-5 words)"),
- }),
+ schema: z.object({
+   title: z.string().trim().min(1).max(60).describe("The generated title (3-5 words)"),
+ }),
```
70-71: Normalize the final title (optional). Clamp whitespace and optionally cap at 5 words to honor the 3–5 words guideline without hard-failing schema parsing.

```diff
- return title;
+ const normalizedTitle = title.split(/\s+/).slice(0, 5).join(" ");
+ return normalizedTitle;
```

app/hooks/useChatHandlers.ts (2)
75-75: Also clear streamed parts on manual user actions (optional). You disable auto-resume, but stale data parts may remain. Consider clearing the stream buffer too.

```diff
- const { setIsAutoResuming } = useDataStream();
+ const { setIsAutoResuming, setDataStream } = useDataStream();

  const handleSubmit = async (e: React.FormEvent) => {
    e.preventDefault();
-   setIsAutoResuming(false);
+   setIsAutoResuming(false);
+   setDataStream([]);

  const handleStop = async () => {
-   setIsAutoResuming(false);
+   setIsAutoResuming(false);
+   setDataStream([]);

  const handleRegenerate = async () => {
-   setIsAutoResuming(false);
+   setIsAutoResuming(false);
+   setDataStream([]);

  const handleRetry = async () => {
-   setIsAutoResuming(false);
+   setIsAutoResuming(false);
+   setDataStream([]);

- const handleEditMessage = async (messageId: string, newContent: string) => {
+ const handleEditMessage = async (messageId: Id<"messages">, newContent: string) => {
    setIsAutoResuming(false);
+   setDataStream([]);
```

Also applies to: 157-157, 180-180, 214-214, 235-235
234-236: Type messageId as Id<'messages'> to avoid casts. This matches the Convex type and removes the later cast.

```diff
-const handleEditMessage = async (messageId: string, newContent: string) => {
+const handleEditMessage = async (messageId: Id<"messages">, newContent: string) => {
```

app/components/ChatInput.tsx (1)
222-222: Consider disabling attachments while generating (optional). Prevents attaching files mid-stream which won't be sent until the next turn.

```diff
- <AttachmentButton onAttachClick={handleAttachClick} />
+ <AttachmentButton
+   onAttachClick={handleAttachClick}
+   disabled={isGenerating}
+ />
```

app/hooks/useAutoResume.ts (1)
24-36: One-shot guard is good; consider resetting on chat switch. If initialMessages switches to a different chat, hasAttemptedRef won't reset. If that's intended, ignore; otherwise reset when initialMessages[0]?.id or the route chat id changes.
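If a reset is desired, a small sketch of the idea, assuming the hook receives the route chat id (names are illustrative, not the repository's API):

```typescript
// Reset the one-shot auto-resume guard whenever the active chat changes.
import { useEffect, useRef } from "react";

export const useResumeAttemptGuard = (chatId: string | undefined) => {
  const hasAttemptedRef = useRef(false);
  const lastChatIdRef = useRef<string | undefined>(chatId);

  useEffect(() => {
    if (chatId !== lastChatIdRef.current) {
      lastChatIdRef.current = chatId;
      hasAttemptedRef.current = false; // allow one resume attempt for the new chat
    }
  }, [chatId]);

  return hasAttemptedRef;
};
```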
app/api/chat/route.ts (1)

39-58: Global context init: consider idempotent logging. Minor nit: the "missing REDIS_URL" log will print per cold start. If noisy, gate with a module flag.
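A possible shape for that gate, sketched here with an assumed helper name:

```typescript
// Log the missing-Redis warning at most once per server instance.
let warnedMissingRedis = false;

export const warnMissingRedisOnce = (): void => {
  if (warnedMissingRedis || process.env.REDIS_URL) return;
  warnedMissingRedis = true;
  console.warn(
    "REDIS_URL is not set; resumable streams are disabled and chats will not auto-resume.",
  );
};
```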
app/components/DataStreamProvider.tsx (1)
20-31: Unbounded growth risk: dataStream accumulates every DataUIPart. This list can grow large and re-render consumers on every chunk. Either store only the latest part or cap the buffer.

```diff
- const [dataStream, setDataStream] = useState<DataUIPart<any>[]>([]);
+ const [dataStream, setDataStream] = useState<DataUIPart<any>[]>([]);
+ // Optionally expose a helper to cap the buffer (used by Chat.onData)
+ const pushPart = (part: DataUIPart<any>, max = 64) =>
+   setDataStream((ds) => (ds.length >= max ? [...ds.slice(-max + 1), part] : [...ds, part]));

  const value = useMemo(
    () => ({
-     dataStream,
-     setDataStream,
+     dataStream,
+     setDataStream,
+     // pushPart is optional to use
+     // @ts-ignore - include in context if you choose to adopt
+     pushPart,
      isAutoResuming,
      setIsAutoResuming,
    }),
-   [dataStream, isAutoResuming],
+   [dataStream, isAutoResuming],
  );
```

app/components/chat.tsx (3)
154-158: Limit stored stream parts and avoid unnecessary re-renders. Push only what you need and cap growth to prevent memory churn.

```diff
- onData: (dataPart) => {
-   setDataStream((ds) => (ds ? [...ds, dataPart] : []));
+ onData: (dataPart) => {
+   // Keep only a small sliding window to support auto-resume hooks
+   setDataStream((ds) => {
+     const next = (ds ?? []);
+     return next.length >= 64 ? [...next.slice(-63), dataPart] : [...next, dataPart];
+   });
    if (dataPart.type === "data-title")
      setChatTitle((dataPart.data as { chatTitle: string }).chatTitle);
  },
```
183-197: Clear dataStream on finish/error to drop stale chunks. Prevents the auto-resume hook from reprocessing old parts on the next run.

```diff
- onFinish: () => {
+ onFinish: () => {
+   setDataStream([]);
    setIsAutoResuming(false);
    // For new chats, flip the state so it becomes an existing chat
    const isTemporaryChat =
      !isExistingChatRef.current && temporaryChatsEnabledRef.current;
    if (!isExistingChatRef.current && !isTemporaryChat) {
      setIsExistingChat(true);
    }
  },
- onError: (error) => {
+ onError: (error) => {
+   setDataStream([]);
    setIsAutoResuming(false);
    if (error instanceof ChatSDKError && error.type !== "rate_limit") {
      toast.error(error.message);
    }
  },
```
32-38: Prop API change looks good. The new autoResume prop is clear. Consider defaulting internally (autoResume ??= Boolean(routeChatId)) to reduce call-site boilerplate.

```diff
  export const Chat = ({
    chatId: routeChatId,
    autoResume,
  }: {
    chatId?: string;
    autoResume: boolean;
  }) => {
+   autoResume = autoResume ?? Boolean(routeChatId);
```

app/api/chat/[id]/stream/route.ts (5)
1-8: Declare runtime and disable static optimization for the SSE route. Being explicit avoids accidental edge/runtime regressions and caching around SSE.
Apply:
```diff
  import type { NextRequest } from "next/server";
  import { createUIMessageStream, JsonToSseTransformStream } from "ai";
  import { ChatSDKError } from "@/lib/errors";
  import type { ChatMessage } from "@/types/chat";
  import { getStreamContext } from "../../route";
  import { ConvexHttpClient } from "convex/browser";
  import { api } from "@/convex/_generated/api";
+ export const runtime = "nodejs";
+ export const dynamic = "force-dynamic";
```
58-66: Consider whether empty streams should remain open. The emptyDataStream never writes or closes, so clients will hang "waiting". If the desired behavior is a short-lived "no content" handshake, call writer.close() once mounted; otherwise, leave as-is. Please confirm intended UX.
72-81: Avoid fetching full history if only the last assistant message is needed. If feasible, change the backend to return only the latest assistant message for the chat/user to cut payload and latency.
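A hypothetical backend query sketch for that change: it returns only the newest assistant message for a chat, assumes the compound (chat_id, role) index proposed earlier in this review, and mirrors the serviceKey check used elsewhere in this PR. Names are illustrative, not the repository's API:

```typescript
import { query } from "./_generated/server";
import { v } from "convex/values";

export const getLatestAssistantMessage = query({
  args: { serviceKey: v.string(), chatId: v.string() },
  returns: v.union(v.any(), v.null()),
  handler: async (ctx, args) => {
    // Service-to-service authentication, matching the existing serviceKey pattern.
    if (args.serviceKey !== process.env.CONVEX_SERVICE_ROLE_KEY) {
      throw new Error("Unauthorized");
    }
    // Newest assistant message only; avoids loading the full chat history.
    return await ctx.db
      .query("messages")
      .withIndex("by_chat_id_and_role", (q) =>
        q.eq("chat_id", args.chatId).eq("role", "assistant"),
      )
      .order("desc")
      .first();
  },
});
```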
56-57: Type the stream id precisely. If active_stream_id can be null in the DB, prefer string | null and validate shape (e.g., cuid2/uuid) to reduce false positives.
21-23: Bad request code OK, but include a cause for observability. Add a cause string ("missing chat id") so ChatSDKError logs are useful.
Apply:
```diff
- return new ChatSDKError("bad_request:api").toResponse();
+ return new ChatSDKError("bad_request:api", "Missing chatId param").toResponse();
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (17)
.env.local.example (1 hunks)
app/api/chat/[id]/stream/route.ts (1 hunks)
app/api/chat/route.ts (5 hunks)
app/c/[id]/page.tsx (1 hunks)
app/components/ChatInput.tsx (5 hunks)
app/components/DataStreamProvider.tsx (1 hunks)
app/components/chat.tsx (7 hunks)
app/hooks/useAutoResume.ts (1 hunks)
app/hooks/useChatHandlers.ts (7 hunks)
app/layout.tsx (2 hunks)
app/page.tsx (1 hunks)
convex/chats.ts (5 hunks)
convex/messages.ts (3 hunks)
convex/schema.ts (2 hunks)
lib/actions/index.ts (1 hunks)
lib/db/actions.ts (1 hunks)
package.json (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- package.json
🚧 Files skipped from review as they are similar to previous changes (2)
- convex/messages.ts
- convex/chats.ts
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{ts,tsx}
📄 CodeRabbit inference engine (.cursor/rules/convex_rules.mdc)
**/*.{ts,tsx}: Use Id helper type from ./_generated/dataModel to type document IDs (e.g., Id<'users'>) instead of string
When defining Record types, specify key and value types matching validators (e.g., Record<Id<'users'>, string>)
Be strict with types for document IDs; prefer Id<'table'> over string in function args and variables
Use as const for string literals in discriminated unions
Declare arrays with explicit generic type: const arr: Array = [...]
Declare records with explicit generic types: const record: Record<KeyType, ValueType> = {...}
Files:
app/c/[id]/page.tsx
convex/schema.ts
app/hooks/useChatHandlers.ts
app/api/chat/route.ts
lib/db/actions.ts
app/layout.tsx
app/components/chat.tsx
app/components/DataStreamProvider.tsx
app/page.tsx
app/api/chat/[id]/stream/route.ts
app/hooks/useAutoResume.ts
app/components/ChatInput.tsx
lib/actions/index.ts
convex/**/*.ts
📄 CodeRabbit inference engine (.cursor/rules/convex_rules.mdc)
convex/**/*.ts: Always use the new Convex function syntax (query/mutation/action objects with args/returns/handler) when defining Convex functions
When a function returns null, include returns: v.null() and return null explicitly
Use internalQuery/internalMutation/internalAction for private functions callable only by other Convex functions; do not expose sensitive logic via public query/mutation/action
Use query/mutation/action only for public API functions
Do not try to register functions via the api or internal objects
Always include argument and return validators for all Convex functions (query/internalQuery/mutation/internalMutation/action/internalAction)
In JS implementations, functions without an explicit return value implicitly return null
Use ctx.runQuery from queries/mutations/actions to call a query
Use ctx.runMutation from mutations/actions to call a mutation
Use ctx.runAction from actions to call an action
Only call an action from another action when crossing runtimes (e.g., V8 to Node); otherwise extract shared helper code
Minimize calls from actions to queries/mutations to avoid race conditions from splitting transactions
Pass FunctionReference values (from api/internal) to ctx.runQuery/ctx.runMutation/ctx.runAction; do not pass function implementations
When calling a function in the same file via ctx.run*, add an explicit return type annotation at the call site to avoid TS circularity
Use the generated api object for public functions and internal object for internal functions from convex/_generated/api.ts
Respect file-based routing for function references: e.g., convex/example.ts export f -> api.example.f; nested paths map to dot-separated namespaces
For paginated queries use paginationOptsValidator in args and .paginate(args.paginationOpts) on a query
v.bigint() is deprecated; use v.int64() for signed 64-bit integers
Use v.record(keys, values) for record-like data; v.map() and v.set() are not supported
For full-text search, use withSearchIndex("ind...
Files:
convex/schema.ts
convex/schema.ts
📄 CodeRabbit inference engine (.cursor/rules/convex_rules.mdc)
convex/schema.ts: Define the Convex schema in convex/schema.ts
Import schema definition functions (defineSchema, defineTable) from convex/server
Understand system fields: _id uses v.id(tableName) and _creationTime uses v.number(); they’re auto-added to all documents
Include all indexed fields in the index name (e.g., by_field1_and_field2 for ["field1","field2"])
Query indexes in the same column order as defined; create separate indexes for alternate orders
Files:
convex/schema.ts
🧬 Code graph analysis (10)
app/c/[id]/page.tsx (1)
app/components/chat.tsx (1)
Chat(32-540)
app/hooks/useChatHandlers.ts (1)
app/components/DataStreamProvider.tsx (1)
useDataStream(40-46)
app/api/chat/route.ts (3)
convex/chats.ts (1)
setActiveStreamId (238-268)
lib/db/actions.ts (1)
setActiveStreamId (287-307)
lib/actions/index.ts (1)
generateTitleFromUserMessageWithWriter(73-92)
lib/db/actions.ts (2)
convex/chats.ts (1)
setActiveStreamId (238-268)
lib/errors.ts (1)
ChatSDKError(36-74)
app/layout.tsx (2)
app/components/DataStreamProvider.tsx (1)
DataStreamProvider (15-38)
app/contexts/TodoBlockContext.tsx (1)
TodoBlockProvider(20-40)
app/components/chat.tsx (2)
app/components/DataStreamProvider.tsx (1)
useDataStream (40-46)
app/hooks/useAutoResume.ts (1)
useAutoResume(15-50)
app/page.tsx (1)
app/components/chat.tsx (1)
Chat(32-540)
app/api/chat/[id]/stream/route.ts (4)
app/api/chat/route.ts (1)
getStreamContext (40-58)
lib/errors.ts (1)
ChatSDKError (36-74)
lib/auth/get-user-id.ts (1)
getUserID (13-31)
types/chat.ts (1)
ChatMessage(66-66)
app/hooks/useAutoResume.ts (2)
types/chat.ts (1)
ChatMessage (66-66)
app/components/DataStreamProvider.tsx (1)
useDataStream(40-46)
app/components/ChatInput.tsx (1)
app/components/AttachmentButton.tsx (1)
AttachmentButton(19-94)
🔇 Additional comments (15)
lib/actions/index.ts (1)
78-78: Signature change LGTM — verified call sites updated. generateTitleFromUserMessage is defined in lib/actions/index.ts and only invoked by generateTitleFromUserMessageWithWriter (lib/actions/index.ts:73–79) and from app/api/chat/route.ts (app/api/chat/route.ts:151–155); no callers pass an AbortSignal or extra args.
app/layout.tsx (3)
98-105: Provider order LGTM. Wrapping TodoBlockProvider and TooltipProvider with DataStreamProvider is sensible and non-breaking.
9-9: Import re-ordering is fine.
11-11: No action required — DataStreamProvider is a client component. app/components/DataStreamProvider.tsx starts with "use client", so hooks are safe when rendered from the server layout.
app/components/ChatInput.tsx (2)
50-51: Prop addition LGTM. hideStop defaulting to false preserves existing behavior.
Also applies to: 61-62
298-343: Stop vs Send toggle looks solid. UI and hotkey gating on hideStop is consistent with the auto-resume UX.
app/hooks/useChatHandlers.ts (1)
app/hooks/useChatHandlers.ts (1)
14-15: Provider coverage confirmed — DataStreamProvider wraps the app layout
app/layout.tsx wraps children with DataStreamProvider; useChatHandlers is only imported/used in app/components/chat.tsx, so all call-sites are covered.
.env.local.example (1)
43-45: Document expected REDIS_URL format and verify runtime usage. Add a short example connection string and TLS variant (e.g., redis://default:password@host:6379 and rediss://default:password@host:6380) and state whether TLS is required. Confirm the application actually reads process.env.REDIS_URL for resume streams (not a different env var); automated search returned no matches, so update .env.local.example or point to the file(s) that consume REDIS_URL.
app/page.tsx (1)
68-68: Confirm intent: disable auto-resume on the root page.
app/page.tsx:68 —
return <Chat autoResume={false} />
app/c/[id]/page.tsx:26 — <Chat chatId={chatId} autoResume={true} />
Only these JSX usages pass autoResume explicitly. Confirm the asymmetry (root=false, /c/[id]=true) is intended; if not, make values consistent.
app/components/chat.tsx (1)
459-459: Nice UX touch. Tying hideStop to isAutoResuming avoids accidental cancels during auto-resume.
Also applies to: 485-485
convex/schema.ts (1)
38-38: New messages.user_id field — verified: writers populate it.
Server-side write paths set user_id; no schema-breaking writers found.
- convex/messages.ts: ctx.db.insert("messages", { user_id: ... }) (inserts at lines ~91 and ~293).
- lib/db/actions.ts: convex.mutation(api.messages.saveMessage, { ..., userId, ... }) (calls the mutation that supplies userId).
- app/api/chat/route.ts: passes userId into saveMessage before persisting messages.
app/api/chat/[id]/stream/route.ts (4)
52-54: Ownership check LGTM. Straightforward comparison against the authenticated user; consistent with error surfaces.
34-43: Do not pass secrets as function arguments to Convex. Even server-side, passing service keys in args risks appearing in logs or error traces. Prefer authenticating the HTTP client with the service role key (header) or using a server-side Convex client/action wrapper that injects credentials out-of-band.
Would you like a follow‑up patch to introduce a small server wrapper (e.g., lib/db/convexClient.ts) that attaches the Authorization header and exposes typed helpers (getChatById, getLatestAssistantMessage)?
Also applies to: 72-81
⛔ Skipped due to learnings
Learnt from: RostyslavManko, PR: hackerai-tech/hackerai#14, File: convex/chats.ts:4-24, Timestamp: 2025-08-27T12:38:06.662Z. Learning: Convex functions with serviceKey parameters are designed for backend service-to-service communication and don't require ctx.auth.getUserIdentity() checks. The serviceKey validation against process.env.CONVEX_SERVICE_ROLE_KEY provides sufficient authentication for these backend operations.
Learnt from: CR, PR: hackerai-tech/hackerai#0, File: .cursor/rules/convex_rules.mdc:0-0, Timestamp: 2025-08-29T13:33:09.937Z. Learning: Applies to convex/**/*.ts : Use internalQuery/internalMutation/internalAction for private functions callable only by other Convex functions; do not expose sensitive logic via public query/mutation/action
Learnt from: CR, PR: hackerai-tech/hackerai#0, File: .cursor/rules/convex_rules.mdc:0-0, Timestamp: 2025-08-29T13:33:09.937Z. Learning: Applies to convex/**/*.ts : Use query/mutation/action only for public API functions
Learnt from: RostyslavManko, PR: hackerai-tech/hackerai#17, File: convex/fileStorage.ts:131-149, Timestamp: 2025-09-01T12:44:12.626Z. Learning: In convex/fileStorage.ts, serviceKey parameters are made optional (v.optional(v.string())) for queries like getFileTokensByFileIds to simplify local development workflow - developers don't need to configure service keys in the Convex dashboard when working locally. In production, the serviceKey is always provided, ensuring security. This pattern balances developer experience with production security requirements.
Learnt from: CR, PR: hackerai-tech/hackerai#0, File: .cursor/rules/convex_rules.mdc:0-0, Timestamp: 2025-08-29T13:33:09.937Z. Learning: Applies to convex/**/*.ts : Always include argument and return validators for all Convex functions (query/internalQuery/mutation/internalMutation/action/internalAction)
Learnt from: CR, PR: hackerai-tech/hackerai#0, File: .cursor/rules/convex_rules.mdc:0-0, Timestamp: 2025-08-29T13:33:09.937Z. Learning: Applies to convex/**/*.ts : Always use the new Convex function syntax (query/mutation/action objects with args/returns/handler) when defining Convex functions
92-101: Verify client expects "data-appendMessage". Repository search returned no consumers; confirm the frontend listens for the SSE event "data-appendMessage" or align the server/client to the same event name (e.g., "appendMessage"/"append-message").
Location: app/api/chat/[id]/stream/route.ts (lines 92–101)
25-33: Auth flow LGTM — verify uniform 'unauthorized:chat' mapping. Dynamic import and error mapping match getUserID; I couldn't scan the repo here — please run locally to confirm all call sites map unauthorized errors to "unauthorized:chat":
```bash
# find literal occurrences
rg -n --hidden --no-ignore -F 'unauthorized:chat' -C2 || git grep -n --no-color 'unauthorized:chat' || true

# find explicit ChatSDKError constructors
rg -nP --hidden --no-ignore -C2 "new\s+ChatSDKError\(\s*['\"]unauthorized:chat['\"]\s*\)" || true
```
```ts
export async function GET(
  req: NextRequest,
  { params }: { params: Promise<{ id: string }> },
) {
  const { id: chatId } = await params;
```
Fix Next.js route params typing; avoid Promise, use Id<'chats'>, and drop unnecessary await.
Next route handlers receive a plain object, not a Promise. Also, per repo guidelines, use the Convex Id helper type.
Apply:
```diff
+import type { Id } from "@/convex/_generated/dataModel";
 export async function GET(
   req: NextRequest,
-  { params }: { params: Promise<{ id: string }> },
+  { params }: { params: { id: Id<"chats"> } },
 ) {
-  const { id: chatId } = await params;
+  const { id: chatId } = params;
```
🤖 Prompt for AI Agents
In app/api/chat/[id]/stream/route.ts around lines 9–13, the route handler types
params as a Promise and awaits it; change the handler signature to accept a
plain params object typed with Id<'chats'> (e.g. { params: { id: Id<'chats'> }
}) and remove the unnecessary await when extracting the id. Also add the
appropriate import for the Id type if missing. This ensures Next.js route params
are typed correctly and the await is removed.
```ts
const convex = new ConvexHttpClient(process.env.NEXT_PUBLIC_CONVEX_URL!);
const serviceKey = process.env.CONVEX_SERVICE_ROLE_KEY!;
```
|
Guard required Convex env vars and avoid non-null assertions.
Fail fast with a clear error if config is missing; non-null assertions can hide misconfig in prod.
Apply:
```diff
- const convex = new ConvexHttpClient(process.env.NEXT_PUBLIC_CONVEX_URL!);
- const serviceKey = process.env.CONVEX_SERVICE_ROLE_KEY!;
+ const convexUrl = process.env.NEXT_PUBLIC_CONVEX_URL;
+ const serviceKey = process.env.CONVEX_SERVICE_ROLE_KEY;
+ if (!convexUrl || !serviceKey) {
+   return new ChatSDKError("bad_request:api", "Missing Convex configuration").toResponse();
+ }
+ const convex = new ConvexHttpClient(convexUrl);
```

Side note: Passing the service role key as a query argument risks leakage via logs/analytics. Prefer authenticating the HTTP client (e.g., service role via Authorization header or a server-side Convex client) instead of sending secrets as function args.
🤖 Prompt for AI Agents
In app/api/chat/[id]/stream/route.ts around lines 34 to 36, remove the non-null
assertions on process.env and add explicit guards that check
NEXT_PUBLIC_CONVEX_URL and CONVEX_SERVICE_ROLE_KEY at startup and throw a clear,
descriptive error if either is missing; instantiate ConvexHttpClient only after
validating NEXT_PUBLIC_CONVEX_URL. Also stop passing the service role key as a
query argument — instead authenticate requests by setting an Authorization
header on the HTTP client or use a server-side Convex client that injects the
service key server-side so the secret is never sent as a URL parameter.
```ts
try {
  chat = await convex.query(api.chats.getChatById, {
    serviceKey,
    id: chatId,
  });
} catch {
  return new ChatSDKError("not_found:chat").toResponse();
}
```
|
Distinguish “not found” from backend/query failures.
Catching all errors and returning 404 masks real server issues. Only return 404 when the result is null; return 500 (or a typed internal error) on exceptions.
Apply:
```diff
- try {
-   chat = await convex.query(api.chats.getChatById, {
-     serviceKey,
-     id: chatId,
-   });
- } catch {
-   return new ChatSDKError("not_found:chat").toResponse();
- }
+ try {
+   chat = await convex.query(api.chats.getChatById, { serviceKey, id: chatId });
+ } catch (error) {
+   console.error("getChatById failed:", error);
+   return new ChatSDKError("internal:api", "Failed to load chat").toResponse();
+ }
```
🤖 Prompt for AI Agents
In app/api/chat/[id]/stream/route.ts around lines 39 to 47, the current code
catches all exceptions from convex.query and returns a 404 which hides
server/query failures; instead call convex.query without swallowing errors,
check the returned chat value and if it is null/undefined return new
ChatSDKError("not_found:chat").toResponse(), and wrap the query only to catch
real exceptions and return a 500/internal error response (e.g. new
ChatSDKError("internal:database").toResponse() or rethrow) so that only absent
chats produce 404 while query/backend failures produce a 500.
```ts
if (recentStreamId) {
  const stream = await streamContext.resumableStream(recentStreamId, () =>
    emptyDataStream.pipeThrough(new JsonToSseTransformStream()),
  );

  if (stream) {
    return new Response(stream, { status: 200 });
  }
}
```
Set proper SSE headers on all streaming responses.
Without these, some clients won’t treat the response as an EventSource stream and may buffer.
Apply:
```diff
+ const SSE_HEADERS = {
+   "Content-Type": "text/event-stream; charset=utf-8",
+   "Cache-Control": "no-cache, no-transform",
+   Connection: "keep-alive",
+   "X-Accel-Buffering": "no",
+ } as const;

- return new Response(stream, { status: 200 });
+ return new Response(stream, { status: 200, headers: SSE_HEADERS });

- return new Response(
-   emptyDataStream.pipeThrough(new JsonToSseTransformStream()),
-   { status: 200 },
- );
+ return new Response(
+   emptyDataStream.pipeThrough(new JsonToSseTransformStream()),
+   { status: 200, headers: SSE_HEADERS },
+ );

- return new Response(
-   restoredStream.pipeThrough(new JsonToSseTransformStream()),
-   { status: 200 },
- );
+ return new Response(
+   restoredStream.pipeThrough(new JsonToSseTransformStream()),
+   { status: 200, headers: SSE_HEADERS },
+ );

- return new Response(
-   emptyDataStream.pipeThrough(new JsonToSseTransformStream()),
-   { status: 200 },
- );
+ return new Response(
+   emptyDataStream.pipeThrough(new JsonToSseTransformStream()),
+   { status: 200, headers: SSE_HEADERS },
+ );
```

Also applies to: 86-89, 102-105, 107-110
🤖 Prompt for AI Agents
app/api/chat/[id]/stream/route.ts around lines 62-70 (also apply same change at
86-89, 102-105, 107-110): the streaming responses are missing SSE headers;
update each Response that returns a stream to include standard SSE headers
(e.g., "Content-Type: text/event-stream", "Cache-Control: no-cache,
no-transform", "Connection: keep-alive") and optionally "X-Accel-Buffering: no"
for nginx, by passing a headers object into the Response constructor so clients
treat and do not buffer the stream.
```ts
const restoredStream = createUIMessageStream<ChatMessage>({
  execute: ({ writer }) => {
    writer.write({
      type: "data-appendMessage",
      data: JSON.stringify(mostRecentMessage),
      transient: true,
    });
  },
});
```
|
Send structured data (not pre‑stringified) and close the writer; otherwise the client may receive double‑encoded JSON and a never‑ending stream.
JsonToSseTransformStream serializes for you; also close the writer after replay.
Apply:
```diff
 const restoredStream = createUIMessageStream<ChatMessage>({
-  execute: ({ writer }) => {
-    writer.write({
-      type: "data-appendMessage",
-      data: JSON.stringify(mostRecentMessage),
-      transient: true,
-    });
-  },
+  execute: ({ writer }) => {
+    writer.write({
+      type: "data-appendMessage",
+      data: mostRecentMessage,
+      transient: true,
+    });
+    writer.close();
+  },
 });
```
🤖 Prompt for AI Agents
In app/api/chat/[id]/stream/route.ts around lines 92 to 101, the restore stream
currently writes JSON.stringify(mostRecentMessage) and never closes the writer;
change it to write the structured object (data: mostRecentMessage) so
JsonToSseTransformStream can serialize it, and call writer.close() after writing
to terminate the replay stream cleanly.
```ts
// Wrap the UI message stream as SSE
const sse = stream.pipeThrough(new JsonToSseTransformStream());

// Create a resumable stream and persist the active stream id (non-temporary chats)
if (!temporary) {
  const streamContext = getStreamContext();
  if (streamContext) {
    const streamId = uuidv4();
    await setActiveStreamId({ chatId, activeStreamId: streamId });
    const body = await streamContext.resumableStream(streamId, () => sse);
    return new Response(body);
  }
}

// Temporary chats do not support resumption; return SSE directly
return new Response(sse);
} catch (error) {
```
Add SSE headers to Response.
Explicit SSE headers improve compatibility and proxies’ behavior.
- const sse = stream.pipeThrough(new JsonToSseTransformStream());
+ const sse = stream.pipeThrough(new JsonToSseTransformStream());
// Create a resumable stream and persist the active stream id (non-temporary chats)
if (!temporary) {
const streamContext = getStreamContext();
if (streamContext) {
const streamId = uuidv4();
- await setActiveStreamId({ chatId, activeStreamId: streamId });
- const body = await streamContext.resumableStream(streamId, () => sse);
- return new Response(body);
+ try { await setActiveStreamId({ chatId, activeStreamId: streamId }); } catch (e) { console.warn("setActiveStreamId(set) failed:", e); }
+ const body = await streamContext.resumableStream(streamId, () => sse);
+ return new Response(body, {
+ headers: {
+ "Content-Type": "text/event-stream; charset=utf-8",
+ "Cache-Control": "no-cache, no-transform",
+ "Connection": "keep-alive",
+ },
+ });
}
}
- // Temporary chats do not support resumption; return SSE directly
- return new Response(sse);
+ // Temporary chats or when resumable context not available
+ return new Response(sse, {
+ headers: {
+ "Content-Type": "text/event-stream; charset=utf-8",
+ "Cache-Control": "no-cache, no-transform",
+ "Connection": "keep-alive",
+ },
+ });
🤖 Prompt for AI Agents
In app/api/chat/route.ts around lines 254 to 270, the SSE Responses lack
explicit SSE headers; update both Response() returns (the resumable branch
returning new Response(body) and the temporary branch returning new
Response(sse)) to include proper SSE headers (at minimum "Content-Type":
"text/event-stream", "Cache-Control": "no-cache", "Connection": "keep-alive") by
passing a headers object in the Response init; ensure you add the same headers
to both new Response(...) calls so proxies and clients treat the stream as SSE.
```diff
  <Authenticated>
-   <Chat chatId={chatId} />
+   <Chat chatId={chatId} autoResume={true} />
```
🧩 Analysis chain
Confirm all Chat usages pass the new required prop.
autoResume is now required on Chat. Please verify all call sites (e.g., app/page.tsx, any tests/stories) were updated to avoid runtime/TS errors.
🏁 Script executed:
```bash
#!/bin/bash
# Find Chat usages missing `autoResume=`
rg -nP --type=ts --type=tsx -C2 '\<Chat\s*\('
rg -nP --type=tsx -C2 '<Chat\b(?![^>]*autoResume=)'
```

Length of output: 110
🏁 Script executed:
```bash
#!/bin/bash
set -euo pipefail
python3 - <<'PY'
import os,sys,re
root='.'
exts=('.tsx','.ts','.jsx','.js','.mdx')
ignore_dirs=set(['node_modules','.git','dist','build','out','.next'])
occurrences=[]
for dirpath, dirnames, filenames in os.walk(root):
dirnames[:] = [d for d in dirnames if d not in ignore_dirs]
for fn in filenames:
if fn.endswith(exts):
path=os.path.join(dirpath,fn)
try:
with open(path, 'r', encoding='utf-8') as f:
s=f.read()
except Exception as e:
print(f"SKIP {path} : {e}", file=sys.stderr)
continue
# find <Chat ...> tags (handles multi-line attributes and quoted >)
idx=0
while True:
i = s.find('<Chat', idx)
if i == -1:
break
in_single = False
in_double = False
j = i
tag_end = -1
while j < len(s):
ch = s[j]
if ch == "'" and not in_double:
in_single = not in_single
j += 1
continue
if ch == '"' and not in_single:
in_double = not in_double
j += 1
continue
if ch == '>' and not in_single and not in_double:
tag_end = j
break
j += 1
if tag_end == -1:
substring = s[i:i+400]
else:
substring = s[i:tag_end+1]
line_num = s.count('\n', 0, i) + 1
has_auto = bool(re.search(r'\bautoResume\b', substring))
occurrences.append((path, line_num, 'jsx', has_auto))
idx = i + 1
# find Chat(...) calls
for m in re.finditer(r'\bChat\s*\(', s):
line_num = s.count('\n', 0, m.start()) + 1
occurrences.append((path, line_num, 'call', False))
# Print results
for path, line, kind, has in occurrences:
if kind == 'jsx':
print(f"{path}:{line}: <Chat ...> autoResume={'YES' if has else 'NO'}")
else:
print(f"{path}:{line}: Chat(...) call")
total_jsx = sum(1 for o in occurrences if o[2] == 'jsx')
missing = [o for o in occurrences if o[2] == 'jsx' and not o[3]]
total_calls = sum(1 for o in occurrences if o[2] == 'call')
print()
print(f"SUMMARY: total <Chat> occurrences: {total_jsx}, missing autoResume: {len(missing)}, total Chat(...) calls: {total_calls}")
if missing:
print()
print("FILES WITH MISSING autoResume:")
for path, line, kind, has in missing:
print(f"{path}:{line}")
PY
```

Length of output: 1689
Add the required autoResume prop to every usage.
Files with missing autoResume (set true/false as appropriate):
./app/page.tsx:47
./app/api/chat/[id]/stream/route.ts:58
./app/api/chat/[id]/stream/route.ts:92
./app/components/ChatHeader.tsx:29
./app/components/Messages.tsx:30
./app/components/Sidebar.tsx:69
./app/components/Sidebar.tsx:106
./app/components/SidebarHistory.tsx:88
./app/components/ChatItem.tsx:33
./app/components/chat.tsx:377
./app/components/chat.tsx:455
./app/components/chat.tsx:480
./app/contexts/GlobalState.tsx:129
./app/hooks/useFeedback.ts:9
./app/hooks/useAutoResume.ts:11
./app/hooks/useAutoResume.ts:12
🤖 Prompt for AI Agents
In app/c/[id]/page.tsx around line 26, the <Chat /> usage must include the
required autoResume prop; update the JSX to pass autoResume (e.g.,
autoResume={true} or autoResume={false} depending on desired behavior) so the
component receives an explicit boolean value.
```ts
  const dataPart = dataStream[0];
  if (dataPart.type === "data-appendMessage") {
    const message = JSON.parse(dataPart.data);
    setMessages([...initialMessages, message]);
    // First message arrived, we can allow Stop button again
    setIsAutoResuming(false);
  }
}, [dataStream, initialMessages, setMessages]);
```
Fix parsing and latest-chunk logic; avoid reprocessing the first chunk repeatedly.
dataStream[0] reprocesses the first item on every update. JSON.parse(dataPart.data) will throw if data is already an object (as used elsewhere).
Process the latest part and use the object directly.
```diff
- const dataPart = dataStream[0];
- if (dataPart.type === "data-appendMessage") {
-   const message = JSON.parse(dataPart.data);
-   setMessages([...initialMessages, message]);
-   // First message arrived, we can allow Stop button again
-   setIsAutoResuming(false);
- }
+ const dataPart = dataStream[dataStream.length - 1];
+ if (dataPart?.type === "data-appendMessage") {
+   const message = dataPart.data as ChatMessage;
+   setMessages((prev) => (prev.length ? prev : [...initialMessages, message]));
+   setIsAutoResuming(false);
+ }
```
🤖 Prompt for AI Agents
In app/hooks/useAutoResume.ts around lines 42 to 49, the effect always
reprocesses dataStream[0] and blindly JSON.parses dataPart.data which fails if
data is already an object; change it to read the latest chunk (use
dataStream[dataStream.length - 1]) and use the data directly if it's an object,
otherwise JSON.parse when typeof dataPart.data === "string"; then setMessages
using that message and setIsAutoResuming(false) as before so the first chunk is
not reprocessed on every update.
```ts
title: v.string(),
user_id: v.string(),
finish_reason: v.optional(v.string()),
active_stream_id: v.optional(v.string()),
```
Enable nullable active_stream_id to support clearing.
To reliably clear the field, allow null in addition to string (pair with client/server changes).
```diff
- active_stream_id: v.optional(v.string()),
+ active_stream_id: v.optional(v.union(v.string(), v.null())),
```
🤖 Prompt for AI Agents
In convex/schema.ts around line 10, the active_stream_id field is currently
defined as optional string only; to support clearing you must allow null as
well. Change the schema type so active_stream_id accepts string OR null (and
remains optional), e.g., replace the current v.optional(v.string()) with a union
that includes v.string() and v.null()/nullable equivalent per Convex typing.
After updating the schema, run type generation and update client/server code
paths to write null when clearing the field and to handle null reads
accordingly.
```ts
export async function setActiveStreamId({
  chatId,
  activeStreamId,
}: {
  chatId: string;
  activeStreamId: string | undefined;
}) {
  try {
    await convex.mutation(api.chats.setActiveStreamId, {
      serviceKey,
      chatId,
      activeStreamId,
    });
    return;
  } catch (error) {
    throw new ChatSDKError(
      "bad_request:database",
      "Failed to set active stream id",
    );
  }
}
```
Cannot clear active stream id by sending undefined (JSON strips it). Use explicit null.
undefined won’t be sent over the wire, so the Convex mutation never receives a signal to clear the field. Align on null to clear and update the Convex validator accordingly.
Apply this diff (client side) and update server/schema as noted in other comments:
```diff
-export async function setActiveStreamId({
-  chatId,
-  activeStreamId,
-}: {
-  chatId: string;
-  activeStreamId: string | undefined;
-}) {
+export async function setActiveStreamId({
+  chatId,
+  activeStreamId,
+}: {
+  chatId: string;
+  activeStreamId: string | null;
+}) {
   try {
     await convex.mutation(api.chats.setActiveStreamId, {
       serviceKey,
       chatId,
       activeStreamId,
     });
     return;
   } catch (error) {
     throw new ChatSDKError(
       "bad_request:database",
       "Failed to set active stream id",
     );
   }
 }
```
🤖 Prompt for AI Agents
In lib/db/actions.ts around lines 287 to 307, the function currently allows
activeStreamId: string | undefined and sends undefined which is stripped by JSON
so the server never receives a clear signal; change the parameter to accept
string | null (or coerce undefined to null) and pass explicit null to
convex.mutation when the caller intends to clear the field, updating the
argument you send (activeStreamId: activeStreamId ?? null) so the wire contains
null; also update any related client types/call sites to use null for clearing
and ensure server/schema Convex validator expects null to clear the field.
Summary by CodeRabbit
New Features
Bug Fixes
Chores