437 changes: 98 additions & 339 deletions app/actions.tsx

Large diffs are not rendered by default.

70 changes: 7 additions & 63 deletions app/api/chat/route.ts
@@ -1,68 +1,12 @@
import { NextResponse, NextRequest } from 'next/server';
import { saveChat, createMessage, NewChat, NewMessage } from '@/lib/actions/chat-db';
import { getCurrentUserIdOnServer } from '@/lib/auth/get-current-user';
// import { generateUUID } from '@/lib/utils'; // Assuming generateUUID is in lib/utils as per PR context - not needed for PKs
import { CoreMessage, streamText } from 'ai';
🧹 Nitpick (assertive)

Remove unused import.

streamText is not used.

-import { CoreMessage, streamText } from 'ai';
+import { CoreMessage } from 'ai';
🤖 Prompt for AI Agents
In app/api/chat/route.ts around line 1, the import currently brings in
streamText which is unused; remove streamText from the import statement so only
the actually used symbols (e.g., CoreMessage) are imported from 'ai' to
eliminate the unused import and any linter warnings.

import { researcher } from '@/lib/agents/researcher';

// This is a simplified POST handler. PR #533's version might be more complex,
// potentially handling streaming AI responses and then saving.
// For now, this focuses on the database interaction part.
export async function POST(request: NextRequest) {
try {
const userId = await getCurrentUserIdOnServer();
if (!userId) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 });
}
export const maxDuration = 30;
🧹 Nitpick (assertive)

Run on Edge for lower latency streaming (optional).

Export runtime = 'edge' alongside maxDuration.

 export const maxDuration = 30;
+export const runtime = 'edge';
🤖 Prompt for AI Agents
In app/api/chat/route.ts around line 4, export the Next.js runtime for this
route to run on the Edge for lower-latency streaming by adding an export for
runtime = 'edge' alongside the existing maxDuration export; update the file to
include an exported runtime constant (e.g., export const runtime = 'edge') in
the same scope as export const maxDuration = 30 so the route will run on the
Edge runtime.


const body = await request.json();
export async function POST(req: Request) {
const { messages }: { messages: CoreMessage[] } = await req.json();

// Example: Distinguish between creating a new chat vs. adding a message to existing chat
// The actual structure of `body` would depend on client-side implementation.
// Let's assume a simple case: creating a new chat with an initial message.
const { title, initialMessageContent, role = 'user' } = body;
const result = await researcher(messages);

if (!initialMessageContent) {
return NextResponse.json({ error: 'Initial message content is required' }, { status: 400 });
}

const newChatData: NewChat = {
// id: generateUUID(), // Drizzle schema now has defaultRandom for UUIDs
userId: userId,
title: title || 'New Chat', // Default title if not provided
// createdAt: new Date(), // Handled by defaultNow() in schema
visibility: 'private', // Default visibility
};

// Use a transaction if creating chat and first message together
// For simplicity here, let's assume saveChat handles chat creation and returns ID, then we create a message.
// A more robust `saveChat` might create the chat and first message in one go.
// The `saveChat` in chat-db.ts is designed to handle this.

const firstMessage: Omit<NewMessage, 'chatId'> = {
// id: generateUUID(), // Drizzle schema now has defaultRandom for UUIDs
// chatId is omitted as it will be set by saveChat
userId: userId,
role: role as NewMessage['role'], // Ensure role type matches schema expectation
content: initialMessageContent,
// createdAt: new Date(), // Handled by defaultNow() in schema, not strictly needed here
};

// The saveChat in chat-db.ts is designed to take initial messages.
const savedChatId = await saveChat(newChatData, [firstMessage]);

if (!savedChatId) {
return NextResponse.json({ error: 'Failed to save chat' }, { status: 500 });
}

// Fetch the newly created chat and message to return (optional, but good for client)
// For now, just return success and the new chat ID.
return NextResponse.json({ message: 'Chat created successfully', chatId: savedChatId }, { status: 201 });

} catch (error) {
console.error('Error in POST /api/chat:', error);
let errorMessage = 'Internal Server Error';
if (error instanceof Error) {
errorMessage = error.message;
}
return NextResponse.json({ error: errorMessage }, { status: 500 });
}
return new Response(result.textStream);
⚠️ Potential issue

Fix streaming response: use toAIStreamResponse() (compile blocker).

result.textStream is typed as ReadableStream&lt;string&gt;, which is not assignable to BodyInit, so new Response(result.textStream) fails to compile. Use the AI SDK helper.

Apply:

-  return new Response(result.textStream);
+  return result.toAIStreamResponse();
🤖 Prompt for AI Agents
In app/api/chat/route.ts around line 11, the code returns new
Response(result.textStream), which fails because ReadableStream<string> is not a
valid BodyInit; instead return result.toAIStreamResponse(), the streaming helper
on the streamText result, so the route produces a proper streaming Response
rather than constructing new Response(...) manually.

}
Comment on lines +6 to 12
🛠️ Refactor suggestion

Harden input parsing and error handling.

Validate messages and return structured errors; guard researcher throws.

-export async function POST(req: Request) {
-  const { messages }: { messages: CoreMessage[] } = await req.json();
-
-  const result = await researcher(messages);
-
-  return new Response(result.textStream);
-}
+export async function POST(req: Request) {
+  try {
+    const body = await req.json().catch(() => ({}));
+    const messages = (body?.messages ?? []) as CoreMessage[];
+    if (!Array.isArray(messages)) {
+      return new Response(JSON.stringify({ error: 'Invalid payload: messages must be an array' }), {
+        status: 400,
+        headers: { 'content-type': 'application/json' },
+      });
+    }
+    const result = await researcher(messages);
+    return result.toAIStreamResponse();
+  } catch (err) {
+    console.error('POST /api/chat error:', err);
+    return new Response(JSON.stringify({ error: 'Internal Server Error' }), {
+      status: 500,
+      headers: { 'content-type': 'application/json' },
+    });
+  }
+}

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In app/api/chat/route.ts around lines 6 to 12, the handler currently assumes
req.json() yields a valid messages array and researcher() never throws; update
to robustly parse and validate input and handle errors: wrap req.json() and
researcher(...) in try/catch, verify messages exists, is an array and each item
conforms to CoreMessage shape (or at minimum has required fields like role and
content), and return structured JSON error responses with appropriate HTTP
status codes (400 for invalid input, 500 for internal errors) and Content-Type:
application/json; when researcher returns an error or throws, catch it and
return a 500 JSON error with the error message, and ensure successful responses
return the text stream or payload in a JSON body or as the correct Response type
with appropriate headers.
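The per-item shape check this prompt calls for (beyond the Array.isArray guard in the suggested diff) could be sketched as follows; note that `isMinimalMessage` and `validateMessages` are illustrative names, not project code:

```typescript
// Minimal per-message shape check: require a string role and a content field.
// This is a hedged sketch, not the CoreMessage type from the 'ai' package.
type MinimalMessage = { role: string; content: unknown };

function isMinimalMessage(value: unknown): value is MinimalMessage {
  return (
    typeof value === 'object' &&
    value !== null &&
    typeof (value as { role?: unknown }).role === 'string' &&
    'content' in value
  );
}

// Returns the typed array when every item passes, or null for a 400 response.
function validateMessages(value: unknown): MinimalMessage[] | null {
  return Array.isArray(value) && value.every(isMinimalMessage)
    ? (value as MinimalMessage[])
    : null;
}
```

The handler would call `validateMessages(body?.messages)` and return the 400 JSON error when it yields null.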

9 changes: 3 additions & 6 deletions app/page.tsx
@@ -1,6 +1,5 @@
import { Chat } from '@/components/chat'
import {nanoid } from 'nanoid'
import { AI } from './actions'

export const maxDuration = 60

@@ -9,10 +8,8 @@ import { MapDataProvider } from '@/components/map/map-data-context'
export default function Page() {
const id = nanoid()
return (
<AI initialAIState={{ chatId: id, messages: [] }}>
<MapDataProvider>
<Chat id={id} />
</MapDataProvider>
</AI>
<MapDataProvider>
<Chat id={id} />
</MapDataProvider>
Comment on lines +11 to +13
🛠️ Refactor suggestion

Stabilize chat session id to prevent resets across re-renders/navigation.

Generating nanoid on each server render creates a new thread id; messages won’t persist.

Option A: client wrapper with lazy init.

'use client'
import { useState } from 'react'
import { nanoid } from 'nanoid'
import { Chat } from '@/components/chat'

export function ChatWithStableId() {
  const [id] = useState(() => nanoid())
  return <Chat id={id} />
}

Then:

-    <MapDataProvider>
-      <Chat id={id} />
-    </MapDataProvider>
+    <MapDataProvider>
+      <ChatWithStableId />
+    </MapDataProvider>

Option B: derive id from route param (preferred for shareable sessions).
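Option B could be sketched like this (hedged: `deriveChatId` is an illustrative helper, not project code, and `randomUUID` stands in for `nanoid`); the dynamic segment's `params.id` from a route like `app/search/[id]/page.tsx` would be passed in:

```typescript
import { randomUUID } from 'node:crypto';

// Hypothetical helper: reuse the id carried by the route param when it looks
// like a chat id, so sessions are stable and shareable; otherwise mint a
// fresh one (randomUUID is used here in place of nanoid).
function deriveChatId(param: string | undefined): string {
  const isValid =
    typeof param === 'string' && /^[A-Za-z0-9_-]{8,36}$/.test(param);
  return isValid ? param : randomUUID();
}
```

Because the id comes from the URL, re-renders and navigation back to the same route keep the same chat thread.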

🤖 Prompt for AI Agents
In app/page.tsx around lines 11 to 13, the Chat id is currently generated per
server render causing session resets; stabilize the id by moving id generation
to the client or deriving it from route params: implement a client component
wrapper that uses useState(() => nanoid()) to create the id once and render
<Chat id={id} /> (or, preferably, read/derive the id from the route param so
sessions are shareable), then replace the server-side instantiation with that
client wrapper so re-renders/navigation do not recreate the id.

)
}
34 changes: 3 additions & 31 deletions app/search/[id]/page.tsx
@@ -1,7 +1,6 @@
import { notFound, redirect } from 'next/navigation';
import { Chat } from '@/components/chat';
import { getChat, getChatMessages } from '@/lib/actions/chat'; // Added getChatMessages
import { AI } from '@/app/actions';
import { MapDataProvider } from '@/components/map/map-data-context';
import { getCurrentUserIdOnServer } from '@/lib/auth/get-current-user'; // For server-side auth
import type { AIMessage } from '@/lib/types'; // For AIMessage type
@@ -41,36 +40,9 @@ export default async function SearchPage({ params }: SearchPageProps) {
notFound();
}

// Fetch messages for the chat
const dbMessages: DrizzleMessage[] = await getChatMessages(chat.id);

// Transform DrizzleMessages to AIMessages
const initialMessages: AIMessage[] = dbMessages.map((dbMsg): AIMessage => {
return {
id: dbMsg.id,
role: dbMsg.role as AIMessage['role'], // Cast role, ensure AIMessage['role'] includes all dbMsg.role possibilities
content: dbMsg.content,
createdAt: dbMsg.createdAt ? new Date(dbMsg.createdAt) : undefined,
// 'type' and 'name' are not in the basic Drizzle 'messages' schema.
// These would be undefined unless specific logic is added to derive them.
// For instance, if a message with role 'tool' should have a 'name',
// or if some messages have a specific 'type' based on content or other flags.
// This mapping assumes standard user/assistant messages primarily.
};
});

return (
<AI
initialAIState={{
chatId: chat.id,
messages: initialMessages, // Use the transformed messages from the database
// isSharePage: true, // This was in PR#533, but share functionality is removed.
// If needed for styling or other logic, it can be set.
}}
>
<MapDataProvider>
<Chat id={id} />
</MapDataProvider>
</AI>
<MapDataProvider>
<Chat id={id} />
</MapDataProvider>
);
}
2 changes: 2 additions & 0 deletions build.sh
@@ -0,0 +1,2 @@
#!/bin/bash
env $(cat .env.local | xargs) ./node_modules/.bin/next build
🛠️ Refactor suggestion

Load .env safely; fix word-splitting and useless cat (SC2046, SC2002).

Current command breaks on spaces/quotes and can leak/execute content. Source with allexport instead.

Apply:

- env $(cat .env.local | xargs) ./node_modules/.bin/next build
+set -euo pipefail
+if [[ -f .env.local ]]; then
+  set -o allexport
+  # shellcheck disable=SC1091
+  source .env.local
+  set +o allexport
+fi
+exec ./node_modules/.bin/next build
🧰 Tools
🪛 Shellcheck (0.10.0)

[warning] 2-2: Quote this to prevent word splitting.

(SC2046)


[style] 2-2: Useless cat. Consider 'cmd < file | ..' or 'cmd file | ..' instead.

(SC2002)

🤖 Prompt for AI Agents
In build.sh around line 2, the current env $(cat .env.local | xargs)
./node_modules/.bin/next build uses cat and xargs and word-splits quoted values
which is unsafe; instead enable automatic export, source the file directly, run
the build, and then disable auto-export. Replace that line with a sequence that
sets allexport (set -a), sources .env.local (.) to import variables safely
without word-splitting, runs ./node_modules/.bin/next build, and then turns off
allexport (set +a).

Binary file modified bun.lockb
Binary file not shown.
173 changes: 173 additions & 0 deletions components/ai-elements/reasoning.tsx
@@ -0,0 +1,173 @@
'use client';

import { useControllableState } from '@radix-ui/react-use-controllable-state';
import {
Collapsible,
CollapsibleContent,
CollapsibleTrigger,
} from '@/components/ui/collapsible';
import { cn } from '@/lib/utils/index';
import { BrainIcon, ChevronDownIcon } from 'lucide-react';
import type { ComponentProps } from 'react';
import { createContext, memo, useContext, useEffect, useState } from 'react';
import { Response } from './response';

type ReasoningContextValue = {
isStreaming: boolean;
isOpen: boolean;
setIsOpen: (open: boolean) => void;
duration: number;
};
Comment on lines +15 to +20
🧹 Nitpick (assertive)

Broaden setter type to accept functional updates

Expose setIsOpen as Dispatch<SetStateAction> for consistency with React setters.

-import type { ComponentProps } from 'react';
+import type { ComponentProps, Dispatch, SetStateAction } from 'react';
...
 type ReasoningContextValue = {
   isStreaming: boolean;
   isOpen: boolean;
-  setIsOpen: (open: boolean) => void;
+  setIsOpen: Dispatch<SetStateAction<boolean>>;
   duration: number;
 };

Also applies to: 22-30

🤖 Prompt for AI Agents
In components/ai-elements/reasoning.tsx around lines 15-20 (and also update the
related declaration around lines 22-30), change the setIsOpen type from a simple
(open: boolean) => void to the React setter type
Dispatch<SetStateAction<boolean>>; import the types if not present (either
import React or import { Dispatch, SetStateAction } from 'react') and replace
both occurrences so the context exposes a standard React setter that accepts
either a boolean or a functional update.


const ReasoningContext = createContext<ReasoningContextValue | null>(null);

const useReasoning = () => {
const context = useContext(ReasoningContext);
if (!context) {
throw new Error('Reasoning components must be used within Reasoning');
}
return context;
};

export type ReasoningProps = ComponentProps<typeof Collapsible> & {
isStreaming?: boolean;
open?: boolean;
defaultOpen?: boolean;
onOpenChange?: (open: boolean) => void;
duration?: number;
};

const AUTO_CLOSE_DELAY = 1000;
const MS_IN_S = 1000;

export const Reasoning = memo(
({
className,
isStreaming = false,
open,
defaultOpen = true,
onOpenChange,
duration: durationProp,
children,
...props
}: ReasoningProps) => {
const [isOpen, setIsOpen] = useControllableState({
prop: open,
defaultProp: defaultOpen,
onChange: onOpenChange,
});
const [duration, setDuration] = useControllableState({
prop: durationProp,
defaultProp: 0,
});

const [hasAutoClosedRef, setHasAutoClosedRef] = useState(false);
const [startTime, setStartTime] = useState<number | null>(null);

Comment on lines +64 to +66
🧹 Nitpick (assertive)

Rename misleading state var (not a ref)

The “Ref” suffix suggests useRef; this is state. Rename for clarity.

-    const [hasAutoClosedRef, setHasAutoClosedRef] = useState(false);
+    const [hasAutoClosed, setHasAutoClosed] = useState(false);
...
-      if (defaultOpen && !isStreaming && isOpen && !hasAutoClosedRef && duration > 0) {
+      if (defaultOpen && !isStreaming && isOpen && !hasAutoClosed && duration > 0) {
...
-          setHasAutoClosedRef(true);
+          setHasAutoClosed(true);
...
-    }, [isStreaming, isOpen, defaultOpen, duration, setIsOpen, hasAutoClosedRef]);
+    }, [isStreaming, isOpen, defaultOpen, duration, setIsOpen, hasAutoClosed]);

Also applies to: 81-86, 90-90

🤖 Prompt for AI Agents
In components/ai-elements/reasoning.tsx around lines 64-66 (and also update
occurrences at lines 81-86 and 90), rename the misleading state variable and
setter from hasAutoClosedRef / setHasAutoClosedRef to hasAutoClosed /
setHasAutoClosed to reflect that this is useState, not a ref; update all
references and usages throughout the file (including conditional checks,
effects, and any callbacks) to use the new names so TypeScript/React compiles
and semantics remain identical.

// Track duration when streaming starts and ends
useEffect(() => {
if (isStreaming) {
if (startTime === null) {
setStartTime(Date.now());
}
} else if (startTime !== null) {
setDuration(Math.ceil((Date.now() - startTime) / MS_IN_S));
setStartTime(null);
}
}, [isStreaming, startTime, setDuration]);

// Auto-open when streaming starts, auto-close when streaming ends (once only)
useEffect(() => {
if (defaultOpen && !isStreaming && isOpen && !hasAutoClosedRef) {
// Add a small delay before closing to allow user to see the content
const timer = setTimeout(() => {
setIsOpen(false);
setHasAutoClosedRef(true);
}, AUTO_CLOSE_DELAY);

return () => clearTimeout(timer);
}
}, [isStreaming, isOpen, defaultOpen, setIsOpen, hasAutoClosedRef]);
Comment on lines +80 to +90
⚠️ Potential issue

Prevent unwanted auto-close when streaming never started

Auto-closing triggers even if no streaming occurred (defaultOpen + not streaming on mount). Gate on a completed stream (e.g., duration > 0) and include duration in deps.

Apply:

-    useEffect(() => {
-      if (defaultOpen && !isStreaming && isOpen && !hasAutoClosedRef) {
+    useEffect(() => {
+      if (defaultOpen && !isStreaming && isOpen && !hasAutoClosedRef && duration > 0) {
         // Add a small delay before closing to allow user to see the content
         const timer = setTimeout(() => {
           setIsOpen(false);
           setHasAutoClosedRef(true);
         }, AUTO_CLOSE_DELAY);

         return () => clearTimeout(timer);
       }
-    }, [isStreaming, isOpen, defaultOpen, setIsOpen, hasAutoClosedRef]);
+    }, [isStreaming, isOpen, defaultOpen, duration, setIsOpen, hasAutoClosedRef]);
🤖 Prompt for AI Agents
In components/ai-elements/reasoning.tsx around lines 80 to 90, the auto-close
currently fires on mount when defaultOpen is true even if streaming never
started; update the effect to only auto-close when a completed stream exists by
adding a check that streamDuration (or the existing variable tracking streaming
length) is > 0, and include that streamDuration in the dependency array; keep
the timer/cleanup logic but gate the setTimeout behind defaultOpen &&
!isStreaming && isOpen && !hasAutoClosedRef && streamDuration > 0 so the panel
won't auto-close unless a stream actually ran.


const handleOpenChange = (newOpen: boolean) => {
setIsOpen(newOpen);
};

return (
<ReasoningContext.Provider
value={{ isStreaming, isOpen, setIsOpen, duration }}
>
<Collapsible
className={cn('not-prose mb-4', className)}
onOpenChange={handleOpenChange}
open={isOpen}
{...props}
>
{children}
</Collapsible>
</ReasoningContext.Provider>
);
}
);

export type ReasoningTriggerProps = ComponentProps<typeof CollapsibleTrigger>;

export const ReasoningTrigger = memo(
({ className, children, ...props }: ReasoningTriggerProps) => {
const { isStreaming, isOpen, duration } = useReasoning();

return (
<CollapsibleTrigger
className={cn(
'flex items-center gap-2 text-muted-foreground text-sm',
className
)}
{...props}
>
{children ?? (
<>
<BrainIcon className="size-4" />
{isStreaming || duration === 0 ? (
<p>Thinking...</p>
) : (
<p>
Thought for {duration} {duration === 1 ? 'second' : 'seconds'}
</p>
)}
<ChevronDownIcon
className={cn(
'size-4 text-muted-foreground transition-transform',
isOpen ? 'rotate-180' : 'rotate-0'
)}
/>
</>
)}
</CollapsibleTrigger>
);
}
);

export type ReasoningContentProps = ComponentProps<
typeof CollapsibleContent
> & {
children: string;
};

export const ReasoningContent = memo(
({ className, children, ...props }: ReasoningContentProps) => (
<CollapsibleContent
className={cn(
'mt-4 text-sm',
'data-[state=closed]:fade-out-0 data-[state=closed]:slide-out-to-top-2 data-[state=open]:slide-in-from-top-2 text-popover-foreground outline-none data-[state=closed]:animate-out data-[state=open]:animate-in',
className
)}
{...props}
>
<Response className="grid gap-2">{children}</Response>
</CollapsibleContent>
)
);

Reasoning.displayName = 'Reasoning';
ReasoningTrigger.displayName = 'ReasoningTrigger';
ReasoningContent.displayName = 'ReasoningContent';
22 changes: 22 additions & 0 deletions components/ai-elements/response.tsx
@@ -0,0 +1,22 @@
'use client';

import { cn } from '@/lib/utils/index';
import { type ComponentProps, memo } from 'react';
import { Streamdown } from 'streamdown';

type ResponseProps = ComponentProps<typeof Streamdown>;

export const Response = memo(
({ className, ...props }: ResponseProps) => (
<Streamdown
className={cn(
'size-full [&>*:first-child]:mt-0 [&>*:last-child]:mb-0',
className
)}
Comment on lines +13 to +15
🧹 Nitpick (assertive)

Tailwind compatibility nit: prefer w-full h-full for broader support.

size-full requires Tailwind CSS v3.4 or newer; w-full h-full works on all versions.

-        'size-full [&>*:first-child]:mt-0 [&>*:last-child]:mb-0',
+        'w-full h-full [&>*:first-child]:mt-0 [&>*:last-child]:mb-0',
🤖 Prompt for AI Agents
In components/ai-elements/response.tsx around lines 13 to 15, the class uses the
Tailwind utility 'size-full' which requires a newer Tailwind version; replace it
with the broader-compatible utilities 'w-full h-full' to ensure consistent
sizing. Update the class list to use 'w-full h-full [&>*:first-child]:mt-0
[&>*:last-child]:mb-0', preserving the rest of the className merging logic.

{...props}
/>
),
(prevProps, nextProps) => prevProps.children === nextProps.children
);
Comment on lines +9 to +20
🛠️ Refactor suggestion

⚠️ Potential issue

Custom memo comparator skips updates when props (e.g., className, components) change.

Restricting re-render to children only can produce stale UI.

Use default shallow compare:

-export const Response = memo(
-  ({ className, ...props }: ResponseProps) => (
+export const Response = memo(
+  ({ className, ...props }: ResponseProps) => (
     <Streamdown
       className={cn(
-        'size-full [&>*:first-child]:mt-0 [&>*:last-child]:mb-0',
+        'size-full [&>*:first-child]:mt-0 [&>*:last-child]:mb-0',
         className
       )}
       {...props}
     />
-  ),
-  (prevProps, nextProps) => prevProps.children === nextProps.children
-);
+  )
+);

If you must customize, include all relevant props in the comparison.
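A comparator that covers every prop might look like this sketch (`arePropsEqual` is illustrative, not from the PR; it is effectively a shallow compare over the union of keys):

```typescript
// Hypothetical comparator for React.memo: skip the re-render only when every
// prop is unchanged (children, className, and anything passed via ...props).
type Props = { children?: unknown; className?: string; [key: string]: unknown };

function arePropsEqual(prev: Props, next: Props): boolean {
  const keys = new Set([...Object.keys(prev), ...Object.keys(next)]);
  for (const key of keys) {
    if (!Object.is(prev[key], next[key])) return false;
  }
  return true;
}
```

This matches what React.memo's default shallow comparison already does, which is why dropping the custom comparator is the simpler fix.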

🤖 Prompt for AI Agents
In components/ai-elements/response.tsx around lines 9 to 20, the custom memo
comparator only compares children so updates to other props like className or
components are skipped and UI can become stale; replace the custom comparator
with React.memo's default shallow comparison by removing the second argument
entirely, or if a custom comparator is required, compare all relevant props
(e.g., children, className, components and any other props spread via ...props)
to ensure re-renders occur when any of those change.


Response.displayName = 'Response';