Daily branch 2025 09 10 #26

Conversation
Caution: Review failed. The pull request is closed.

Walkthrough

Threaded isPro and per-plan token limits through chat, memory, truncation, and rate-limit paths; replaced provider usage with a PostHog-traced provider factory; removed two AI tools and related local-file/multi-edit utilities; added a client PostHog provider and dependencies; removed AI_GATEWAY_API_KEY setup.

Changes

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant API as POST /api/chat
    participant Auth as Auth/DB
    participant DB as DB actions
    participant ProvFactory as createTrackedProvider
    participant System as systemPrompt
    participant Processor as processChatMessages
    participant PostHog
    Client->>API: POST /chat (message, chatId, mode)
    API->>Auth: validate token → userId, isPro
    API->>DB: getMessagesByChatId(chatId, userId, newMessages, isPro)
    API->>ProvFactory: createTrackedProvider(userId, chatId, isPro)
    ProvFactory->>PostHog: (optional) wrap models with tracing & metadata
    API->>System: systemPrompt(userId, isPro, mode)
    API->>Processor: processChatMessages({ messages, mode })
    Processor->>Client: stream response chunks
    Processor->>PostHog: emit tool-call events (when traced)
```

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs
📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📒 Files selected for processing (1)

✨ Finishing Touches
🧪 Generate unit tests
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
lib/utils/file-token-utils.ts (1)
41-42: Use Id<'files'> in the Record key type to satisfy repo typing rules.

The current return type uses Record<string, number>. Per guidelines, prefer branded IDs.

Apply:

```diff
-export async function getFileTokensByIds(
-  fileIds: Id<"files">[],
-): Promise<Record<string, number>> {
+export async function getFileTokensByIds(
+  fileIds: Id<"files">[],
+): Promise<Record<Id<"files">, number>> {
@@
-  const fileTokenMap: Record<string, number> = {};
+  const fileTokenMap: Record<Id<"files">, number> = {} as Record<Id<"files">, number>;
   for (let i = 0; i < fileIds.length; i++) {
     fileTokenMap[fileIds[i]] = tokens[i] || 0;
   }
```

Note: truncateMessagesToTokenLimit accepts Record<string, number>. Since Id<'files'> extends string, this remains assignable at call sites.

Also applies to: 53-56
lib/db/actions.ts (1)
169-181: Adding the isPro param is fine. Also switch the chatId/userId types to Convex Id<>.

Per repo guidelines, avoid plain string for document IDs.

Apply:

```diff
+import { Id } from "@/convex/_generated/dataModel";
@@
 export async function getMessagesByChatId({
   chatId,
   userId,
   newMessages,
   regenerate,
   isPro,
 }: {
-  chatId: string;
-  userId: string;
+  chatId: Id<"chats">;
+  userId: Id<"users">;
   newMessages: UIMessage[];
   regenerate?: boolean;
   isPro: boolean;
 }) {
```

Note: Adjust callers (e.g., route.ts) to pass typed IDs.
app/api/chat/route.ts (1)
57-66: Type request body IDs with Convex Id<> to satisfy repo rules.

Update chatId and userId usages accordingly.

Apply:

```diff
-import { NextRequest } from "next/server";
+import { NextRequest } from "next/server";
+import type { Id } from "@/convex/_generated/dataModel";
@@
   }: {
     messages: UIMessage[];
     mode: ChatMode;
-    chatId: string;
+    chatId: Id<"chats">;
     todos?: Todo[];
     regenerate?: boolean;
   } = await req.json();
@@
-  const { userId, isPro } = await getUserIDAndPro(req);
+  const { userId, isPro } = await getUserIDAndPro(req); // ensure return type is Id<"users">
```

If getUserIDAndPro returns string, narrow its return type to Id<'users'> at the source.

Also applies to: 68-76
🧹 Nitpick comments (2)
convex/memories.ts (1)
7-7: Update the function comment to reflect the new token limits.

The comment should specify the exact token values for clarity.

```diff
- * Enforces token limit based on user plan (10k for pro, 5k for free)
+ * Enforces token limit based on user plan (10,000 for pro, 5,000 for free)
```

lib/utils/file-token-utils.ts (1)

18-19: Tighten part typing to avoid any and unsafe casts.

Use the discriminated union for file parts to eliminate (part as any).

Example:

```diff
-import { UIMessagePart } from "ai";
+import type { UIMessagePart } from "ai";
 export function extractFileIdsFromParts(
-  parts: UIMessagePart<any, any>[],
+  parts: Array<UIMessagePart>,
 ): Id<"files">[] {
@@
-    if (part.type === "file") {
-      // Check if fileId exists directly
-      if ((part as any).fileId) {
-        fileIds.push((part as any).fileId as Id<"files">);
-      }
-    }
+    if (part.type === "file" && "fileId" in part && part.fileId) {
+      fileIds.push(part.fileId as Id<"files">);
+    }
```

Also applies to: 22-29
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (15)
- .env.local.example (0 hunks)
- app/api/chat/route.ts (4 hunks)
- app/components/ChatInput.tsx (2 hunks)
- app/hooks/useChatHandlers.ts (3 hunks)
- app/layout.tsx (2 hunks)
- app/providers.tsx (1 hunks)
- convex/memories.ts (2 hunks)
- lib/ai/providers.ts (1 hunks)
- lib/chat/chat-processor.ts (0 hunks)
- lib/db/actions.ts (3 hunks)
- lib/system-prompt.ts (2 hunks)
- lib/token-utils.ts (3 hunks)
- lib/utils/file-token-utils.ts (2 hunks)
- package.json (2 hunks)
- scripts/setup.ts (0 hunks)
💤 Files with no reviewable changes (3)
- .env.local.example
- scripts/setup.ts
- lib/chat/chat-processor.ts
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{ts,tsx}
📄 CodeRabbit inference engine (.cursor/rules/convex_rules.mdc)
**/*.{ts,tsx}: Use Id helper type from ./_generated/dataModel to type document IDs (e.g., Id<'users'>) instead of string
When defining Record types, specify key and value types matching validators (e.g., Record<Id<'users'>, string>)
Be strict with types for document IDs; prefer Id<'table'> over string in function args and variables
Use as const for string literals in discriminated unions
Declare arrays with explicit generic type: const arr: Array<T> = [...]
Declare records with explicit generic types: const record: Record<KeyType, ValueType> = {...}
Files:
app/layout.tsx, app/providers.tsx, convex/memories.ts, lib/ai/providers.ts, app/components/ChatInput.tsx, app/hooks/useChatHandlers.ts, lib/token-utils.ts, lib/system-prompt.ts, lib/utils/file-token-utils.ts, lib/db/actions.ts, app/api/chat/route.ts
convex/**/*.ts
📄 CodeRabbit inference engine (.cursor/rules/convex_rules.mdc)
convex/**/*.ts: Always use the new Convex function syntax (query/mutation/action objects with args/returns/handler) when defining Convex functions
When a function returns null, include returns: v.null() and return null explicitly
Use internalQuery/internalMutation/internalAction for private functions callable only by other Convex functions; do not expose sensitive logic via public query/mutation/action
Use query/mutation/action only for public API functions
Do not try to register functions via the api or internal objects
Always include argument and return validators for all Convex functions (query/internalQuery/mutation/internalMutation/action/internalAction)
In JS implementations, functions without an explicit return value implicitly return null
Use ctx.runQuery from queries/mutations/actions to call a query
Use ctx.runMutation from mutations/actions to call a mutation
Use ctx.runAction from actions to call an action
Only call an action from another action when crossing runtimes (e.g., V8 to Node); otherwise extract shared helper code
Minimize calls from actions to queries/mutations to avoid race conditions from splitting transactions
Pass FunctionReference values (from api/internal) to ctx.runQuery/ctx.runMutation/ctx.runAction; do not pass function implementations
When calling a function in the same file via ctx.run*, add an explicit return type annotation at the call site to avoid TS circularity
Use the generated api object for public functions and internal object for internal functions from convex/_generated/api.ts
Respect file-based routing for function references: e.g., convex/example.ts export f -> api.example.f; nested paths map to dot-separated namespaces
For paginated queries use paginationOptsValidator in args and .paginate(args.paginationOpts) on a query
v.bigint() is deprecated; use v.int64() for signed 64-bit integers
Use v.record(keys, values) for record-like data; v.map() and v.set() are not supported
For full-text search, use withSearchIndex("ind...
Files:
convex/memories.ts
package.json
📄 CodeRabbit inference engine (.cursor/rules/convex_rules.mdc)
Add @types/node to devDependencies when using Node.js built-in modules
Files:
package.json
🧬 Code graph analysis (8)
app/layout.tsx (2)
app/providers.tsx (1)
PostHogProvider (7-19)
app/contexts/TodoBlockContext.tsx (1)
TodoBlockProvider (20-40)
lib/ai/providers.ts (1)
app/posthog.js (1)
PostHogClient(3-17)
app/components/ChatInput.tsx (1)
lib/token-utils.ts (3)
countInputTokens (132-142), MAX_TOKENS_PRO (4-4), MAX_TOKENS_FREE (5-5)
app/hooks/useChatHandlers.ts (1)
lib/token-utils.ts (3)
countInputTokens (132-142), MAX_TOKENS_PRO (4-4), MAX_TOKENS_FREE (5-5)
lib/system-prompt.ts (1)
lib/db/actions.ts (1)
getMemories(242-260)
lib/utils/file-token-utils.ts (1)
lib/token-utils.ts (1)
truncateMessagesToTokenLimit(54-80)
lib/db/actions.ts (1)
lib/utils/file-token-utils.ts (1)
truncateMessagesWithFileTokens(93-105)
app/api/chat/route.ts (4)
lib/chat/chat-processor.ts (1)
processChatMessages (69-123)
app/posthog.js (1)
PostHogClient (3-17)
lib/ai/providers.ts (1)
createTrackedProvider (28-56)
lib/system-prompt.ts (1)
systemPrompt (136-195)
🔇 Additional comments (21)
app/providers.tsx (1)
1-19: LGTM! Well-implemented PostHog provider setup. The implementation correctly follows PostHog best practices with client-side initialization, environment variable checks, and manual event capture configuration.
convex/memories.ts (1)
13-13: LGTM! Plan-based memory token limiting implemented correctly. The addition of the isPro parameter and plan-based token limits enhances the system's ability to provide differentiated service levels.

Also applies to: 35-36
app/layout.tsx (1)
10-10: LGTM! PostHog provider integration is well-positioned. The placement of PostHogProvider between GlobalStateProvider and TodoBlockProvider ensures proper initialization order while maintaining the existing context hierarchy.
Also applies to: 96-103
lib/system-prompt.ts (1)
138-138: LGTM! System prompt now supports plan-aware memory retrieval. The addition of the isPro parameter and its propagation to the getMemories function enables proper plan-based memory token limiting.

Also applies to: 147-149
lib/token-utils.ts (3)
4-5: LGTM! Clear separation of token limits by plan. The new constants provide clear differentiation between Pro and Free plan token limits.

54-58: LGTM! Function signature updated for plan-based token limiting. The updated signature correctly implements plan-aware token limiting with sensible defaults.

Also applies to: 60-60

59-60: LGTM! Token limit calculation logic is correct. The implementation properly selects the appropriate token limit based on the user's plan status.

Also applies to: 73-73
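The plan-aware budgeting approved above can be illustrated with a standalone sketch; the constant values and the newest-first retention policy are assumptions mirroring the review's description, not the actual lib/token-utils.ts code:

```typescript
// Assumed per-plan budgets (the real values live in lib/token-utils.ts).
const MAX_TOKENS_PRO = 10_000;
const MAX_TOKENS_FREE = 5_000;

// Keep the newest messages whose combined token counts fit the plan's budget.
function truncateToPlanLimit(
  tokenCounts: Array<number>,
  isPro: boolean,
): Array<number> {
  const limit = isPro ? MAX_TOKENS_PRO : MAX_TOKENS_FREE;
  const kept: Array<number> = [];
  let total = 0;
  // Walk newest-to-oldest, stopping once the next message would exceed the budget.
  for (let i = tokenCounts.length - 1; i >= 0; i--) {
    if (total + tokenCounts[i] > limit) break;
    total += tokenCounts[i];
    kept.unshift(tokenCounts[i]);
  }
  return kept;
}
```

With message costs [6000, 4000, 3000], a free-plan user keeps only the newest message while a pro user keeps the newest two, which is the differentiated behavior these comments approve.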
app/components/ChatInput.tsx (2)
35-39: LGTM! Updated imports for plan-based token limits. The import changes correctly support the new per-plan token limit functionality.

144-154: LGTM! Plan-aware paste validation implemented correctly. The logic properly enforces different token limits based on the user's plan and provides clear, differentiated error messaging.
app/hooks/useChatHandlers.ts (2)
7-11: LGTM! Updated imports and state access for plan-based limits. The changes correctly import the new token constants and access the user's plan status from global state.

Also applies to: 49-49

71-81: LGTM! Plan-aware token validation with clear messaging. The implementation properly validates token limits based on the user's plan and provides appropriate error messaging that includes plan-specific context.
lib/utils/file-token-utils.ts (2)
52-56: Index-based ID→token mapping is fragile. Prefer a backend-supplied mapping or explicit pairing.

If the tokens ordering ever diverges from fileIds, counts will be misassigned.

- Confirm api.fileStorage.getFileTokensByFileIds guarantees order-aligned arrays.
- If not, return a map { [fileId]: tokenCount } from the backend and consume it directly here.
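The map-based consumption suggested here can be sketched as follows; the FileId alias and entry shape are illustrative stand-ins (the real code uses branded Id<'files'> and a Convex query):

```typescript
// Illustrative alias; the repo's convention is the branded Id<"files"> type.
type FileId = string;

// Building the map from explicit (fileId, tokens) pairs removes any dependence
// on two parallel arrays staying order-aligned.
function buildTokenMap(
  entries: Array<{ fileId: FileId; tokens?: number }>,
): Record<FileId, number> {
  const map: Record<FileId, number> = {};
  for (const { fileId, tokens } of entries) {
    map[fileId] = tokens ?? 0;
  }
  return map;
}
```

If the backend returns explicit pairs (or a map) instead of two arrays, a reordered query result can no longer misassign counts.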
90-96: All call sites updated; no stale references to truncateMessagesWithFileTokens remain.

lib/ai/providers.ts (1)
33-37: Remove the import verification: PostHog is already imported. app/posthog.js includes import { PostHog } from "posthog-node";, so no additional import is needed. Likely an incorrect or invalid review comment.
lib/db/actions.ts (1)
218-221: Good: token truncation now respects plan and file tokens.app/api/chat/route.ts (4)
68-76: LGTM: isPro threaded into history retrieval/truncation.
122-126: Tracked provider usage looks correct. Wrapping models with PostHog tracing and selecting via languageModel(selectedModel) is the right integration point.

135-144: Confirmed valid provider.sort values: the OpenRouter SDK supports "price", "throughput", and "latency". Using "price" and "latency" here is correct.
99-101: PostHog client instantiation verified. app/posthog.js imports PostHog and returns null if process.env.NEXT_PUBLIC_POSTHOG_KEY/HOST are unset; in app/api/chat/route.ts the client is called once and guarded with if (posthog) before any .capture() call, so no changes are required.

package.json (2)

54-54: Guard posthog-js to client-only usage. Import posthog-js exclusively in files with a "use client" directive (e.g. create a wrapper at lib/analytics/posthog-client.ts that initializes and exports PostHog) and audit all API routes and server modules to ensure none import posthog-js.

21-21: Verify Node ≥ 20 and PostHog env vars

- @posthog/ai@6.3.2 peerDependencies: posthog-node@^5.0.0; engines require Node ≥ 20 (no React/Next constraints).
- Ensure your dev/CI/prod runtime is using Node 20+.
- Confirm both NEXT_PUBLIC_POSTHOG_KEY and NEXT_PUBLIC_POSTHOG_HOST are defined (used in app/providers.tsx and app/posthog.js).
```ts
const baseProviders = {
  "ask-model": openrouter(
    process.env.NEXT_PUBLIC_ASK_MODEL || "qwen/qwen3-coder",
  ),
  "agent-model": openrouter(
    process.env.NEXT_PUBLIC_AGENT_MODEL || "qwen/qwen3-coder",
  ),
  "vision-model": openrouter(
    process.env.NEXT_PUBLIC_VISION_MODEL || "google/gemini-2.5-flash",
  ),
  "vision-base64-model": openrouter(
    process.env.NEXT_PUBLIC_VISION_BASE64_MODEL || "google/gemini-2.5-flash",
  ),
  "title-generator-model": openrouter(
    process.env.NEXT_PUBLIC_TITLE_MODEL || "google/gemini-2.5-flash",
  ),
};
```
🛠️ Refactor suggestion
Annotate baseProviders as a typed record of models.
Aligns with “explicit generic types for records” guideline and improves IntelliSense.
Apply:
-import { customProvider } from "ai";
+import { customProvider, type LanguageModelV1 } from "ai";
@@
-const baseProviders = {
+const baseProviders: Record<string, LanguageModelV1> = {

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
// At the top of lib/ai/providers.ts
import { customProvider, type LanguageModelV1 } from "ai";
// … any other imports …

const baseProviders: Record<string, LanguageModelV1> = {
  "ask-model": openrouter(
    process.env.NEXT_PUBLIC_ASK_MODEL || "qwen/qwen3-coder",
  ),
  "agent-model": openrouter(
    process.env.NEXT_PUBLIC_AGENT_MODEL || "qwen/qwen3-coder",
  ),
  "vision-model": openrouter(
    process.env.NEXT_PUBLIC_VISION_MODEL || "google/gemini-2.5-flash",
  ),
  "vision-base64-model": openrouter(
    process.env.NEXT_PUBLIC_VISION_BASE64_MODEL || "google/gemini-2.5-flash",
  ),
  "title-generator-model": openrouter(
    process.env.NEXT_PUBLIC_TITLE_MODEL || "google/gemini-2.5-flash",
  ),
};
```
🤖 Prompt for AI Agents
In lib/ai/providers.ts around lines 6 to 22, baseProviders is currently untyped;
add an explicit type annotation so the object is a Record of the known provider
keys to the openrouter return type. Define a union type for the provider keys
(e.g. "ask-model", "agent-model", "vision-model", "vision-base64-model",
"title-generator-model") and annotate baseProviders as Record<ThatUnion,
ReturnType<typeof openrouter>> (or equivalent), so IntelliSense and
type-checking know the exact keys and value shape.
```ts
const trackedModels: Record<string, any> = {};

Object.entries(baseProviders).forEach(([modelName, model]) => {
```
🛠️ Refactor suggestion
Avoid any for trackedModels; type as LanguageModelV1.
Keeps provider surface strictly typed.
Apply:
- const trackedModels: Record<string, any> = {};
+ const trackedModels: Record<string, LanguageModelV1> = {};
@@
- return customProvider({
+ return customProvider({
languageModels: trackedModels,
  });

Also applies to: 53-56
🤖 Prompt for AI Agents
In lib/ai/providers.ts around lines 39-41 and 53-56, trackedModels is typed as
Record<string, any>; change it to Record<string, LanguageModelV1> (or
Map<string, LanguageModelV1> if preferred) and update any other trackedModels
declarations in the same region to use LanguageModelV1. Import or reference the
LanguageModelV1 type at the top of the file, and adjust any code that relies on
the previous any typing (e.g., method/property accesses or assignments) to
satisfy the stricter type so the provider surface remains strictly typed.
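The key-union typing urged in these comments can be shown in isolation; LanguageModelV1 here is a local stand-in for the ai package type, and the model names are assumptions:

```typescript
// Local stand-ins so the sketch is self-contained; the real types come from "ai".
type LanguageModelV1 = { modelId: string };
type ModelKey = "ask-model" | "agent-model" | "vision-model";

const openrouter = (modelId: string): LanguageModelV1 => ({ modelId });

// A union key type rejects typos like "askmodel" at compile time and keeps
// IntelliSense exact for both the base and the wrapped provider maps.
const baseProviders: Record<ModelKey, LanguageModelV1> = {
  "ask-model": openrouter("qwen/qwen3-coder"),
  "agent-model": openrouter("qwen/qwen3-coder"),
  "vision-model": openrouter("google/gemini-2.5-flash"),
};

// Wrapping (here a no-op spread) preserves the strictly typed surface.
const trackedModels: Record<ModelKey, LanguageModelV1> = { ...baseProviders };
```

Typing both maps with the same Record<ModelKey, …> means the tracing wrapper cannot silently drop or rename a model key.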
```ts
export async function getMemories({
  userId,
  isPro,
}: {
  userId: string;
  isPro: boolean;
}) {
  try {
    const memories = await convex.query(api.memories.getMemoriesForBackend, {
      serviceKey,
      userId,
      isPro,
    });
```
🛠️ Refactor suggestion
Propagating isPro into memories is correct; type userId as Id<'users'>.
Apply:
-export async function getMemories({
- userId,
- isPro,
-}: {
- userId: string;
- isPro: boolean;
-}) {
+export async function getMemories({
+ userId,
+ isPro,
+}: {
+ userId: Id<"users">;
+ isPro: boolean;
+}) {

📝 Committable suggestion
```ts
export async function getMemories({
  userId,
  isPro,
}: {
  userId: Id<"users">;
  isPro: boolean;
}) {
  try {
    const memories = await convex.query(api.memories.getMemoriesForBackend, {
      serviceKey,
      userId,
      isPro,
    });
```
🤖 Prompt for AI Agents
In lib/db/actions.ts around lines 242 to 254, the getMemories function's userId
parameter should be typed as Id<'users'> rather than string and isPro
propagation is already correct; update the function signature and parameter type
annotation to userId: Id<'users'> (add or update the necessary import for Id
from the Convex/typing module used in this project), and keep passing isPro
through to the convex.query call unchanged.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
lib/ai/tools/web.ts (1)
26-49: Enforce HTTPS URLs and command-specific required fields in inputSchema. Strengthen validation to prevent bad inputs and accidental open_url calls without a URL.

```diff
-      url: z
-        .string()
-        .optional()
+      url: z
+        .string()
+        .url("Must be a valid URL")
+        .refine((u) => /^https?:\/\//i.test(u), "Only http(s) URLs are allowed")
+        .optional()
```

Add this right after the z.object({...}) to require the right field per command:

```ts
// append after the existing z.object({...})
.superRefine((data, ctx) => {
  if (data.command === "search" && !data.query) {
    ctx.addIssue({
      code: z.ZodIssueCode.custom,
      path: ["query"],
      message: "query is required when command is 'search'",
    });
  }
  if (data.command === "open_url" && !data.url) {
    ctx.addIssue({
      code: z.ZodIssueCode.custom,
      path: ["url"],
      message: "url is required when command is 'open_url'",
    });
  }
})
```
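A dependency-free sketch of the same per-command rules, with a plain function standing in for zod's superRefine (the WebInput shape is an assumption mirroring the tool's schema):

```typescript
// Assumed input shape; the real schema is the zod object in lib/ai/tools/web.ts.
type WebInput = { command: "search" | "open_url"; query?: string; url?: string };

// Returns the list of validation messages; an empty array means the input is valid.
function validateWebInput(input: WebInput): Array<string> {
  const issues: Array<string> = [];
  if (input.command === "search" && !input.query) {
    issues.push("query is required when command is 'search'");
  }
  if (input.command === "open_url") {
    if (!input.url) {
      issues.push("url is required when command is 'open_url'");
    } else if (!/^https?:\/\//i.test(input.url)) {
      issues.push("Only http(s) URLs are allowed");
    }
  }
  return issues;
}
```

The point of cross-field checks like this (or superRefine) is that per-field validators alone cannot express "this field is required only for that command."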
20-20: Remove unused useIsMobile import
PricingDialog.tsx no longer references useIsMobile, so remove the import { useIsMobile } from "@/hooks/use-mobile"; line.
🧹 Nitpick comments (5)
lib/ai/tools/web.ts (5)
59-62: Guard missing EXA_API_KEY early. Fail fast with a clear message instead of relying on downstream exceptions.

```diff
   try {
-    const exa = new Exa(process.env.EXA_API_KEY);
+    if (!process.env.EXA_API_KEY) {
+      return "Error: EXA_API_KEY is not set in the environment";
+    }
+    const exa = new Exa(process.env.EXA_API_KEY);
```

72-78: Make maxCharacters plan-aware and centralized. Align with the PR's isPro propagation and avoid magic numbers.

```diff
-    const searchOptions = {
+    const searchOptions = {
       type: "auto" as const,
       text: {
-        maxCharacters: 2000,
+        maxCharacters: maxSearchChars,
       },
       ...(country && { userLocation: country }),
     };
```

```diff
-    result = await exa.searchAndContents(query, {
-      type: "auto",
-      text: {
-        maxCharacters: 2000,
-      },
-    });
+    result = await exa.searchAndContents(query, {
+      type: "auto" as const,
+      text: { maxCharacters: maxSearchChars },
+    });
```

Add near the top (after line 13) and destructure isPro from context:

```diff
-  const { userLocation } = context;
+  const { userLocation, isPro } = context;
+  const SEARCH_MAX_CHARS_FREE = 2000;
+  const SEARCH_MAX_CHARS_PRO = 4000; // tune if needed
+  const OPEN_URL_MAX_CHARS_FREE = 8000;
+  const OPEN_URL_MAX_CHARS_PRO = 12000;
+  const maxSearchChars = isPro ? SEARCH_MAX_CHARS_PRO : SEARCH_MAX_CHARS_FREE;
+  const maxOpenUrlChars = isPro ? OPEN_URL_MAX_CHARS_PRO : OPEN_URL_MAX_CHARS_FREE;
```

Also applies to: 84-89

80-89: Surface the first failure for observability. Keep the silent retry, but log the original error for debugging.

```diff
   } catch (firstError: any) {
     // Always retry without userLocation as fallback
+    console.warn(
+      "Exa search with userLocation failed; retrying without location:",
+      firstError?.message ?? firstError,
+    );
     result = await exa.searchAndContents(query, {
       type: "auto",
       text: {
         maxCharacters: 2000,
       },
     });
```

98-101: Make the open_url limit plan-aware and reuse the centralized constant. Stay consistent with the search limits and the PR's token budgeting.

```diff
-      const results = await exa.getContents([url], {
-        text: { maxCharacters: 12000 },
-      });
+      const results = await exa.getContents([url], {
+        text: { maxCharacters: maxOpenUrlChars },
+      });
```

92-110: Normalize the tool output shape. It currently mixes arrays (success) and strings (errors). Consider a consistent envelope to simplify callers.

```diff
-      return result.results;
+      return { ok: true, results: result.results };
 ...
-      return results.results;
+      return { ok: true, results: results.results };
 ...
-      return "Error: Invalid command";
+      return { ok: false, error: "Invalid command" };
 ...
-      return `Error performing web operation: ${errorMessage}`;
+      return { ok: false, error: `Error performing web operation: ${errorMessage}` };
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- app/components/PricingDialog.tsx (2 hunks)
- lib/ai/tools/web.ts (3 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.{ts,tsx}
📄 CodeRabbit inference engine (.cursor/rules/convex_rules.mdc)
**/*.{ts,tsx}: Use Id helper type from ./_generated/dataModel to type document IDs (e.g., Id<'users'>) instead of string
When defining Record types, specify key and value types matching validators (e.g., Record<Id<'users'>, string>)
Be strict with types for document IDs; prefer Id<'table'> over string in function args and variables
Use as const for string literals in discriminated unions
Declare arrays with explicit generic type: const arr: Array<T> = [...]
Declare records with explicit generic types: const record: Record<KeyType, ValueType> = {...}
Files:
lib/ai/tools/web.ts, app/components/PricingDialog.tsx
🧬 Code graph analysis (1)
app/components/PricingDialog.tsx (2)
app/contexts/GlobalState.tsx (1)
useGlobalState (344-350)
app/hooks/useUpgrade.ts (1)
useUpgrade(5-73)
🔇 Additional comments (5)
lib/ai/tools/web.ts (1)
70-101: Exa-js option casing and URL support validated. The code correctly uses camelCase text.maxCharacters, passes userLocation as a country string, and supplies an array of URLs to getContents.

app/components/PricingDialog.tsx (4)
124-124: LGTM! Simplified authentication state handling. Removing the loading state from useAuth() simplifies the component and suggests that loading states are now handled elsewhere in the application flow.

126-126: LGTM! Streamlined upgrade hook usage. Removing upgradeError from the hook return aligns with the local error handling approach in handleUpgradeClick. The error state is now managed via console logging rather than UI display, which simplifies the component.

136-144: LGTM! Improved error handling approach. The error handling in handleUpgradeClick now uses console logging instead of relying on the hook's error state, which aligns with the removal of upgradeError from the hook return. The try-catch structure properly handles upgrade failures.

260-260: Verify Pro plan pricing change impact. Coordinate with marketing, billing systems, and existing customer communications before deploying the Pro plan price increase from $20 to $25.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
app/api/chat/route.ts (1)
38-40: Propagate the request abort to the controller to stop downstream work. The handler currently only logs. Also call controller.abort() so streamText/tooling can cancel promptly.

```diff
   req.signal.addEventListener("abort", () => {
     console.log("Request aborted");
+    controller.abort();
   });
```
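The propagation above amounts to linking the request's signal to the local controller; a minimal sketch (linkAbort is a hypothetical helper name, not code from this PR):

```typescript
// Forward an incoming request's abort signal into a local AbortController so
// downstream work (streaming, tool calls) is cancelled promptly.
function linkAbort(source: AbortSignal, target: AbortController): void {
  if (source.aborted) {
    // The request was already aborted before we attached the listener.
    target.abort();
    return;
  }
  source.addEventListener("abort", () => target.abort(), { once: true });
}
```

Checking source.aborted first covers the race where the client disconnects before the listener is registered, which a bare addEventListener call would miss.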
♻️ Duplicate comments (1)
lib/db/actions.ts (1)
240-246: Type userId as Id<'users'> (repeat). Same note as the past review: userId should be Id<'users'>.

```diff
+import type { Id } from "@/convex/_generated/dataModel";
 export async function getMemories({
   userId,
   isPro,
 }: {
-  userId: string;
+  userId: Id<"users">;
   isPro: boolean;
 }) {
```
🧹 Nitpick comments (4)
lib/db/actions.ts (1)
167-179: Propagating isPro: good; type the IDs with Convex Id<>. LGTM on threading isPro. Per guidelines, use Id<'chats'> and Id<'users'> instead of string.

```diff
+import type { Id } from "@/convex/_generated/dataModel";
 export async function getMessagesByChatId({
   chatId,
   userId,
   newMessages,
   regenerate,
   isPro,
 }: {
-  chatId: string;
-  userId: string;
+  chatId: Id<"chats">;
+  userId: Id<"users">;
   newMessages: UIMessage[];
   regenerate?: boolean;
   isPro: boolean;
 }) {
```

lib/rate-limit.ts (1)
21-35: Harden env parsing for limits. Use radix 10 or Number(), and fall back to the default when the value is NaN to avoid unexpected NaN from malformed envs.

```diff
-    requestLimit = parseInt(
-      process.env.AGENT_MODE_RATE_LIMIT_REQUESTS || "50",
-    );
+    requestLimit = Number(process.env.AGENT_MODE_RATE_LIMIT_REQUESTS ?? 50) || 50;
 ...
-    requestLimit = isPro
-      ? parseInt(process.env.PRO_RATE_LIMIT_REQUESTS || "100")
-      : parseInt(process.env.FREE_RATE_LIMIT_REQUESTS || "10");
+    requestLimit = isPro
+      ? Number(process.env.PRO_RATE_LIMIT_REQUESTS ?? 100) || 100
+      : Number(process.env.FREE_RATE_LIMIT_REQUESTS ?? 10) || 10;
```

app/api/chat/route.ts (2)
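The hardened parsing can be factored into a small helper; a sketch under the same assumptions (the name envInt is illustrative, not from this PR):

```typescript
// Parse an env var as a positive integer, falling back on NaN, infinities,
// and non-positive values rather than letting them leak into rate limits.
function envInt(value: string | undefined, fallback: number): number {
  const n = Number(value ?? fallback);
  return Number.isFinite(n) && n > 0 ? Math.floor(n) : fallback;
}
```

One helper keeps the fallback policy identical across all three limit variables instead of repeating the Number(...) || default pattern.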
86-87: Rate-limit before persisting to avoid storing blocked requests. Move checkRateLimit earlier (before handleInitialChatAndUserMessage) so we don't save chats/messages when over the limit.

```diff
-  // Get existing messages, merge with new messages, and truncate
-  const { truncatedMessages, chat, isNewChat } = await getMessagesByChatId({
+  // Enforce rate limit early
+  await checkRateLimit(userId, isPro, mode);
+
+  // Get existing messages, merge with new messages, and truncate
+  const { truncatedMessages, chat, isNewChat } = await getMessagesByChatId({
     chatId,
     userId,
     newMessages: messages,
     regenerate,
     isPro,
   });
-
-  // Check rate limit for the user with mode
-  await checkRateLimit(userId, isPro, mode);
```
193-206: Avoid double-saving the user message on stream finish. Depending on the AI SDK, messages may already include the last user message saved earlier. Filter to persist only non-user roles.

```diff
-    onFinish: async ({ messages }) => {
-      for (const message of messages) {
-        await saveMessage({
-          chatId,
-          userId,
-          message,
-        });
-      }
-    },
+    onFinish: async ({ messages }) => {
+      for (const message of messages) {
+        if (message.role !== "user") {
+          await saveMessage({
+            chatId,
+            userId,
+            message,
+          });
+        }
+      }
+    },
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- .env.local.example (1 hunks)
- app/api/chat/route.ts (4 hunks)
- lib/db/actions.ts (3 hunks)
- lib/rate-limit.ts (4 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- .env.local.example
🧰 Additional context used
📓 Path-based instructions (1)
**/*.{ts,tsx}
📄 CodeRabbit inference engine (.cursor/rules/convex_rules.mdc)
**/*.{ts,tsx}: Use Id helper type from ./_generated/dataModel to type document IDs (e.g., Id<'users'>) instead of string
When defining Record types, specify key and value types matching validators (e.g., Record<Id<'users'>, string>)
Be strict with types for document IDs; prefer Id<'table'> over string in function args and variables
Use as const for string literals in discriminated unions
Declare arrays with explicit generic type: const arr: Array<T> = [...]
Declare records with explicit generic types: const record: Record<KeyType, ValueType> = {...}
Files:
lib/rate-limit.ts, lib/db/actions.ts, app/api/chat/route.ts
🧬 Code graph analysis (3)
lib/rate-limit.ts (1)

- types/chat.ts (1): `ChatMode` (4-4)

lib/db/actions.ts (1)

- lib/utils/file-token-utils.ts (1): `truncateMessagesWithFileTokens` (93-105)

app/api/chat/route.ts (6)

- lib/rate-limit.ts (1): `checkRateLimit` (7-92)
- lib/chat/chat-processor.ts (1): `processChatMessages` (69-123)
- lib/db/actions.ts (1): `getUserCustomization` (224-238)
- app/posthog.js (1): `PostHogClient` (3-17)
- lib/ai/providers.ts (1): `createTrackedProvider` (28-56)
- lib/system-prompt.ts (1): `systemPrompt` (136-195)
🔇 Additional comments (7)
lib/db/actions.ts (2)
215-220: Truncation call aligns with the new API. Passing `isPro` to `truncateMessagesWithFileTokens` matches its signature and intent.
248-252: Forwarding `isPro` to the backend: good. Gates memory retrieval budget/selection by plan as intended.
lib/rate-limit.ts (1)
45-48: Mode-scoped keying: good. Using `${userId}:${mode}` isolates buckets per mode; appropriate with the new branching.

app/api/chat/route.ts (4)
74-75: isPro propagation to message fetch/truncation: good. Ensures plan-aware token budgeting end-to-end.

122-125: Tracked provider wiring: good. Switch to `createTrackedProvider` and model selection looks correct.

126-133: systemPrompt now plan-aware: good. Threading `isPro` into `systemPrompt` matches memory gating and token budgets.

135-144: No action required. OpenRouter's `provider.sort` accepts both "price" and "latency" via `providerOptions.openrouter`, per the OpenRouter provider routing docs and the Vercel AI SDK docs.
- Add handleRetry function that retries without deleting messages
- Update MessageErrorState to use onRetry instead of onRegenerate
- Pass handleRetry through Messages component to error state
- Fixes issue where 'Try Again' button deleted conversation history
Actionable comments posted: 0
🧹 Nitpick comments (4)
app/components/Messages.tsx (1)
32-33: Retry path wiring is correct; consider tightening the messageId type.

The new `onRetry` prop is properly plumbed and used by the error UI.
Optional: Align with the repo guideline to use Id<'table'> for doc IDs by tightening onEditMessage’s signature now that you’re touching this area.
Apply within this file:
```diff
 @@ interface MessagesProps {
   messages: ChatMessage[];
   setMessages: Dispatch<SetStateAction<ChatMessage[]>>;
   onRegenerate: () => void;
-  onRetry: () => void;
-  onEditMessage: (messageId: string, newContent: string) => Promise<void>;
+  onRetry: () => void;
+  onEditMessage: (messageId: Id<"messages">, newContent: string) => Promise<void>;
 @@ export const Messages = ({
   messages,
   setMessages,
   onRegenerate,
   onRetry,
```

Add the missing type import at the top of this file:
```ts
import type { Id } from "@/convex/_generated/dataModel";
```

Also applies to: 52-53, 335-336
app/hooks/useChatHandlers.ts (3)
71-79: Add a small headroom to token checks to avoid backend rejections.

Reserve ~10% of the limit to account for system prompts/metadata so client-side checks better match server behavior.
```diff
-    const maxTokens = hasProPlan ? MAX_TOKENS_PRO : MAX_TOKENS_FREE;
-    if (tokenCount > maxTokens) {
+    const maxTokens = hasProPlan ? MAX_TOKENS_PRO : MAX_TOKENS_FREE;
+    const budget = Math.floor(maxTokens * 0.9); // 10% headroom
+    if (tokenCount > budget) {
       const hasFiles = uploadedFiles.length > 0;
       const planText = hasProPlan ? "" : " (Free plan limit)";
       toast.error("Message is too long", {
-        description: `Your message is too large (${tokenCount.toLocaleString()} tokens). Please make it shorter${hasFiles ? " or remove some files" : ""}${planText}.`,
+        description: `Your message is too large (${tokenCount.toLocaleString()} tokens). Please make it shorter${hasFiles ? " or remove some files" : ""}${planText}.`,
       });
       return;
     }
```
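The headroom idea reduces to a tiny plan-aware predicate. A sketch, where the `MAX_TOKENS_*` values are placeholder assumptions (the real constants live in `lib/token-utils.ts`):

```typescript
// Sketch of a plan-aware budget check with ~10% headroom.
// MAX_TOKENS_* values here are assumptions for illustration only.
const MAX_TOKENS_PRO = 200_000;
const MAX_TOKENS_FREE = 32_000;
const HEADROOM = 0.9; // reserve 10% for system prompt and metadata

export function withinBudget(tokenCount: number, hasProPlan: boolean): boolean {
  const maxTokens = hasProPlan ? MAX_TOKENS_PRO : MAX_TOKENS_FREE;
  return tokenCount <= Math.floor(maxTokens * HEADROOM);
}
```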
172-180: Deduplicate regenerate/retry logic into a single helper.

Both handlers send the same regenerate payload; only deletion of the last assistant message differs. Reduce duplication and the chance of drift.
```diff
-  const handleRetry = async () => {
-    regenerate({
-      body: {
-        mode,
-        todos,
-        regenerate: true,
-      },
-    });
-  };
+  const handleRetry = async () => triggerRegenerate(false);
```

And adjust handleRegenerate similarly:
```diff
-  const handleRegenerate = async () => {
-    await deleteLastAssistantMessage({ chatId });
-
-    regenerate({
-      body: {
-        mode,
-        todos,
-        regenerate: true,
-      },
-    });
-  };
+  const handleRegenerate = async () => triggerRegenerate(true);
```

Add this helper near these handlers:
```ts
const triggerRegenerate = async (deleteLastAssistant = false) => {
  if (deleteLastAssistant) {
    await deleteLastAssistantMessage({ chatId });
  }
  regenerate({
    body: { mode, todos, regenerate: true },
  });
};
```
182-187: Use `Id<'messages'>` for messageId to remove the cast.

This aligns with the repo guideline and eliminates the explicit cast inside the function.
```diff
-  const handleEditMessage = async (messageId: string, newContent: string) => {
+  const handleEditMessage = async (messageId: Id<"messages">, newContent: string) => {
     await regenerateWithNewContent({
-      messageId: messageId as Id<"messages">,
+      messageId,
       newContent,
     });
```
📒 Files selected for processing (4)
- `app/components/MessageErrorState.tsx` (2 hunks)
- `app/components/Messages.tsx` (3 hunks)
- `app/components/chat.tsx` (2 hunks)
- `app/hooks/useChatHandlers.ts` (5 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.{ts,tsx}
📄 CodeRabbit inference engine (.cursor/rules/convex_rules.mdc)
**/*.{ts,tsx}: Use Id helper type from ./_generated/dataModel to type document IDs (e.g., Id<'users'>) instead of string
When defining Record types, specify key and value types matching validators (e.g., Record<Id<'users'>, string>)
Be strict with types for document IDs; prefer Id<'table'> over string in function args and variables
Use `as const` for string literals in discriminated unions
Declare arrays with explicit generic type: const arr: Array<T> = [...]
Declare records with explicit generic types: const record: Record<KeyType, ValueType> = {...}
Files:
- `app/components/MessageErrorState.tsx`
- `app/components/Messages.tsx`
- `app/components/chat.tsx`
- `app/hooks/useChatHandlers.ts`
🧬 Code graph analysis (2)
app/components/Messages.tsx (1)
app/components/MessageErrorState.tsx (1)
- `MessageErrorState` (12-45)
app/hooks/useChatHandlers.ts (1)
lib/token-utils.ts (3)
- `countInputTokens` (132-142)
- `MAX_TOKENS_PRO` (4-4)
- `MAX_TOKENS_FREE` (5-5)
🔇 Additional comments (3)
app/components/MessageErrorState.tsx (1)
9-10: Prop rename to `onRetry` is correctly threaded and bound. Callback type and usage look consistent with the rest of the UI. No issues.
Also applies to: 14-15, 34-35
app/components/chat.tsx (1)
237-238: Retry handler is correctly exposed to Messages. Destructuring `handleRetry` from `useChatHandlers` and passing `onRetry={handleRetry}` is consistent and non-breaking.
Also applies to: 318-319
app/hooks/useChatHandlers.ts (1)
7-11: Plan-aware token utilities import looks good. Imports are correct and match the new per-plan limits.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
app/api/chat/route.ts (1)
68-75: Add `isPro` to the `getMessagesByChatId` signature and update its call site.

The call in `app/api/chat/route.ts` now passes `isPro`, but the function in `lib/db/actions.ts` (around line 167) only destructures `chatId` and `userId`. Extend its parameter list to include `isPro` (and any other new fields) and ensure all callers match.
♻️ Duplicate comments (3)
lib/ai/providers.ts (2)
6-22: Type the provider map with a key union and `LanguageModelV1`. Strong typing catches mismatched model keys at compile time and improves IntelliSense.
Apply:
```diff
-import { customProvider } from "ai";
+import { customProvider, type LanguageModelV1 } from "ai";
 @@
-const baseProviders = {
+type ModelKey =
+  | "ask-model"
+  | "agent-model"
+  | "vision-model"
+  | "vision-base64-model"
+  | "title-generator-model";
+
+const baseProviders: Record<ModelKey, LanguageModelV1> = {
   "ask-model": openrouter(
     process.env.NEXT_PUBLIC_ASK_MODEL || "deepseek/deepseek-chat-v3.1",
   ),
   "agent-model": openrouter(
     process.env.NEXT_PUBLIC_AGENT_MODEL || "qwen/qwen3-coder",
   ),
   "vision-model": openrouter(
     process.env.NEXT_PUBLIC_VISION_MODEL || "google/gemini-2.5-flash",
   ),
   "vision-base64-model": openrouter(
     process.env.NEXT_PUBLIC_VISION_BASE64_MODEL || "google/gemini-2.5-flash",
   ),
   "title-generator-model": openrouter(
     process.env.NEXT_PUBLIC_TITLE_MODEL || "google/gemini-2.5-flash",
   ),
 };
```
39-41: Replace `any` with `LanguageModelV1` and the key union. Keeps the provider surface strictly typed and prevents accidental non-model assignments.
Apply:
```diff
-  const trackedModels: Record<string, any> = {};
+  const trackedModels = {} as Record<ModelKey, LanguageModelV1>;
```

app/api/chat/route.ts (1)
99-99: Duplicate: server-safe PostHog client. Same concern as in providers.ts: ensure the server uses the Node client.
🧹 Nitpick comments (4)
lib/ai/providers.ts (1)
53-56: Optional: declare an explicit return type. Helps downstream typing when consuming `createTrackedProvider`. If available, expose and use the provider type from `ai` (e.g., `ProviderV1`).

app/api/chat/route.ts (3)
90-95: Ensure `selectedModel` matches the provider key union. If `selectedModel` ever diverges from the provider keys, `languageModel(selectedModel)` will fail at runtime. Consider sharing a `ModelKey` union across the selection and provider layers. I can extract and wire this up if helpful.
125-125: Optional: add a defensive fallback for unknown models. If selection ever returns an unknown key, fall back to `"ask-model"` (or a safe default) to avoid 500s.

Example:
```diff
-      model: trackedProvider.languageModel(selectedModel),
+      model: trackedProvider.languageModel(
+        (["ask-model","agent-model","vision-model","vision-base64-model","title-generator-model"] as const)
+          .includes(selectedModel as any)
+          ? (selectedModel as any)
+          : "ask-model",
+      ),
```
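The same guard reads more cleanly as a small typed resolver. A hypothetical sketch (`resolveModelKey` is an invented name; the key list mirrors the provider map discussed in this review):

```typescript
// Hypothetical typed resolver for the fallback suggested above.
const MODEL_KEYS = [
  "ask-model",
  "agent-model",
  "vision-model",
  "vision-base64-model",
  "title-generator-model",
] as const;

type ModelKey = (typeof MODEL_KEYS)[number];

export function resolveModelKey(selected: string): ModelKey {
  // Unknown keys degrade to the safe default instead of a runtime 500.
  return (MODEL_KEYS as readonly string[]).includes(selected)
    ? (selected as ModelKey)
    : "ask-model";
}
```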
135-141: Sort by latency for Pro and by price for Free.

OpenRouter's `provider.sort` supports `"price"`, `"latency"`, and `"throughput"`. Update the code to explicitly set `"latency"` for Pro and `"price"` for Free:

```diff
 openrouter: {
   provider: {
-    ...(!isPro && {
-      sort: "price",
-    }),
+    ...(isPro
+      ? { sort: "latency" }
+      : { sort: "price" }),
   },
 },
```
📒 Files selected for processing (4)
- `app/api/chat/route.ts` (4 hunks)
- `app/components/Messages.tsx` (3 hunks)
- `app/components/chat.tsx` (2 hunks)
- `lib/ai/providers.ts` (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- app/components/chat.tsx
- app/components/Messages.tsx
🧰 Additional context used
📓 Path-based instructions (1)
**/*.{ts,tsx}
📄 CodeRabbit inference engine (.cursor/rules/convex_rules.mdc)
**/*.{ts,tsx}: Use Id helper type from ./_generated/dataModel to type document IDs (e.g., Id<'users'>) instead of string
When defining Record types, specify key and value types matching validators (e.g., Record<Id<'users'>, string>)
Be strict with types for document IDs; prefer Id<'table'> over string in function args and variables
Use `as const` for string literals in discriminated unions
Declare arrays with explicit generic type: const arr: Array<T> = [...]
Declare records with explicit generic types: const record: Record<KeyType, ValueType> = {...}
Files:
- `app/api/chat/route.ts`
- `lib/ai/providers.ts`
🧬 Code graph analysis (2)
app/api/chat/route.ts (5)
- lib/rate-limit.ts (1): `checkRateLimit` (7-92)
- lib/chat/chat-processor.ts (1): `processChatMessages` (69-123)
- lib/db/actions.ts (1): `getUserCustomization` (224-238)
- app/posthog.js (1): `PostHogClient` (3-17)
- lib/ai/providers.ts (1): `createTrackedProvider` (28-56)
lib/ai/providers.ts (1)
- app/posthog.js (1): `PostHogClient` (3-17)
🔇 Additional comments (7)
lib/ai/providers.ts (4)
3-5: Confirm server-safe PostHog client usage. This API route runs on the server; ensure `PostHogClient()` returns a Node-compatible client (e.g., from `posthog-node`), not `posthog-js`. Otherwise tracing may break in Edge/Node runtimes.

Would you confirm the imported `PostHog` in `app/posthog.js` comes from `posthog-node` for server code paths?
16-21: Base64 and non-base64 vision defaults are identical; confirm intent. If base64 inputs require a distinct model/route, using the same default may cause degraded results or failures.

Is `google/gemini-2.5-flash` confirmed to handle both URL and base64 image inputs equivalently in your usage?
24-26: LGTM: centralized provider wiring. Sourcing `languageModels` from the typed base map keeps `myProvider` minimal and consistent.
41-51: No changes needed: `withTracing` option names are correct. These keys (`posthogDistinctId`, `posthogProperties`, `posthogPrivacyMode`) align with the `@posthog/ai` documentation.

app/api/chat/route.ts (3)
31-31: LGTM: switch to tracked provider. Keeps analytics concerns inside the provider factory.
86-88: LGTM: mode-aware rate limiting. Passing `mode` and `isPro` aligns with the limiter's sliding-window config.
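As a sanity check on the mode-scoped keying, the limiter's shape can be sketched as an in-memory sliding window keyed by `${userId}:${mode}`. The window size, the limits, and the mode values below are illustrative assumptions; the real configuration lives in `lib/rate-limit.ts`:

```typescript
// Minimal in-memory sketch of a per-user, per-mode sliding-window limiter.
// Window size, limits, and mode values are assumptions for illustration.
type ChatMode = "ask" | "agent"; // assumed mode values

const WINDOW_MS = 60_000;
const LIMITS: Record<string, number> = {
  "free:ask": 10,
  "free:agent": 5,
  "pro:ask": 60,
  "pro:agent": 30,
};

const hits = new Map<string, Array<number>>(); // bucket key -> timestamps

export function checkRateLimit(
  userId: string,
  isPro: boolean,
  mode: ChatMode,
  now: number = Date.now(),
): boolean {
  const key = `${userId}:${mode}`; // mode-scoped bucket, as in the review
  const limit = LIMITS[`${isPro ? "pro" : "free"}:${mode}`];
  const recent = (hits.get(key) ?? []).filter((t) => now - t < WINDOW_MS);
  if (recent.length >= limit) return false; // over the limit for this window
  recent.push(now);
  hits.set(key, recent);
  return true;
}
```

Because the key includes the mode, exhausting the agent bucket leaves the ask bucket untouched, which is exactly the isolation the comment above calls out.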
122-123: LGTM: per-request traced provider with user/chat context. Good place to attach analytics metadata and privacy mode.
Summary by CodeRabbit
New Features
Removed Features
Chores