
Conversation

Contributor

@rossmanko rossmanko commented Sep 21, 2025

Summary by CodeRabbit

  • New Features

    • Pro users now get adaptive moderation with a broader range of allowed content, plus contextual authorization when appropriate.
    • Agent mode includes enhanced reasoning options; other modes remain streamlined.
  • Bug Fixes

    • Chat message fetching now handles access issues gracefully, returning empty results instead of failing.
  • Chores

    • Updated default AI models for general chat, vision, and title generation to newer versions for better quality and consistency.


vercel bot commented Sep 21, 2025

The latest updates on your projects. Learn more about Vercel for GitHub.

Project     Deployment    Preview    Comments    Updated (UTC)
hackerai    Ready         Preview    Comment     Sep 21, 2025 8:36pm


coderabbitai bot commented Sep 21, 2025

Walkthrough

Conditional inclusion of reasoning options in chat API for agent mode. Broadened message fetch error handling to include unauthorized access. Updated model/provider defaults for ask, vision, and title generation. Moderation flow now gated by isPro, with updated signatures and thresholds to adjust uncensoring behavior for Pro users.

Changes

  • Chat API reasoning options — app/api/chat/route.ts: Includes reasoningSummary/reasoningEffort in providerOptions only when mode === "agent"; otherwise they are omitted.
  • Moderation gating and thresholds — lib/chat/chat-processor.ts, lib/moderation.ts: Moderation runs only for isPro; getModerationResult and determineShouldUncensorResponse now accept isPro, adjusting the max moderation threshold (0.98 for Pro, 0.9 otherwise). The auth message is added only when shouldUncensorResponse is true, for Pro users.
  • Message access error handling — convex/messages.ts: Treats CHAT_NOT_FOUND and CHAT_UNAUTHORIZED the same, returning empty results with isDone true and an empty continueCursor; comments updated.
  • Provider/model defaults — lib/ai/providers.ts: ask-model default updated to "deepseek/deepseek-chat-v3.1"; vision-model and title-generator-model switched to OpenAI with defaults "gpt-4.1-2025-04-14" and "gpt-4.1-mini-2025-04-14".

Sequence Diagram(s)

sequenceDiagram
  autonumber
  actor User
  participant ChatAPI as Chat API
  participant ChatProc as Chat Processor
  participant Moderation as Moderation
  participant Provider as Model Provider

  User->>ChatAPI: Send chat request (mode, messages)
  note over ChatAPI: If mode === "agent", include reasoning options
  ChatAPI->>ChatProc: Process(messages, isPro)

  alt isPro
    ChatProc->>Moderation: getModerationResult(messages, isPro=true)
    Moderation-->>ChatProc: { shouldUncensorResponse, ... }
    opt shouldUncensorResponse
      ChatProc->>ChatProc: addAuthMessage
    end
  else not Pro
    ChatProc->>ChatProc: Skip moderation
  end

  ChatProc->>Provider: Generate response
  Provider-->>ChatProc: Model output
  ChatProc-->>ChatAPI: Final response
  ChatAPI-->>User: Return result
sequenceDiagram
  autonumber
  actor Client
  participant Messages as convex/messages.getMessagesByChatId
  Client->>Messages: Fetch(chatId, cursor)
  alt Chat OK
    Messages-->>Client: Messages + continueCursor
  else CHAT_NOT_FOUND or CHAT_UNAUTHORIZED
    note over Messages: Access errors handled uniformly
    Messages-->>Client: [] with isDone=true, continueCursor=""
  end

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Poem

I nudge the knobs from burrowed lair,
Flip Pro switch—moderation’s fair.
New models hum, a softer light,
Agent mode dons reason bright.
If doors are closed, we hop on by—
Empty basket, clear blue sky.
Thump-thump: ship it, swift and spry! 🐇✨

Pre-merge checks and finishing touches

❌ Failed checks (1 warning, 1 inconclusive)
  • Docstring Coverage — ⚠️ Warning: Docstring coverage is 25.00%, below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.
  • Title Check — ❓ Inconclusive: The PR title "Daily branch 2025 09 21" is a generic branch/date label that does not describe the substantive changes in this request (conditional agent reasoning options, updated provider/model defaults, expanded chat error handling, and moderation gated to Pro users), so it is too vague for teammates scanning history. Resolution: rename the title to a concise single-sentence summary of the primary change (for example: "Gate moderation to Pro users; update default models and make agent reasoning conditional"), or choose the single most important change as the title and include other details in the PR description.
✅ Passed checks (1 passed)
  • Description Check — ✅ Passed: Check skipped; CodeRabbit's high-level summary is enabled.
✨ Finishing touches
  • 📝 Generate Docstrings
🧪 Generate unit tests
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch daily-branch-2025-09-21

📜 Recent review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b765a1f and 8188687.

📒 Files selected for processing (5)
  • app/api/chat/route.ts (1 hunks)
  • convex/messages.ts (1 hunks)
  • lib/ai/providers.ts (1 hunks)
  • lib/chat/chat-processor.ts (1 hunks)
  • lib/moderation.ts (4 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.{ts,tsx}

📄 CodeRabbit inference engine (.cursor/rules/convex_rules.mdc)

**/*.{ts,tsx}: Use Id helper type from ./_generated/dataModel to type document IDs (e.g., Id<'users'>) instead of string
When defining Record types, specify key and value types matching validators (e.g., Record<Id<'users'>, string>)
Be strict with types for document IDs; prefer Id<'table'> over string in function args and variables
Use as const for string literals in discriminated unions
Declare arrays with explicit generic type: const arr: Array<ElementType> = [...]
Declare records with explicit generic types: const record: Record<KeyType, ValueType> = {...}

Files:

  • app/api/chat/route.ts
  • lib/moderation.ts
  • convex/messages.ts
  • lib/ai/providers.ts
  • lib/chat/chat-processor.ts
convex/**/*.ts

📄 CodeRabbit inference engine (.cursor/rules/convex_rules.mdc)

convex/**/*.ts: Always use the new Convex function syntax (query/mutation/action objects with args/returns/handler) when defining Convex functions
When a function returns null, include returns: v.null() and return null explicitly
Use internalQuery/internalMutation/internalAction for private functions callable only by other Convex functions; do not expose sensitive logic via public query/mutation/action
Use query/mutation/action only for public API functions
Do not try to register functions via the api or internal objects
Always include argument and return validators for all Convex functions (query/internalQuery/mutation/internalMutation/action/internalAction)
In JS implementations, functions without an explicit return value implicitly return null
Use ctx.runQuery from queries/mutations/actions to call a query
Use ctx.runMutation from mutations/actions to call a mutation
Use ctx.runAction from actions to call an action
Only call an action from another action when crossing runtimes (e.g., V8 to Node); otherwise extract shared helper code
Minimize calls from actions to queries/mutations to avoid race conditions from splitting transactions
Pass FunctionReference values (from api/internal) to ctx.runQuery/ctx.runMutation/ctx.runAction; do not pass function implementations
When calling a function in the same file via ctx.run*, add an explicit return type annotation at the call site to avoid TS circularity
Use the generated api object for public functions and internal object for internal functions from convex/_generated/api.ts
Respect file-based routing for function references: e.g., convex/example.ts export f -> api.example.f; nested paths map to dot-separated namespaces
For paginated queries use paginationOptsValidator in args and .paginate(args.paginationOpts) on a query
v.bigint() is deprecated; use v.int64() for signed 64-bit integers
Use v.record(keys, values) for record-like data; v.map() and v.set() are not supported
For full-text search, use withSearchIndex("ind...

Files:

  • convex/messages.ts
🧬 Code graph analysis (1)
lib/chat/chat-processor.ts (1)
lib/moderation.ts (1)
  • getModerationResult (5-62)
🔇 Additional comments (8)
convex/messages.ts (1)

216-221: LGTM! Enhanced error handling for unauthorized access.

The expansion to handle both CHAT_NOT_FOUND and CHAT_UNAUTHORIZED error codes is appropriate and aligns with the broader authorization changes in the PR. This ensures that unauthorized access attempts are handled gracefully with empty results rather than propagating errors to the frontend.
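
For readers outside the diff, a minimal Convex sketch of this uniform handling follows. The index name "by_chat", the userId/subject comparison, and the overall access check are assumptions; only the idea that both CHAT_NOT_FOUND and CHAT_UNAUTHORIZED collapse into an empty, finished page comes from the summary.

```ts
// Hedged sketch only: index name "by_chat", the userId field, and the access
// check are assumptions; the empty-page result shape mirrors the PR summary,
// not the real convex/messages.ts.
import { query } from "./_generated/server";
import { v } from "convex/values";
import { paginationOptsValidator } from "convex/server";

export const getMessagesByChatId = query({
  args: { chatId: v.id("chats"), paginationOpts: paginationOptsValidator },
  handler: async (ctx, args) => {
    const chat = await ctx.db.get(args.chatId);
    const identity = await ctx.auth.getUserIdentity();

    // CHAT_NOT_FOUND and CHAT_UNAUTHORIZED cases are treated uniformly:
    // return an empty, finished page instead of propagating an error.
    if (!chat || !identity || chat.userId !== identity.subject) {
      return { page: [], isDone: true, continueCursor: "" };
    }

    return await ctx.db
      .query("messages")
      .withIndex("by_chat", (q) => q.eq("chatId", args.chatId))
      .order("desc")
      .paginate(args.paginationOpts);
  },
});
```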

app/api/chat/route.ts (1)

174-177: Conditional reasoning parameters for agent mode look correct.

The reasoningSummary: "detailed" and reasoningEffort: "medium" parameters are valid OpenAI reasoning model settings, and the conditional inclusion only when mode === "agent" is appropriate given the PR objective to enable reasoning for agent mode specifically.
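
To make the conditional concrete, here is an illustrative TypeScript shape of the gating. Only the option names/values and the mode === "agent" condition come from the diff; the streamText call, helper name, and model id are assumptions.

```ts
// Illustrative only: the option names and the agent-mode condition come from
// the review; the surrounding streamText call and model id are assumptions.
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";

type ChatMode = "agent" | "ask";

const reasoningOptionsFor = (mode: ChatMode) =>
  mode === "agent"
    ? { openai: { reasoningSummary: "detailed", reasoningEffort: "medium" } }
    : undefined; // other modes send no reasoning options at all

export const respond = (mode: ChatMode, prompt: string) =>
  streamText({
    model: openai("o4-mini"), // hypothetical reasoning-capable model
    prompt,
    providerOptions: reasoningOptionsFor(mode),
  });
```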

lib/moderation.ts (3)

7-7: Function signature updated appropriately.

Adding the isPro parameter aligns the signature with the conditional moderation logic and follows the coding pattern established across the PR.


47-47: Consistent parameter passing.

The isPro parameter is correctly passed through to determineShouldUncensorResponse.


124-124: Pro user moderation threshold adjustment implemented correctly.

The conditional moderation level based on isPro status (0.98 for Pro users vs 0.9 for free users) appropriately relaxes content restrictions for paying users while maintaining safety guardrails.

Also applies to: 145-145
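
As a sketch of what the threshold switch might look like: the 0.98/0.9 values and the isPro parameter come from the diff, while the category-score shape and the direction of the comparison are assumptions about how the real determineShouldUncensorResponse works.

```ts
// Sketch under assumptions: only the 0.98 / 0.9 thresholds and the isPro
// parameter are from the diff; the score shape and comparison are illustrative.
type CategoryScores = Record<string, number>;

const PRO_MAX_MODERATION_LEVEL = 0.98;
const DEFAULT_MAX_MODERATION_LEVEL = 0.9;

export function determineShouldUncensorResponse(
  categoryScores: CategoryScores,
  isPro: boolean
): boolean {
  const maxModerationLevel = isPro
    ? PRO_MAX_MODERATION_LEVEL
    : DEFAULT_MAX_MODERATION_LEVEL;

  // A higher ceiling for Pro means more borderline content is allowed through
  // before moderation refuses to uncensor the response.
  const highestScore = Math.max(0, ...Object.values(categoryScores));
  return highestScore < maxModerationLevel;
}
```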

lib/chat/chat-processor.ts (1)

102-110: Proper gating of moderation logic for Pro users.

The moderation flow is now correctly gated behind the isPro check, and the getModerationResult call includes the required isPro parameter. The authorization message addition is appropriately conditional on both Pro status and moderation results.
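
A compressed sketch of that gating is below. The isPro gate and the shouldUncensorResponse flag come from the summary; getModerationResult's full argument list, the message shape, and the injected system text are assumptions.

```ts
// Sketch only: the real getModerationResult lives in lib/moderation.ts, but
// its full signature, the message shape, and the injected system text are
// assumptions here.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

declare function getModerationResult(
  messages: ChatMessage[],
  isPro: boolean
): Promise<{ shouldUncensorResponse: boolean }>;

export async function applyModeration(
  messages: ChatMessage[],
  isPro: boolean
): Promise<ChatMessage[]> {
  // Free-tier requests skip moderation entirely.
  if (!isPro) return messages;

  const { shouldUncensorResponse } = await getModerationResult(messages, isPro);

  // The authorization message is added only for Pro users, and only when
  // moderation decided the response may be uncensored.
  if (shouldUncensorResponse) {
    return [
      { role: "system", content: "User is authorized for unrestricted assistance." },
      ...messages,
    ];
  }
  return messages;
}
```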

lib/ai/providers.ts (2)

9-9: deepseek-chat-v3.1 availability confirmed. The OpenRouter model list includes deepseek/deepseek-chat-v3.1 (and deepseek-v3.1-base), so the updated model string is valid.


12-17: Models verified — no action required

gpt-4.1-2025-04-14 and gpt-4.1-mini-2025-04-14 are valid OpenAI GPT‑4.1 family model identifiers (released Apr 14, 2025).
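
For orientation, a sketch of how the updated defaults could be wired: only the three model ids come from the diff; the provider construction, the env-var names, and routing the DeepSeek model through OpenRouter's OpenAI-compatible endpoint are assumptions about lib/ai/providers.ts.

```ts
// Only the three model ids are from the diff; provider construction, env-var
// names, and the OpenRouter routing shown here are assumptions.
import { createOpenAI } from "@ai-sdk/openai";

// OpenRouter exposes an OpenAI-compatible API, so an assumed setup routes the
// DeepSeek ask-model through it; the real file may use a dedicated provider.
const openrouter = createOpenAI({
  baseURL: "https://openrouter.ai/api/v1",
  apiKey: process.env.OPENROUTER_API_KEY,
});

const openai = createOpenAI({ apiKey: process.env.OPENAI_API_KEY });

export const defaultModels = {
  // Hypothetical env overrides; the quoted ids are the new defaults from this PR.
  "ask-model": openrouter(process.env.ASK_MODEL ?? "deepseek/deepseek-chat-v3.1"),
  "vision-model": openai(process.env.VISION_MODEL ?? "gpt-4.1-2025-04-14"),
  "title-generator-model": openai(
    process.env.TITLE_GENERATOR_MODEL ?? "gpt-4.1-mini-2025-04-14"
  ),
};
```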


Comment @coderabbitai help to get the list of available commands and usage tips.

@rossmanko rossmanko merged commit 458fcd8 into main Sep 21, 2025
3 checks passed
This was referenced Oct 3, 2025
@coderabbitai coderabbitai bot mentioned this pull request Nov 9, 2025
@coderabbitai coderabbitai bot mentioned this pull request Nov 20, 2025
@coderabbitai coderabbitai bot mentioned this pull request Dec 7, 2025
