Conversation

@ngoiyaeric ngoiyaeric commented Aug 14, 2025

User description

This commit fixes two sources of duplicate output in the chat system:

  1. Duplicate user messages: The client-side components (ChatPanel and SearchRelated) were optimistically adding the user's message to the UI state before the server had processed it. This resulted in the message appearing twice. This commit removes the optimistic UI updates, making the server the single source of truth for the chat history.

  2. Duplicate assistant responses: In some situations the answerSection was being added to the UI stream twice. I've added a condition to prevent this, ensuring the final answer is rendered only once.
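The first fix can be sketched as a submit handler that appends only the server's response. The names below (Message, submitChat, setMessages) are illustrative stand-ins, not the project's actual API:

```typescript
// Illustrative sketch of fix 1: no optimistic user-message push on the
// client; the server response is the only entry appended, so the user's
// text cannot render twice. Types and names are stand-ins for the real app.

type Message = { id: number; role: 'user' | 'assistant'; content: string };

async function handleSubmit(
  query: string,
  submitChat: (q: string) => Promise<Message>,
  setMessages: (update: (prev: Message[]) => Message[]) => void
): Promise<void> {
  // Previously a local { role: 'user', content: query } entry was pushed
  // here before awaiting the server; that push has been removed.
  const responseMessage = await submitChat(query);
  setMessages(prev => [...prev, responseMessage]);
}
```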


PR Type

Bug fix


Description

  • Remove optimistic UI updates from chat components

  • Prevent duplicate assistant responses in streaming

  • Make server single source of truth for chat history

  • Fix duplicate message rendering in chat system


Diagram Walkthrough

flowchart LR
  A["Client Components"] -- "Remove optimistic updates" --> B["Server Processing"]
  B -- "Single source of truth" --> C["Chat History"]
  D["Assistant Responses"] -- "Add condition check" --> E["Prevent Duplicates"]

File Walkthrough

Relevant files
Bug fix
chat-panel.tsx
Remove optimistic user message updates                                     

components/chat-panel.tsx

  • Remove optimistic user message addition to UI state
  • Keep only server response message handling
  • Simplify form submission flow
+0/-7     
search-related.tsx
Remove optimistic message handling in search                         

components/search-related.tsx

  • Remove optimistic user message creation and addition
  • Simplify message handling to only add server response
  • Clean up form submission logic
+1/-10   
researcher.tsx
Prevent duplicate assistant response streaming                     

lib/agents/researcher.tsx

  • Add condition to prevent duplicate answerSection additions
  • Check useSpecificModel flag before updating UI stream
  • Fix duplicate assistant response rendering
+5/-1     

Summary by CodeRabbit

  • New Features
    • Copilot now streams inquiry details live, updating questions, options, and inputs as they arrive.
    • Enhanced responses: concise formatting with citations, Markdown, and improved link/image outputs; auto-matches user language.
  • UX Improvements
    • Actions (Send/Skip) are disabled while data is loading to prevent duplicate submissions.
    • Chat, follow-up, and search panels no longer insert a separate “your message” entry on submit; only assistant responses appear.
  • Bug Fixes
    • Prevents an initial UI flicker when using specific models during the first response update.

vercel bot commented Aug 14, 2025

The latest updates on your projects.

Project: qcx · Deployment: Ready · Preview · Comment · Updated (UTC): Aug 14, 2025 10:02am

@CLAassistant

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.
You have signed the CLA already but the status is still pending? Let us recheck it.

coderabbitai bot commented Aug 14, 2025

Walkthrough

Shifts chat UI to stop appending synthetic user messages on submit, adopts a streamed inquiry model for Copilot, refactors the inquire agent to drive UI via a streamable value with an expanded system prompt, and gates an initial researcher UI update based on model usage.

Changes

  • User message UI removal — components/chat-panel.tsx, components/followup-panel.tsx, components/search-related.tsx
    Removed creation/appending of local userMessage entries during submit; now only appends responseMessage to messages. No exported API changes.
  • Copilot streamed inquiry — components/copilot.tsx
    Switched the inquiry prop to StreamableValue<PartialInquiry> using useStreamableValue; updated references from value.* to data.*; removed the MCP-related argument from submit; disabled actions based on pending. Public prop signature changed.
  • Inquire streaming refactor + prompt — lib/agents/inquire.tsx
    Replaced per-chunk UI updates with createStreamableValue driving Copilot; added try/finally with objectStream.done(); expanded the system prompt with formatting/citation rules; returns Promise<any>.
  • Researcher initial update gating — lib/agents/researcher.tsx
    Added a condition to suppress the first text-delta UI update when a specific model is used; rest of flow unchanged.
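The streamed-inquiry pattern described above can be sketched with a tiny in-memory stand-in for createStreamableValue. The stub below only mimics the producer-side shape of the 'ai/rsc' helper; it is not the real API:

```typescript
// Minimal stand-in for a streamable value: the producer pushes partial
// inquiry objects and must always close the stream, even on error, so the
// consuming Copilot component never waits forever.

type PartialInquiry = { question?: string; options?: string[] };

interface StreamStub<T> {
  updates: T[];
  closed: boolean;
  update(v: T): void;
  done(): void;
}

function createStreamStub<T>(): StreamStub<T> {
  const s: StreamStub<T> = {
    updates: [],
    closed: false,
    update(v: T) { s.updates.push(v); },
    done() { s.closed = true; }
  };
  return s;
}

// Mirrors the try/finally shape adopted in lib/agents/inquire.tsx: the
// finally block guarantees done() runs whether generation succeeds or throws.
function runInquiry(stream: StreamStub<PartialInquiry>): void {
  try {
    stream.update({ question: 'What would you like to know more about?' });
    stream.update({
      question: 'What would you like to know more about?',
      options: ['News', 'Documentation']
    });
  } finally {
    stream.done();
  }
}
```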

Sequence Diagram(s)

sequenceDiagram
  participant User
  participant Copilot UI
  participant objectStream
  participant Inquire Agent
  participant Backend

  User->>Copilot UI: Open Copilot
  Inquire Agent->>objectStream: createStreamableValue()
  Copilot UI->>objectStream: subscribe (useStreamableValue)
  Inquire Agent->>objectStream: update(partial inquiry chunks)
  objectStream-->>Copilot UI: streamed data updates
  User->>Copilot UI: Submit form
  Copilot UI->>Backend: submit(formData, skip)
  Backend-->>Copilot UI: response
  Copilot UI->>Copilot UI: append response only (no user echo)
  Inquire Agent->>objectStream: done()
sequenceDiagram
  participant User
  participant Chat Component
  participant Messages State

  User->>Chat Component: Submit input
  Chat Component->>Messages State: (no user message appended)
  Chat Component-->>Messages State: append responseMessage when received

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Suggested labels

Review effort 3/5

Poem

A twitch of ears, a stream of thought,
No more echo—responses caught.
Inquiry flows in gentle beams,
Carrots of data, crunchy streams. 🥕
Prompts expanded, models wise—
I thump approval, bright-eyed surprise!

@codiumai-pr-agent-free

PR Reviewer Guide 🔍

Here are some key observations to aid the review process:

⏱️ Estimated effort to review: 2 🔵🔵⚪⚪⚪
🧪 No relevant tests
🔒 No security concerns identified
⚡ Recommended focus areas for review

Logic Validation

The condition to prevent duplicate answerSection now checks for !useSpecificModel. Verify this is the correct behavior and that specific models should not update the UI stream in the same way.

if (
  fullResponse.length === 0 &&
  delta.textDelta.length > 0 &&
  !useSpecificModel
) {
  // Update the UI
  uiStream.update(answerSection)
}

@qodo-code-review

PR Reviewer Guide 🔍

Here are some key observations to aid the review process:

⏱️ Estimated effort to review: 2 🔵🔵⚪⚪⚪
🧪 No relevant tests
🔒 No security concerns identified
⚡ Recommended focus areas for review

Type Safety

The cast responseMessage as any when appending to setMessages hides potential mismatches between the server message shape and the UI's expected message type. Consider proper typing to prevent runtime issues.

const responseMessage = await submit(formData)
setMessages(currentMessages => [...currentMessages, responseMessage as any])
Unused Variable

The variable query is assigned but no longer used after removing the optimistic user message, which is likely dead code.

if (submitter) {
  formData.append(submitter.name, submitter.value)
  query = submitter.value
}
Logic Change Risk

The added condition !useSpecificModel alters when answerSection is first pushed to the UI stream. Validate that this doesn't suppress expected UI updates for specific models that still stream text deltas.

if (
  fullResponse.length === 0 &&
  delta.textDelta.length > 0 &&
  !useSpecificModel
) {
  // Update the UI
  uiStream.update(answerSection)
}
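One way to address the `responseMessage as any` cast flagged above is a runtime type guard. UIMessage and isUIMessage below are hypothetical names, sketched only to show the shape of the fix:

```typescript
// Hypothetical guard replacing `responseMessage as any`: only append values
// that actually match the UI's expected message shape.

type UIMessage = { id: string; component: string };

function isUIMessage(value: unknown): value is UIMessage {
  return (
    typeof value === 'object' &&
    value !== null &&
    typeof (value as UIMessage).id === 'string' &&
    typeof (value as UIMessage).component === 'string'
  );
}

function appendResponse(messages: UIMessage[], candidate: unknown): UIMessage[] {
  // Silently dropping invalid entries is one policy; logging or surfacing
  // an error entry in the UI would also be reasonable.
  return isUIMessage(candidate) ? [...messages, candidate] : messages;
}
```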

codiumai-pr-agent-free bot commented Aug 14, 2025

PR Code Suggestions ✨

No code suggestions found for the PR.

@qodo-code-review

PR Code Suggestions ✨

Explore these optional code suggestions:

General
Handle submit failures safely

A failure in submit will reject and prevent setMessages, leaving the UI without
feedback. Add try/catch and optionally guard against undefined responseMessage
to avoid inserting invalid entries and to keep UX consistent.

components/search-related.tsx [45-46]

-const responseMessage = await submit(formData)
-setMessages(currentMessages => [...currentMessages, responseMessage])
+try {
+  const responseMessage = await submit(formData)
+  if (responseMessage) {
+    setMessages(currentMessages => [...currentMessages, responseMessage])
+  }
+} catch (err) {
+  console.error(err)
+  // optionally show an error message UI entry here
+}
Suggestion importance[1-10]: 7


Why: The suggestion correctly identifies that an unhandled error in the submit function would crash the component; adding a try/catch block is a valid improvement for robustness and user experience.

Impact: Medium
Normalize boolean guard

The check !useSpecificModel may be undefined if useSpecificModel is not
explicitly set, causing unintended UI updates. Normalize it to a boolean and
document intent to avoid accidental duplicate renders. This ensures consistent
behavior regardless of how the parameter is passed.

lib/agents/researcher.tsx [74-81]

+const shouldUseSpecificModel = Boolean(useSpecificModel);
+...
 if (
   fullResponse.length === 0 &&
   delta.textDelta.length > 0 &&
-  !useSpecificModel
+  !shouldUseSpecificModel
 ) {
-  // Update the UI
   uiStream.update(answerSection)
 }


Suggestion importance[1-10]: 2


Why: The suggested change is functionally identical to the existing code, as !useSpecificModel and !Boolean(useSpecificModel) evaluate the same way for undefined or boolean values; it only offers a minor readability improvement.

Impact: Low
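The reviewer's equivalence claim above is easy to check directly; the two guards below reproduce the original and the suggested form:

```typescript
// The original guard and the normalized guard behave identically for every
// value the flag can take (true, false, undefined), which is why the
// suggestion amounts to a readability change only.

function guardOriginal(useSpecificModel?: boolean): boolean {
  return !useSpecificModel;
}

function guardNormalized(useSpecificModel?: boolean): boolean {
  return !Boolean(useSpecificModel);
}
```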

I've addressed two issues with this latest commit:

1.  **Question preview not working:** A recent change to fix duplicate output inadvertently broke the question preview functionality. This was due to a misunderstanding of how to handle streamable values on the server. I've refactored the `Copilot` component and related server-side code to use streamable values correctly, which restores the question preview.

2.  **Duplicate user messages in followup panel:** The `FollowupPanel` component was still using an optimistic UI update, which could cause duplicate user messages. I removed this optimistic update, making it consistent with the other input components.

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
components/search-related.tsx (1)

35-42: Unused variable query after removing user message logic.

The query variable is extracted from the form but is no longer used after removing the user message creation logic.

Consider removing the unused variable to clean up the code:

-    // // Get the submitter of the form
-    const submitter = (event.nativeEvent as SubmitEvent)
-      .submitter as HTMLInputElement
-    let query = ''
-    if (submitter) {
-      formData.append(submitter.name, submitter.value)
-      query = submitter.value
-    }
+    // Get the submitter of the form
+    const submitter = (event.nativeEvent as SubmitEvent)
+      .submitter as HTMLInputElement
+    if (submitter) {
+      formData.append(submitter.name, submitter.value)
+    }
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro


📥 Commits

Reviewing files that changed from the base of the PR and between 246c955 and 281d5be.

📒 Files selected for processing (6)
  • components/chat-panel.tsx (0 hunks)
  • components/copilot.tsx (3 hunks)
  • components/followup-panel.tsx (1 hunks)
  • components/search-related.tsx (1 hunks)
  • lib/agents/inquire.tsx (1 hunks)
  • lib/agents/researcher.tsx (1 hunks)
💤 Files with no reviewable changes (1)
  • components/chat-panel.tsx
🧰 Additional context used
🧬 Code Graph Analysis (4)
components/search-related.tsx (5)
lib/actions/chat-db.ts (1)
  • msg (117-121)
components/chat-panel.tsx (3)
  • e (47-64)
  • ChatPanel (21-187)
  • currentMessages (53-59)
lib/actions/chat.ts (1)
  • msg (119-127)
app/actions.tsx (2)
  • state (290-340)
  • message (347-476)
components/chat-messages.tsx (1)
  • ChatMessagesProps (7-9)
components/followup-panel.tsx (2)
components/chat-panel.tsx (1)
  • e (47-64)
app/actions.tsx (2)
  • submit (35-249)
  • state (290-340)
components/copilot.tsx (2)
components/copilot-display.tsx (2)
  • CopilotDisplay (12-30)
  • CopilotDisplayProps (8-10)
app/actions.tsx (1)
  • submit (35-249)
lib/agents/inquire.tsx (7)
components/copilot.tsx (1)
  • Copilot (25-186)
lib/utils/index.ts (1)
  • getModel (19-62)
lib/schema/inquiry.tsx (1)
  • inquirySchema (4-20)
app/actions.tsx (3)
  • processEvents (103-239)
  • submit (35-249)
  • aiState (343-478)
lib/agents/writer.tsx (1)
  • writer (7-48)
lib/agents/query-suggestor.tsx (1)
  • querySuggestor (8-50)
lib/agents/tools/index.tsx (1)
  • ToolProps (7-11)
🔇 Additional comments (8)
lib/agents/researcher.tsx (2)

74-78: Condition correctly prevents duplicate assistant responses.

The added condition checking !useSpecificModel prevents the answerSection from being added to the UI stream when using a specific model, effectively avoiding duplicate assistant responses in the chat. This aligns well with the PR's objective to fix duplicate output issues.


92-94: Gating is consistent with the new condition.

Line 93 appends the answerSection when tool responses are received. The guard there already includes the !useSpecificModel check alongside the other conditions:

if (!useSpecificModel && toolResponses.length === 0 && delta.result) {
  uiStream.append(answerSection)
}

The logic is correct: the section is only appended when a specific model is not in use, matching the gating added at lines 74-78.
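The dedup gate discussed above can be illustrated with a simplified streaming loop. This is a sketch, not the project's actual researcher code; the answerSection is represented by a plain string:

```typescript
// Simplified model of the researcher stream: the answer section is mounted
// into the UI exactly once, on the first non-empty text delta, and only when
// a specific model is NOT in use (a specific model renders it elsewhere).

type TextDelta = { textDelta: string };

function streamToUi(deltas: TextDelta[], useSpecificModel: boolean): string[] {
  const ui: string[] = [];
  let fullResponse = '';
  for (const delta of deltas) {
    if (
      fullResponse.length === 0 &&
      delta.textDelta.length > 0 &&
      !useSpecificModel
    ) {
      ui.push('answerSection'); // fires at most once: fullResponse grows below
    }
    fullResponse += delta.textDelta;
  }
  return ui;
}
```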

components/search-related.tsx (1)

46-46: Successfully removes duplicate user messages.

The change correctly removes the optimistic user message creation, making the server the single source of truth for chat history. Only the responseMessage is now appended to the messages state, which aligns with the PR's objective to fix duplicate output.

components/followup-panel.tsx (1)

24-24: Correctly eliminates duplicate user messages in follow-up panel.

The removal of optimistic user message creation ensures that only server-processed responses are added to the chat history, successfully addressing the duplicate output issue.

components/copilot.tsx (2)

19-26: Successful migration to streaming data model.

The component now properly consumes StreamableValue<PartialInquiry> and uses useStreamableValue to read the streaming data. This change aligns with the new streaming architecture and helps prevent duplicate outputs by relying on server-controlled data flow.


125-161: Correctly references streaming data throughout the component.

All UI references have been properly updated from value.* to data.* to work with the new streaming data model. The changes are consistent and maintain the component's functionality while supporting the new architecture.

lib/agents/inquire.tsx (2)

11-12: Well-implemented streaming pattern for Copilot UI.

The introduction of objectStream and immediate UI update with objectStream.value establishes a clean streaming pattern. This ensures the Copilot component receives updates through a StreamableValue, which is properly consumed in the component.


15-36: Robust error handling with guaranteed stream cleanup.

The try/finally block ensures objectStream.done() is always called, preventing potential memory leaks or hanging streams. This is a best practice for stream management.

Comment on lines +18 to +23
system: `As a professional writer, your job is to generate a comprehensive and informative, yet concise answer of 400 words or less for the given question based solely on the provided search results (URL and content). You must only use information from the provided search results. Use an unbiased and journalistic tone. Combine search results together into a coherent answer. Do not repeat text. If there are any images relevant to your answer, be sure to include them as well. Aim to directly address the user's question, augmenting your response with insights gleaned from the search results.
Whenever quoting or referencing information from a specific URL, always cite the source URL explicitly. Please match the language of the response to the user's language.
Always answer in Markdown format. Links and images must follow the correct format.
Link format: [link text](url)
Image format: ![alt text](url)
`,

⚠️ Potential issue

System prompt doesn't match the inquiry context.

The system prompt appears to be copied from a writer/researcher agent and talks about "search results", "URLs", and generating comprehensive answers. However, this inquire function is meant to generate inquiry questions for the Copilot UI to gather more information from the user, not to answer questions based on search results.

The system prompt should be focused on generating clarifying questions or gathering additional context from the user. Consider updating it to something like:

-      system: `As a professional writer, your job is to generate a comprehensive and informative, yet concise answer of 400 words or less for the given question based solely on the provided search results (URL and content). You must only use information from the provided search results. Use an unbiased and journalistic tone. Combine search results together into a coherent answer. Do not repeat text. If there are any images relevant to your answer, be sure to include them as well. Aim to directly address the user's question, augmenting your response with insights gleaned from the search results.
-    Whenever quoting or referencing information from a specific URL, always cite the source URL explicitly. Please match the language of the response to the user's language.
-    Always answer in Markdown format. Links and images must follow the correct format.
-    Link format: [link text](url)
-    Image format: ![alt text](url)
-    `,
+      system: `As an AI assistant, your task is to generate a clarifying inquiry when the user's request needs more specific information. Create a focused question with relevant options that help narrow down what the user is looking for. The inquiry should be clear, concise, and directly related to the user's original request. Match the language of the inquiry to the user's language.`,
🤖 Prompt for AI Agents
In lib/agents/inquire.tsx around lines 18 to 23 the system prompt is incorrect
for the inquire agent: it instructs the agent to synthesize answers from search
results instead of producing clarifying questions for the Copilot UI. Replace
the prompt with one that instructs the agent to generate concise,
context-gathering clarifying questions (matching the user language), tailored to
the Copilot UI flow, avoid instructing the agent to use search results or
produce final answers, do not force Markdown or strict word counts, and ensure
the prompt emphasizes brevity, optional follow-ups, and that questions are
actionable for the user to provide missing info.

@ngoiyaeric ngoiyaeric closed this Aug 24, 2025
@ngoiyaeric ngoiyaeric deleted the fix-duplicate-output branch September 10, 2025 07:07