Fix duplicate output in chat system #217
Conversation
This commit fixes two sources of duplicate output in the chat system:

1. **Duplicate user messages:** The client-side components (`ChatPanel` and `SearchRelated`) were optimistically adding user messages to the UI state before the server had processed them, so each message appeared twice. This commit removes the optimistic UI updates, making the server the single source of truth for the chat history.
2. **Duplicate assistant responses:** In some situations the `answerSection` was added to the UI stream twice. A condition now prevents this, ensuring the final answer is rendered only once.
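The duplicate-user-message failure mode can be illustrated with a minimal sketch. The message shape and helper names below are simplified assumptions for illustration, not the project's actual state code:

```typescript
// Simplified model of the chat message state; the types and function
// names here are illustrative, not the project's real code.
type Message = { id: number; role: 'user' | 'assistant'; content: string };

// Buggy flow: the client appends the user message optimistically, then
// the server response (which already echoes it) is appended as well.
function submitOptimistic(
  messages: Message[],
  userMsg: Message,
  serverEcho: Message[]
): Message[] {
  return [...messages, userMsg, ...serverEcho];
}

// Fixed flow: only server-processed messages are appended, so the
// server remains the single source of truth for chat history.
function submitServerOnly(messages: Message[], serverEcho: Message[]): Message[] {
  return [...messages, ...serverEcho];
}

const user: Message = { id: 1, role: 'user', content: 'hello' };
const echo: Message[] = [
  { id: 1, role: 'user', content: 'hello' },
  { id: 2, role: 'assistant', content: 'hi!' },
];

console.log(submitOptimistic([], user, echo).length); // 3: user message duplicated
console.log(submitServerOnly([], echo).length);       // 2: each message rendered once
```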
**Walkthrough**

Shifts the chat UI to stop appending synthetic user messages on submit, adopts a streamed inquiry model for Copilot, refactors the inquire agent to drive the UI via a streamable value with an expanded system prompt, and gates an initial researcher UI update based on model usage.
**Sequence Diagram(s)**

```mermaid
sequenceDiagram
    participant User
    participant Copilot UI
    participant objectStream
    participant Inquire Agent
    participant Backend
    User->>Copilot UI: Open Copilot
    Inquire Agent->>objectStream: createStreamableValue()
    Copilot UI->>objectStream: subscribe (useStreamableValue)
    Inquire Agent->>objectStream: update(partial inquiry chunks)
    objectStream-->>Copilot UI: streamed data updates
    User->>Copilot UI: Submit form
    Copilot UI->>Backend: submit(formData, skip)
    Backend-->>Copilot UI: response
    Copilot UI->>Copilot UI: append response only (no user echo)
    Inquire Agent->>objectStream: done()
```

```mermaid
sequenceDiagram
    participant User
    participant Chat Component
    participant Messages State
    User->>Chat Component: Submit input
    Chat Component->>Messages State: (no user message appended)
    Chat Component-->>Messages State: append responseMessage when received
```
**Estimated code review effort**

🎯 3 (Moderate) | ⏱️ ~20 minutes
**PR Reviewer Guide 🔍**

Here are some key observations to aid the review process:
**PR Code Suggestions ✨**

No code suggestions found for the PR.
I've addressed two issues with this latest commit:

1. **Question preview not working:** A recent change to fix duplicate output inadvertently broke the question preview functionality, due to a misunderstanding of how to handle streamable values on the server. I've refactored the `Copilot` component and related server-side code to use streamable values correctly, which restores the question preview.
2. **Duplicate user messages in the follow-up panel:** The `FollowupPanel` component was still using an optimistic UI update, which could cause duplicate user messages. I removed this optimistic update, making it consistent with the other input components.
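The streamed inquiry flow (create a streamable value on the server, push partial updates, close it when finished) can be sketched with a minimal stand-in. The stub below only mimics the `update`/`done` contract of the AI SDK's `createStreamableValue`; it is not the real `ai/rsc` API, and the types and callback-based subscription are simplified assumptions:

```typescript
// Minimal stand-in for the streamable-value contract; real code would
// import createStreamableValue from 'ai/rsc'. This stub only models
// the server-side update/done flow for illustration.
type PartialInquiry = { question?: string; options?: string[] };

function createStreamableValueStub<T>(initial: T) {
  let current = initial;
  let closed = false;
  const listeners: Array<(v: T) => void> = [];
  return {
    get value() { return current; },
    // Stands in for the client-side useStreamableValue subscription.
    subscribe(fn: (v: T) => void) { listeners.push(fn); },
    update(v: T) {
      if (closed) throw new Error('stream already closed');
      current = v;
      listeners.forEach(fn => fn(v));
    },
    done() { closed = true; },
  };
}

const objectStream = createStreamableValueStub<PartialInquiry>({});
const received: PartialInquiry[] = [];
objectStream.subscribe(v => received.push(v));

// Server side: push partial inquiry chunks as the model streams them,
// and always close the stream, even if streaming throws.
try {
  objectStream.update({ question: 'What topic' });
  objectStream.update({ question: 'What topic are you researching?' });
} finally {
  objectStream.done(); // guaranteed cleanup, mirroring the try/finally in inquire.tsx
}

console.log(received.length); // 2 partial updates delivered
```

The `finally` block matters: without it, an error mid-stream would leave the client hanging on an unclosed stream.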
Actionable comments posted: 1
🧹 Nitpick comments (1)
components/search-related.tsx (1)

**35-42: Unused variable `query` after removing user message logic.**

The `query` variable is extracted from the form but is no longer used after removing the user message creation logic. Consider removing the unused variable to clean up the code:

```diff
- // // Get the submitter of the form
- const submitter = (event.nativeEvent as SubmitEvent)
-   .submitter as HTMLInputElement
- let query = ''
- if (submitter) {
-   formData.append(submitter.name, submitter.value)
-   query = submitter.value
- }
+ // Get the submitter of the form
+ const submitter = (event.nativeEvent as SubmitEvent)
+   .submitter as HTMLInputElement
+ if (submitter) {
+   formData.append(submitter.name, submitter.value)
+ }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these settings in your CodeRabbit configuration.
📒 Files selected for processing (6)
- components/chat-panel.tsx (0 hunks)
- components/copilot.tsx (3 hunks)
- components/followup-panel.tsx (1 hunks)
- components/search-related.tsx (1 hunks)
- lib/agents/inquire.tsx (1 hunks)
- lib/agents/researcher.tsx (1 hunks)
💤 Files with no reviewable changes (1)
- components/chat-panel.tsx
🧰 Additional context used
🧬 Code Graph Analysis (4)
components/search-related.tsx (5)
- lib/actions/chat-db.ts (1): msg (117-121)
- components/chat-panel.tsx (3): e (47-64), ChatPanel (21-187), currentMessages (53-59)
- lib/actions/chat.ts (1): msg (119-127)
- app/actions.tsx (2): state (290-340), message (347-476)
- components/chat-messages.tsx (1): ChatMessagesProps (7-9)

components/followup-panel.tsx (2)
- components/chat-panel.tsx (1): e (47-64)
- app/actions.tsx (2): submit (35-249), state (290-340)

components/copilot.tsx (2)
- components/copilot-display.tsx (2): CopilotDisplay (12-30), CopilotDisplayProps (8-10)
- app/actions.tsx (1): submit (35-249)

lib/agents/inquire.tsx (7)
- components/copilot.tsx (1): Copilot (25-186)
- lib/utils/index.ts (1): getModel (19-62)
- lib/schema/inquiry.tsx (1): inquirySchema (4-20)
- app/actions.tsx (3): processEvents (103-239), submit (35-249), aiState (343-478)
- lib/agents/writer.tsx (1): writer (7-48)
- lib/agents/query-suggestor.tsx (1): querySuggestor (8-50)
- lib/agents/tools/index.tsx (1): ToolProps (7-11)
🔇 Additional comments (8)
lib/agents/researcher.tsx (2)

**74-78: Condition correctly prevents duplicate assistant responses.**

The added condition checking `!useSpecificModel` prevents the `answerSection` from being added to the UI stream when using a specific model, effectively avoiding duplicate assistant responses in the chat. This aligns well with the PR's objective to fix duplicate output issues.
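The gating logic can be sketched in isolation. The variable names below mirror the review discussion (`useSpecificModel`, `answerSection`), but the surrounding function is a simplified assumption, not the actual researcher agent:

```typescript
// Illustrative sketch of gating a one-time UI append on a model flag.
// The real code appends React nodes to a UI stream; strings stand in here.
function streamAnswer(useSpecificModel: boolean, chunks: string[]): string[] {
  const ui: string[] = [];
  const answerSection = '<answer-section>';
  // Only append the answer section up front on the default-model path;
  // the specific-model path renders it elsewhere, so appending it here
  // too would duplicate the assistant response.
  if (!useSpecificModel) {
    ui.push(answerSection);
  }
  for (const chunk of chunks) {
    ui.push(chunk);
  }
  return ui;
}

console.log(streamAnswer(false, ['a', 'b'])); // ['<answer-section>', 'a', 'b']
console.log(streamAnswer(true, ['a', 'b']));  // ['a', 'b'], no duplicate section
```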
**92-94: Consistency with the new gating condition.**

Line 93 appends the `answerSection` when tool responses are received. At first glance this looked like it might miss the `useSpecificModel` check from lines 74-78, which could reintroduce duplicate assistant responses when a specific model is used. On closer inspection, however, the condition already includes `!useSpecificModel`:

```
if (!useSpecificModel && toolResponses.length === 0 && delta.result) {
  uiStream.append(answerSection)
}
```

The logic appears correct: the section is only appended when a specific model is not in use.

components/search-related.tsx (1)
**46-46: Successfully removes duplicate user messages.**

The change correctly removes the optimistic user message creation, making the server the single source of truth for chat history. Only the `responseMessage` is now appended to the messages state, which aligns with the PR's objective to fix duplicate output.

components/followup-panel.tsx (1)

**24-24: Correctly eliminates duplicate user messages in follow-up panel.**

The removal of optimistic user message creation ensures that only server-processed responses are added to the chat history, successfully addressing the duplicate output issue.
components/copilot.tsx (2)
**19-26: Successful migration to streaming data model.**

The component now properly consumes `StreamableValue<PartialInquiry>` and uses `useStreamableValue` to read the streaming data. This change aligns with the new streaming architecture and helps prevent duplicate outputs by relying on server-controlled data flow.

**125-161: Correctly references streaming data throughout the component.**

All UI references have been properly updated from `value.*` to `data.*` to work with the new streaming data model. The changes are consistent and maintain the component's functionality while supporting the new architecture.

lib/agents/inquire.tsx (2)

**11-12: Well-implemented streaming pattern for Copilot UI.**

The introduction of `objectStream` and the immediate UI update with `objectStream.value` establishes a clean streaming pattern. This ensures the Copilot component receives updates through a `StreamableValue`, which is properly consumed in the component.

**15-36: Robust error handling with guaranteed stream cleanup.**

The try/finally block ensures `objectStream.done()` is always called, preventing potential memory leaks or hanging streams. This is a best practice for stream management.
```
system: `As a professional writer, your job is to generate a comprehensive and informative, yet concise answer of 400 words or less for the given question based solely on the provided search results (URL and content). You must only use information from the provided search results. Use an unbiased and journalistic tone. Combine search results together into a coherent answer. Do not repeat text. If there are any images relevant to your answer, be sure to include them as well. Aim to directly address the user's question, augmenting your response with insights gleaned from the search results.
Whenever quoting or referencing information from a specific URL, always cite the source URL explicitly. Please match the language of the response to the user's language.
Always answer in Markdown format. Links and images must follow the correct format.
Link format: [link text](url)
Image format: ![alt text](url)
`,
```
System prompt doesn't match the inquiry context.
The system prompt appears to be copied from a writer/researcher agent and talks about "search results", "URLs", and generating comprehensive answers. However, this inquire function is meant to generate inquiry questions for the Copilot UI to gather more information from the user, not to answer questions based on search results.
The system prompt should be focused on generating clarifying questions or gathering additional context from the user. Consider updating it to something like:
```diff
- system: `As a professional writer, your job is to generate a comprehensive and informative, yet concise answer of 400 words or less for the given question based solely on the provided search results (URL and content). You must only use information from the provided search results. Use an unbiased and journalistic tone. Combine search results together into a coherent answer. Do not repeat text. If there are any images relevant to your answer, be sure to include them as well. Aim to directly address the user's question, augmenting your response with insights gleaned from the search results.
- Whenever quoting or referencing information from a specific URL, always cite the source URL explicitly. Please match the language of the response to the user's language.
- Always answer in Markdown format. Links and images must follow the correct format.
- Link format: [link text](url)
- Image format: ![alt text](url)
- `,
+ system: `As an AI assistant, your task is to generate a clarifying inquiry when the user's request needs more specific information. Create a focused question with relevant options that help narrow down what the user is looking for. The inquiry should be clear, concise, and directly related to the user's original request. Match the language of the inquiry to the user's language.`,
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```suggestion
  system: `As an AI assistant, your task is to generate a clarifying inquiry when the user's request needs more specific information. Create a focused question with relevant options that help narrow down what the user is looking for. The inquiry should be clear, concise, and directly related to the user's original request. Match the language of the inquiry to the user's language.`,
```
🤖 Prompt for AI Agents
In lib/agents/inquire.tsx around lines 18 to 23 the system prompt is incorrect
for the inquire agent: it instructs the agent to synthesize answers from search
results instead of producing clarifying questions for the Copilot UI. Replace
the prompt with one that instructs the agent to generate concise,
context-gathering clarifying questions (matching the user language), tailored to
the Copilot UI flow, avoid instructing the agent to use search results or
produce final answers, do not force Markdown or strict word counts, and ensure
the prompt emphasizes brevity, optional follow-ups, and that questions are
actionable for the user to provide missing info.
User description
This commit fixes two sources of duplicate output in the chat system:

**Duplicate user messages:** The client-side components (`ChatPanel` and `SearchRelated`) were optimistically adding user messages to the UI state before the server had processed them. This resulted in each message appearing twice. This commit removes the optimistic UI updates, making the server the single source of truth for the chat history.

**Duplicate assistant responses:** In some situations the `answerSection` was being added to the UI stream twice. A condition now prevents this, ensuring the final answer is rendered only once.

PR Type
Bug fix
Description
Remove optimistic UI updates from chat components
Prevent duplicate assistant responses in streaming
Make server single source of truth for chat history
Fix duplicate message rendering in chat system
Diagram Walkthrough
File Walkthrough
chat-panel.tsx (components/chat-panel.tsx)
- Remove optimistic user message updates

search-related.tsx (components/search-related.tsx)
- Remove optimistic message handling in search

researcher.tsx (lib/agents/researcher.tsx)
- Prevent duplicate assistant response streaming
- Gate `answerSection` additions on the `useSpecificModel` flag before updating the UI stream

Summary by CodeRabbit