fix: prevent double output when using litellm models #789
Conversation
- Check for self._using_custom_llm before calling display_interaction
- Custom LLM already handles display in its get_response method
- Fixes issue #787 where task/response printed twice with litellm

Co-authored-by: Mervin Praison <MervinPraison@users.noreply.github.com>
Walkthrough
The change updates the Agent's `chat` method so that `display_interaction()` is only called when the agent is not using a custom LLM.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant Agent
    participant LLM
    User->>Agent: start("What is 2+2?")
    Agent->>LLM: Generate response
    LLM-->>Agent: "4"
    alt Not using custom LLM and verbose
        Agent->>Agent: display_interaction()
    end
    Agent-->>User: Output response
```
Assessment against linked issues
Out-of-scope changes: No out-of-scope changes found.
Summary of Changes
Hello @MervinPraison, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request addresses a specific bug where using custom Large Language Models (LLMs), particularly those integrated via LiteLLM, resulted in redundant output being displayed to the user. The core change involves adding checks to ensure that the internal display_interaction function is only invoked when a custom LLM is not responsible for handling the output display, thereby streamlining the user experience and eliminating duplicate console messages.
Highlights
- Duplicate Output Fix: Implemented conditional checks in the Agent's `chat` method to prevent `display_interaction()` from being called when a custom LLM (such as LiteLLM models) is in use. This resolves the issue of task and response being printed twice, as custom LLMs already handle their own display.
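To make the highlighted behaviour concrete, here is a minimal, self-contained sketch of the double-output path and the guard that removes it. It is illustrative only: the class bodies are stand-ins, and names such as `display_interaction`, `get_response`, and `_using_custom_llm` are taken from this PR's discussion rather than copied from the repository.

```python
# Hypothetical sketch of the double-output problem described in issue #787.
def display_interaction(prompt: str, response: str) -> None:
    print(f"Task: {prompt}\nResponse: {response}")

class CustomLLM:
    """Stand-in for a LiteLLM-backed provider that renders its own output."""
    def get_response(self, prompt: str) -> str:
        response = "4"                         # pretend litellm answered here
        display_interaction(prompt, response)  # first (and only intended) display
        return response

class Agent:
    """Stand-in Agent; only the control flow of chat() matters here."""
    def __init__(self, llm: CustomLLM) -> None:
        self.llm = llm
        self._using_custom_llm = True          # set when a custom provider is wired in

    def chat(self, prompt: str) -> str:
        response = self.llm.get_response(prompt)
        # Before the fix this call was unconditional, so the interaction printed twice.
        if not self._using_custom_llm:
            display_interaction(prompt, response)
        return response

Agent(CustomLLM()).chat("What is 2+2?")        # now prints the interaction exactly once
```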
@claude review again if this fixes the issue without any features removed, and with backward compatibility when multiple agents run in parallel.
PR Reviewer Guide 🔍
Here are some key observations to aid the review process:
Claude finished @MervinPraison's task — View job

Code Review for PR #789
I've analyzed the changes to fix the double output issue when using litellm models.

Review Summary
The PR correctly addresses issue #787 by adding conditional checks to prevent duplicate display output.
PR Code Suggestions ✨
No code suggestions found for the PR.
Code Review
This pull request correctly addresses the double output issue when using litellm models by adding a check for _using_custom_llm before displaying interactions. The logic is sound and the changes are well-targeted.
I've added one suggestion to improve consistency regarding the verbose flag, which will also fix a potential bug where interactions might be displayed even when verbosity is turned off. Applying this will make the code more robust and maintainable.
```python
# Only display interaction if not using custom LLM (to avoid double output)
if not self._using_custom_llm:
    display_interaction(original_prompt, response_text, markdown=self.markdown, generation_time=time.time() - start_time, console=self.console)
```
For consistency with the change at lines 1236-1239 and to correctly handle the verbosity setting, it's good practice to also check `self.verbose` here. The `display_interaction` function doesn't check verbosity itself, so the caller should be responsible for it.
The same logic applies to the other `display_interaction` calls you've modified in this PR (lines 1285-1287, 1310-1312, and 1327-1329).
Applying this change will make the display behavior consistent across the `chat` method.
```diff
-# Only display interaction if not using custom LLM (to avoid double output)
-if not self._using_custom_llm:
-    display_interaction(original_prompt, response_text, markdown=self.markdown, generation_time=time.time() - start_time, console=self.console)
+# Only display interaction if not using custom LLM (to avoid double output) and verbose is True
+if self.verbose and not self._using_custom_llm:
+    display_interaction(original_prompt, response_text, markdown=self.markdown, generation_time=time.time() - start_time, console=self.console)
```
@claude fix the above issue, then re-verify and review whether the implementation is correct after making the changes to the file, with multiple agents running in parallel.
Claude finished @MervinPraison's task — View job

Code Review Completed ✅
I've successfully fixed the inconsistency issue and thoroughly reviewed the implementation.
Changes Applied
Added the `self.verbose` check so that all four calls now follow the same pattern:

```python
# Now all checks follow the pattern:
if self.verbose and not self._using_custom_llm:
    display_interaction(...)
```

This ensures display output only appears when verbose mode is enabled and the agent is not using a custom LLM.
Implementation Review
- ✅ Root Cause Analysis Confirmed
- ✅ Flag Initialization Verified
- ✅ Backward Compatibility Maintained
- ✅ Parallel Agent Execution Support

Summary
The PR now correctly fixes the double output issue with litellm models while maintaining all existing functionality. The implementation is clean, consistent, and properly tested for various scenarios, including parallel agent execution. The fix is ready for merge! 🚀
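To illustrate the parallel-execution point above, here is a small, hypothetical sketch. It assumes, as the review states, that `verbose` and `_using_custom_llm` are plain per-instance attributes, so concurrently running agents never share display state; the `Agent` class below is a stand-in, not the praisonaiagents implementation.

```python
import threading

def display_interaction(prompt: str, response: str) -> None:
    print(f"[display] {prompt} -> {response}")

class Agent:
    """Simplified stand-in: per-instance flags keep the display guard local to each agent."""
    def __init__(self, name: str, using_custom_llm: bool, verbose: bool = True) -> None:
        self.name = name
        self.verbose = verbose
        self._using_custom_llm = using_custom_llm  # instance attribute, not shared class state

    def chat(self, prompt: str) -> str:
        response = f"answer from {self.name}"      # placeholder for the real LLM call
        if self.verbose and not self._using_custom_llm:
            display_interaction(prompt, response)  # only non-custom-LLM agents display here
        return response

# Two agents running in parallel: the litellm-backed one stays silent in chat(),
# because its (hypothetical) custom LLM path would already have displayed the output.
agents = [Agent("openai-agent", using_custom_llm=False),
          Agent("litellm-agent", using_custom_llm=True)]
threads = [threading.Thread(target=a.chat, args=("What is 2+2?",)) for a in agents]
for t in threads:
    t.start()
for t in threads:
    t.join()
```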
- Applied consistent pattern to all 4 display_interaction calls
- Now all checks follow pattern: if self.verbose and not self._using_custom_llm
- Ensures display is only shown when verbose mode is enabled AND not using custom LLM
- Maintains backward compatibility while fixing the inconsistency

Co-authored-by: Mervin Praison <MervinPraison@users.noreply.github.com>
User description
Fixes #787
Summary
This PR fixes the issue where task and response are printed twice when using litellm models (e.g., "gemini/gemini-1.5-flash-8b").
Changes
- Added a check for `self._using_custom_llm` before calling `display_interaction()` in the Agent's `chat` method
- Custom LLM already handles display in its `get_response` method

Testing
The fix ensures that when using litellm models, the task and response are displayed only once instead of twice.
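A possible way to reproduce the before/after behaviour, sketched under assumptions: the import path and the `instructions`/`llm`/`verbose` constructor arguments are taken from typical praisonaiagents examples and the linked issue, and may not match the library exactly.

```python
# Hedged reproduction sketch for issue #787; parameter names are assumptions.
from praisonaiagents import Agent  # import path assumed

agent = Agent(
    instructions="You are a helpful assistant.",
    llm="gemini/gemini-1.5-flash-8b",  # litellm-style model string from the PR description
    verbose=True,
)

# Before this fix, the task and response were rendered twice in the console;
# with the guarded display_interaction() calls they should appear exactly once.
agent.start("What is 2+2?")
```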
Generated with Claude Code
PR Type
Bug fix
Description
- Fixed double output display when using litellm models
- Added conditional checks to prevent duplicate task/response printing
- Modified 5 locations in agent.py chat method
Changes diagram
Changes walkthrough 📝
agent.py
Prevent double output for custom LLM providers
src/praisonai-agents/praisonaiagents/agent/agent.py

- Added `not self._using_custom_llm` condition to 5 `display_interaction()` calls
Summary by CodeRabbit