fix: ensure consistent Task/Response formatting across all LLM providers #961
Conversation
- Modified display condition to show formatted output for both OpenAI and custom LLM providers (like Gemini)
- Added `_final_display_shown` flag to prevent duplicate displays within the same execution
- Reset flag at start of each `chat()` call for new conversations
- Maintains backward compatibility and existing verbose control behavior

Fixes #958

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Mervin Praison <MervinPraison@users.noreply.github.com>
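For illustration, here is a minimal sketch of the flag lifecycle described above, assuming a heavily simplified `Agent` class; the `_run_llm` helper and the plain `print` call are stand-ins for the real provider call and `display_interaction`, not names from the codebase:

```python
class Agent:
    def __init__(self, verbose=True):
        self.verbose = verbose
        # Tracks whether the final interaction has already been rendered
        # during the current execution, to avoid duplicate displays.
        self._final_display_shown = False

    def chat(self, prompt):
        # New conversation: reset so the final display can be shown again.
        self._final_display_shown = False
        response = self._run_llm(prompt)  # hypothetical provider-agnostic call
        self._execute_callback_and_display(prompt, response)
        return response

    def _execute_callback_and_display(self, prompt, response):
        # Applies to both OpenAI and custom LLM providers (e.g. Gemini).
        if self.verbose and not self._final_display_shown:
            print(f"Task: {prompt}\nResponse: {response}")  # stand-in for display_interaction()
            self._final_display_shown = True

    def _run_llm(self, prompt):
        # Placeholder for the actual LLM call.
        return f"response to: {prompt}"
```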
@claude review this pull request and do a detailed analysis, and fix this if the existing code doesn't have the solution implemented. Make sure it has backward compatibility and that no existing features are removed. After making those changes, review the applied changes again. Use @web to search if you don't know any information, or to find the latest documentation or the latest version. Run the code if you think you need to run it to test it. Start with a minimal code change, if any changes are required.
Summary of Changes
Hello @MervinPraison, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request aims to standardize the display formatting of LLM responses across all providers within PraisonAI, ensuring a consistent user experience. It resolves an issue where custom LLM outputs lacked the structured 'Task/Response' formatting seen with OpenAI, and it introduces a mechanism to prevent redundant output displays during execution.
Highlights
- Consistent LLM Output Formatting: I've addressed an inconsistency where different LLM providers (e.g., Gemini vs. OpenAI) displayed responses differently. Previously, Gemini responses were plain text, while OpenAI responses were formatted in 'Task/Response' boxes. This change ensures all providers now show consistent, formatted output when verbose mode is enabled.
- Duplicate Display Prevention: I've introduced a new internal flag, `_final_display_shown`, to prevent duplicate display of the final interaction within a single execution. This flag is set after the first display and reset at the beginning of each new `chat` conversation.
- Refined Display Logic: The condition for displaying the final interaction in `_execute_callback_and_display` has been updated from `self.verbose and not self._using_custom_llm` to `self.verbose and not self._final_display_shown`. This change allows custom LLM providers to also benefit from the formatted output, while the new flag handles duplication.
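In code terms, the change described in the highlights amounts to a one-condition swap. The sketch below isolates it in a hypothetical `_should_display` helper purely for readability; in the actual code the condition sits inline in `_execute_callback_and_display`, and the real call passes many more arguments (see the snippet later in this thread):

```python
def _should_display(self) -> bool:
    # Before this PR (per the summary above): formatted output was gated on
    # the provider type, so custom LLM providers never reached it:
    #     return self.verbose and not self._using_custom_llm
    # After this PR: the gate depends only on verbosity and on whether the
    # final interaction has already been displayed in this execution.
    return self.verbose and not self._final_display_shown
```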
Claude finished @MervinPraison's task —— View job

PR Review & Analysis ✅
Completed detailed review of PR #961 for consistent Task/Response formatting across LLM providers.
Todo List:
Changes Analysis: The PR makes 3 key changes:
Problem Addressed:
Implementation Analysis: ✅ Logic is Sound: The fix correctly addresses the issue by allowing both OpenAI and custom LLM providers to show formatted output.
Backward Compatibility Analysis: ✅ No Features Removed: All existing functionality preserved.
Testing & Bug Fix: ✅ Logic Verified: The fix correctly modifies only the display condition while preserving all other logic.
Final Recommendations: 🎯 APPROVED - The PR successfully achieves its goals with minimal code changes.
✅ Original PR Changes:
✅ Additional Bug Fix Applied:
📋 Summary: The PR is ready to merge with the additional bug fix applied. Branch:
Code Review
An excellent pull request that aims to unify the output formatting for all LLM providers by introducing a _final_display_shown flag to prevent duplicate outputs. The implementation of the flag itself is sound. However, the primary change doesn't achieve its goal because the modified display logic is in a function that is not called for custom LLM providers. I've left a critical comment detailing why the fix is incomplete and what additional changes are needed to make it work as intended.
…lay behavior

- Added `_final_display_shown = False` reset in `achat()` method to match `chat()` method
- Ensures proper display flag management for both sync and async agent interactions
- Maintains consistency across all agent conversation entry points

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Mervin Praison <MervinPraison@users.noreply.github.com>
@claude review whether the changes made in the above branch are as per the requirements, and make changes to the above branch if issues still exist.
Claude finished @MervinPraison's task —— View job

Review of PR Changes ✅
Comprehensive analysis of the branch changes completed.
Todo List:
Final Assessment ✅
🎯 APPROVED - The PR Successfully Achieves Its Goals
Changes Analysis:
Problem Addressed:
Implementation Verification:
Issues from Previous Reviewers:
Backward Compatibility Analysis:
📋 Final Recommendations: ✅ The PR is ready to merge - no additional changes required.
The implementation successfully achieves the goal of consistent formatting across all LLM providers with minimal code changes while maintaining full backward compatibility. Branch:
Bug: Callback Execution and Display Issues
This commit introduces two bugs in the `_execute_callback_and_display` method:
- Task Information Loss: `task_name`, `task_description`, and `task_id` are hardcoded to `None` in the `display_interaction` call, discarding available task context that was previously displayed.
- Display Suppression: The `_final_display_shown` flag prevents all subsequent `display_interaction` calls within a chat session after the first, hiding legitimate intermediate outputs (e.g., tool execution, reflection) instead of just preventing duplicate final displays.
PraisonAI/src/praisonai-agents/praisonaiagents/agent/agent.py
Lines 1179 to 1189 in a9641ad
```python
# This ensures both OpenAI and custom LLM providers (like Gemini) show formatted output
if self.verbose and not self._final_display_shown:
    display_interaction(prompt, response, markdown=self.markdown,
                        generation_time=generation_time, console=self.console,
                        agent_name=self.name,
                        agent_role=self.role,
                        agent_tools=[t.__name__ for t in self.tools] if self.tools else None,
                        task_name=None,  # Not available in this context
                        task_description=None,  # Not available in this context
                        task_id=None)  # Not available in this context
    self._final_display_shown = True
```
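For what a fix might look like, here is a hedged sketch under the assumption that callers have task metadata available to forward and can tell the method whether the output is the final one; the extra parameters (`task_name`, `task_description`, `task_id`, `is_final`) are illustrative and not part of this PR:

```python
def _execute_callback_and_display(self, prompt, response, generation_time,
                                  task_name=None, task_description=None,
                                  task_id=None, is_final=True):
    # Hypothetical variant: callers forward task context when they have it,
    # so it is no longer discarded by hardcoded None values.
    if self.verbose and not self._final_display_shown:
        display_interaction(prompt, response, markdown=self.markdown,
                            generation_time=generation_time, console=self.console,
                            agent_name=self.name,
                            agent_role=self.role,
                            agent_tools=[t.__name__ for t in self.tools] if self.tools else None,
                            task_name=task_name,
                            task_description=task_description,
                            task_id=task_id)
        # Only mark the final display as shown, so intermediate outputs
        # (tool execution, reflection) are not suppressed.
        if is_final:
            self._final_display_shown = True
```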
Fixes the formatting difference between OpenAI and Gemini responses in PraisonAI.
Problem
Gemini responses only showed plain text output while OpenAI responses showed formatted Task/Response boxes.
Solution
Testing
Closes #958
🤖 Generated with Claude Code