
Conversation


@MervinPraison MervinPraison commented Jun 6, 2025

User description

Fixes #612

Summary

This PR fixes the duplicate Task and Response printing issue when using litellm with memory in PraisonAI agents.

Root Cause

The issue occurred because the LLM class was calling display_interaction() multiple times internally while the Agent class also expected to handle display, causing duplicates when using custom LLM instances.

Solution

Implemented a minimal fix with backward compatibility:

  • Added suppress_display parameter to LLM.get_response() method (default: False)
  • Updated all internal display_interaction() calls in LLM class to respect this parameter
  • Modified agent calls to pass suppress_display=True when using custom LLM instances
  • Agent handles display centrally to ensure single display per interaction
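
A minimal sketch of the LLM-side gating described above, assuming a simplified get_response signature; the helper name and any parameters other than verbose and suppress_display are illustrative, not the actual implementation:

    # Sketch only: simplified shape of the gating inside LLM.get_response().
    def get_response(self, prompt, verbose=True, markdown=True,
                     suppress_display=False, **kwargs):
        response_text = self._completion(prompt, **kwargs)  # hypothetical helper wrapping the litellm call

        # Display only when verbose is on AND the caller has not taken over display duties.
        if verbose and not suppress_display:
            display_interaction(prompt, response_text, markdown=markdown)

        return response_text

Because suppress_display defaults to False, existing callers keep the old behavior; only the Agent opts in.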

Changes

  • llm.py:293: Added suppress_display: bool = False parameter
  • llm.py: Updated ~4 display calls to check verbose and not suppress_display
  • agent.py:1112: Added suppress_display=True to LLM calls
  • agent.py:1119-1121: Added centralized Agent-level display
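
On the Agent side, the flow roughly follows this pattern (a sketch assembled from the excerpts quoted later in this thread; the wrapper function, attribute access, and import path are illustrative assumptions):

    # Sketch of the Agent-side call path (illustrative, not the exact agent.py code).
    import time
    from praisonaiagents.main import display_interaction  # assumed import path

    def chat_sketch(agent, prompt, start_time):
        # Ask the custom LLM for a response but keep it from printing anything itself.
        response_text = agent.llm.get_response(
            prompt,
            verbose=agent.verbose,
            markdown=agent.markdown,
            suppress_display=True,  # Prevent duplicate displays - Agent will handle display
        )

        agent.chat_history.append({"role": "user", "content": prompt})
        agent.chat_history.append({"role": "assistant", "content": response_text})

        # Single, centralized display per interaction.
        if agent.verbose:
            display_interaction(prompt, response_text, markdown=agent.markdown,
                                generation_time=time.time() - start_time,
                                console=agent.console)
        return response_text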

Backward Compatibility

This change maintains full backward compatibility as the new parameter defaults to False.

Generated with Claude Code


PR Type

Bug fix, Tests


Description

  • Prevents duplicate Task and Response display with memory-enabled agents.

    • Adds suppress_display parameter to LLM.get_response() (default: False).
    • Updates all LLM display calls to respect suppress_display.
    • Agent now handles display centrally, passing suppress_display=True to LLM.
  • Adds regression test to verify duplicate display fix.


Changes walkthrough 📝

Relevant files

Bug fix

llm.py: Add suppress_display to LLM.get_response and update display logic
src/praisonai-agents/praisonaiagents/llm/llm.py

  • Adds suppress_display parameter to get_response().
  • Updates all display_interaction calls to check not suppress_display.
  • Ensures backward compatibility by defaulting suppress_display to False.
  • +7/-6

agent.py: Centralize display in Agent and suppress LLM duplicate output
src/praisonai-agents/praisonaiagents/agent/agent.py

  • Passes suppress_display=True to LLM when agent manages display.
  • Centralizes display logic in Agent when using custom LLM.
  • Prevents duplicate output in verbose mode.
  • +7/-1

Tests

test_duplicate_fix.py: Add regression test for duplicate display issue
test_duplicate_fix.py

  • Adds test to verify duplicate display issue is fixed.
  • Sets up agent, task, and memory-enabled agents for regression test.
  • Prints test status for manual verification.
  • +37/-0
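
Based on the walkthrough above and the test lines quoted further down in this thread, the script is probably shaped roughly as follows; the llm_config contents and the Task arguments are assumptions for illustration, not quotes from the PR:

    # Hypothetical reconstruction of test_duplicate_fix.py (illustrative only).
    from praisonaiagents import Agent, Task, PraisonAIAgents

    llm_config = {"model": "openai/gpt-4o-mini"}  # assumed custom LLM config; actual values not shown here

    blog_agent = Agent(
        role="Blog Writer",
        goal="Write a blog post about AI",
        backstory="Expert at writing blog posts",
        llm=llm_config,
    )

    blog_task = Task(
        description="Write a short blog post about AI",  # assumed wording
        expected_output="A short blog post",             # assumed wording
        agent=blog_agent,
    )

    agents = PraisonAIAgents(agents=[blog_agent], tasks=[blog_task], memory=True)

    print("Testing fix for duplicate display issue...")
    result = agents.start()
    print("Test completed successfully!")

As the review below points out, the script verifies the fix by manual inspection of the output; mocking display_interaction and asserting on its call count would make the check automatic.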

Summary by CodeRabbit

  • Bug Fixes
    • Resolved an issue where chat responses were displayed twice when verbose mode was enabled, ensuring output is shown only once.
  • Tests
    • Added a new test script to verify that duplicate display of chat responses no longer occurs.

    …ith memory
    
    - Add suppress_display parameter to LLM.get_response() method (default: False)
    - Update all display_interaction calls in LLM class to respect suppress_display flag
    - Agent class now passes suppress_display=True to prevent LLM duplicate displays
    - Agent handles display centrally to ensure single display per interaction
    - Maintains full backward compatibility while fixing issue #612
    
    Co-authored-by: MervinPraison <MervinPraison@users.noreply.github.com>

    coderabbitai bot commented Jun 6, 2025

    Caution

    Review failed

    The pull request is closed.

    """

    Walkthrough

    The changes introduce a suppress_display parameter to the LLM's get_response method, allowing the agent to control when outputs are displayed. The agent now explicitly handles display of interactions when verbose mode is enabled, preventing duplicate output. A new test script verifies that duplicate display issues are resolved.

    Changes

    File(s) Change Summary
    src/praisonai-agents/praisonaiagents/agent/agent.py Modified the chat method to pass suppress_display=True to LLM and handle display internally.
    src/praisonai-agents/praisonaiagents/llm/llm.py Added suppress_display parameter to get_response, gating display calls based on this flag.
    test_duplicate_fix.py Added a new test script to verify the fix for duplicate display outputs.

    Sequence Diagram(s)

    sequenceDiagram
        participant User
        participant Agent
        participant LLM
    
        User->>Agent: chat(prompt, ...)
        Agent->>LLM: get_response(prompt, ..., suppress_display=True)
        LLM-->>Agent: response_text
        Agent->>Agent: Append response to history
        Agent->>Agent: If verbose, display_interaction()
        Agent-->>User: response_text
    

    Assessment against linked issues

    Objective Addressed Explanation
    Prevent duplicate Task and Response outputs when using litellm with memory (#612)
    Ensure fix is minimal and targeted, preserving intended functionality and debug info (#612)
    Adjust logging/output in litellm/llm.py to prevent double emission, not general agent/task changes (#612)
    Ensure Task and Response are printed only once per invocation (#612)

    Suggested labels

    codex

    Poem

    In the warren of code where the agents reside,
    Duplicate echoes would once coincide.
    With a clever new flag, the chatter is neat—
    Each response now lands on just one rabbit’s feet.
    No more repeats, just a single display,
    The agents hop forward, in a streamlined way! 🐇✨
    """


    📜 Recent review details

    Configuration used: CodeRabbit UI
    Review profile: CHILL
    Plan: Pro

    📥 Commits

    Reviewing files that changed from the base of the PR and between 3720cb7 and 3038178.

    📒 Files selected for processing (3)
    • src/praisonai-agents/praisonaiagents/agent/agent.py (1 hunks)
    • src/praisonai-agents/praisonaiagents/llm/llm.py (5 hunks)
    • test_duplicate_fix.py (1 hunks)

    @MervinPraison MervinPraison merged commit b924a78 into main Jun 6, 2025
    6 of 9 checks passed

    @gemini-code-assist gemini-code-assist bot left a comment


    Hello @MervinPraison, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

    Summary of Changes

    Hello! Gemini here, providing a summary of this pull request. This PR addresses an issue where using LiteLLM with memory in PraisonAI agents resulted in duplicate displays of the Task and Response interactions. The root cause was identified as both the internal LLM class methods and the Agent class attempting to handle the display of interactions, leading to redundancy when a custom LLM instance was used. The solution implemented is a minimal fix that maintains backward compatibility by introducing a suppress_display parameter to the LLM.get_response method. This parameter is used internally within the LLM class to prevent its own display calls when suppress_display is set to True. The Agent's chat method is updated to call get_response with suppress_display=True and then handles the display centrally, ensuring each interaction is shown only once.

    Highlights

    • Fix Duplicate Display: Resolves the issue of duplicate Task and Response output when using LiteLLM with memory by centralizing display logic in the Agent.
    • Add suppress_display Parameter: Introduces a new boolean parameter suppress_display (defaulting to False) to the LLM.get_response method to control internal display behavior.
    • Centralize Agent Display: Modifies the Agent's chat method to call the LLM with suppress_display=True and adds a dedicated display_interaction call within the Agent to ensure single, consistent output.
    • Backward Compatibility: The changes are backward compatible as the new parameter defaults to False, not affecting existing LLM usages that don't explicitly set it.

    Changelog

    • src/praisonai-agents/praisonaiagents/agent/agent.py
      • Modified the chat method to pass suppress_display=True when calling self.llm.get_response (line 1112).
      • Added a new block to call display_interaction within the Agent's chat method when self.verbose is true, handling the display centrally after suppressing the LLM's internal display (lines 1118-1121).
    • src/praisonai-agents/praisonaiagents/llm/llm.py
      • Added the suppress_display: bool = False parameter to the get_response method signature (line 293).
      • Updated multiple internal calls to display_interaction within get_response to check verbose and not suppress_display before displaying (lines 458, 466, 669, 677, 722, 745).
    • test_duplicate_fix.py
      • Added a new test file to specifically verify that the duplicate display issue is fixed when using an Agent with a custom LLM configuration and memory enabled.

    @qodo-code-review

    PR Reviewer Guide 🔍

    Here are some key observations to aid the review process:

    ⏱️ Estimated effort to review: 2 🔵🔵⚪⚪⚪
    🧪 PR contains tests
    🔒 No security concerns identified
    ⚡ Recommended focus areas for review

    Conditional Logic

    The modified display logic uses elif verbose and not suppress_display, which assumes the first condition already includes the verbose check. This could lead to unexpected behavior if the reasoning_content check passes but verbose is False.

    if verbose and not suppress_display and reasoning_content:
        display_interaction(
            original_prompt,
            f"Reasoning:\n{reasoning_content}\n\nAnswer:\n{response_text}",
            markdown=markdown,
            generation_time=time.time() - current_time,
            console=console
        )
    elif verbose and not suppress_display:
        display_interaction(

    @qodo-code-review

    PR Code Suggestions ✨

    Explore these optional code suggestions:

    Category: Possible issue
    Suggestion: Include missing reasoning content

    The display_interaction call is missing the reasoning_content parameter which
    could lead to losing reasoning information when the agent handles display. Add
    the reasoning_content parameter to ensure complete information is displayed.

    src/praisonai-agents/praisonaiagents/agent/agent.py [1118-1121]

     # Display interaction for custom LLM (since we suppressed LLM display)
     if self.verbose:
    +    reasoning_content = getattr(response, 'reasoning_content', None)
         display_interaction(prompt, response_text, markdown=self.markdown, 
    -                     generation_time=time.time() - start_time, console=self.console)
    +                     generation_time=time.time() - start_time, console=self.console,
    +                     reasoning_content=reasoning_content)
    Suggestion importance[1-10]: 2


    Why: While the suggestion identifies a potential issue with losing reasoning content, the proposed solution is incorrect. The response object with reasoning_content attribute doesn't exist in this context since get_response() returns a string, not an object.

    Impact: Low


    @gemini-code-assist gemini-code-assist bot left a comment


    Code Review

    This pull request effectively addresses the duplicate display issue when using litellm with memory. The introduction of the suppress_display flag in the LLM class and centralizing display logic in the Agent class is a clean and well-thought-out solution. The changes maintain backward compatibility, which is excellent.

    The core logic for suppressing and handling display appears correct and aligns well with the problem description. The addition of a regression test is a good step, though it could be enhanced for better automation.

    Summary of Findings

    • Test Automation: The new test test_duplicate_fix.py relies on manual observation. It should be converted into an automated test that asserts the correct display behavior, for example, by mocking display_interaction and checking its call count. (Commented)
    • Test Verbosity Setting: The test test_duplicate_fix.py may not trigger the intended display logic because the agent's verbose attribute is not explicitly set to a positive value. This is crucial for testing the scenario where the agent handles the display. (Commented as part of Test Automation)
    • Generation Time Calculation Nuance: The generation_time displayed by the Agent class might be slightly different from what the LLM class would have displayed. The agent's calculation includes more overhead from the chat method's start. This is a minor detail and likely acceptable for display. (Not commented due to review settings - low severity)
    • PEP 8: Missing Newline at End of File: The new file test_duplicate_fix.py is missing a newline character at the end of the file. (Not commented due to review settings - low severity)
    • PEP 8: Constant Naming: In test_duplicate_fix.py, llm_config is a module-level constant and should ideally be named LLM_CONFIG according to PEP 8. (Not commented due to review settings - low severity)

    Merge Readiness

    The core fix in this pull request is well-implemented and effectively solves the duplicate display problem. However, the associated regression test (test_duplicate_fix.py) currently relies on manual verification. To ensure long-term maintainability and catch future regressions automatically, this test should be updated to include automated assertions. I have provided suggestions on how to achieve this.

    I recommend addressing the test automation aspect before merging. As a reviewer, I am not authorized to approve pull requests, so please ensure other necessary reviews and approvals are obtained.

    Comment on lines +35 to +37
    print("Testing fix for duplicate display issue...")
    result = agents.start()
    print("Test completed successfully!") No newline at end of file

    medium

    This test is a good start for verifying the fix, but it currently relies on manual observation of the output. For a robust regression test, it would be beneficial to automate the verification.

    Consider the following improvements:

    1. Automated Assertion: Instead of print statements for manual verification, the test should assert that the display interaction occurs the correct number of times (e.g., once per task interaction in verbose mode).
      • You could use unittest.mock.patch to mock src.praisonaiagents.main.display_interaction and then assert its call_count.
      • Alternatively, if using a test runner like pytest, you could use the capsys fixture to capture stdout and assert its content.
    2. Verbose Mode Activation: The agent-level display (handled in agent.py) and the LLM-level display (handled in llm.py) are conditional on verbosity settings. To properly test the scenario where the agent displays and the LLM is suppressed, ensure that the agent's verbose mode is active.
      • Currently, blog_agent.verbose will be 0 by default. You might need to set blog_agent.verbose = 1 (or another positive integer) in the test setup for the agent-level display_interaction to be called.

    An automated test will provide more confidence in CI/CD pipelines that this bug, once fixed, does not reappear.

    Example using unittest.mock:

    import unittest
    from unittest.mock import patch
    # ... other imports from your test file ...
    
    # Ensure blog_agent is set to be verbose for the agent-level display to trigger
    blog_agent = Agent(
        role="Blog Writer",
        goal="Write a blog post about AI",
        backstory="Expert at writing blog posts",
        llm=llm_config,
        verbose=1  # Activate verbose mode for the agent
    )
    
    # ... (rest of your setup for blog_task, agents) ...
    
    class TestDuplicateDisplay(unittest.TestCase):
        @patch('src.praisonaiagents.main.display_interaction')
        def test_no_duplicate_display_with_memory(self, mock_display_interaction):
            # Re-initialize agents and tasks here if they are not module-level
            # or pass the pre-configured blog_agent and blog_task
            current_agents = PraisonAIAgents(
                agents=[blog_agent], # Use the verbose agent
                tasks=[blog_task],
                memory=True
            )
            print("Testing fix for duplicate display issue...")
            result = current_agents.start()
            print("Test completed.")
            
            # Assert that display_interaction was called (e.g., once for a single task)
            self.assertEqual(mock_display_interaction.call_count, 1)
            # You could add more specific assertions about the arguments if needed
    
    if __name__ == '__main__':
        unittest.main()

    This would replace the current print-based execution at the end of your file.

    cursor[bot]

    This comment was marked as outdated.

    @MervinPraison
    Owner Author

    bugbot run verbose=True


    cursor bot commented Jun 6, 2025

    bugbot is starting with request id serverGenReqId_b824bb70-e926-43b8-bfcc-ad472114db24


    @cursor cursor bot left a comment


    Bug: Async Method Missing Display Suppression

    The suppress_display parameter was added to the synchronous LLM.get_response method but is missing from the asynchronous LLM.get_response_async method. This creates inconsistent display behavior between synchronous (agent.chat) and asynchronous (agent.achat) agent calls. The fix for duplicate displays, which involves the agent suppressing the LLM's display and handling it centrally, only works for synchronous operations. Asynchronous agent calls using get_response_async cannot suppress the LLM's display, leading to duplicate displays in async scenarios.

    src/praisonai-agents/praisonaiagents/llm/llm.py#L292-L293

    execute_tool_fn: Optional[Callable] = None,
    suppress_display: bool = False,

    src/praisonai-agents/praisonaiagents/agent/agent.py#L1111-L1121

        reasoning_steps=reasoning_steps,
        suppress_display=True  # Prevent duplicate displays - Agent will handle display
    )
    self.chat_history.append({"role": "user", "content": prompt})
    self.chat_history.append({"role": "assistant", "content": response_text})
    # Display interaction for custom LLM (since we suppressed LLM display)
    if self.verbose:
        display_interaction(prompt, response_text, markdown=self.markdown,
                            generation_time=time.time() - start_time, console=self.console)
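
    If the async path were given the same treatment, the change would presumably mirror the synchronous one, for example (a hypothetical sketch, not code from this PR; the helper name is made up):

        # Hypothetical: mirroring the fix in LLM.get_response_async (not part of this PR).
        async def get_response_async(self, prompt, verbose=True, markdown=True,
                                      suppress_display=False, **kwargs):
            response_text = await self._acompletion(prompt, **kwargs)  # hypothetical async helper

            if verbose and not suppress_display:
                display_interaction(prompt, response_text, markdown=markdown)

            return response_text

    The Agent's async chat path (achat) would then pass suppress_display=True and display once itself, exactly as the synchronous chat does above.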





    Development

    Successfully merging this pull request may close these issues.

    duplicate response when using litellm with memory
