Conversation

@MervinPraison MervinPraison (Owner) commented Jun 4, 2025

PR Type

Enhancement, Tests


Description

• Refactor agent tool execution for sequential thinking support
  • Add an iterative tool-call loop with a maximum iteration count
  • Enable conditional continuation for sequential tools (sketched below)
• Migrate test workflows to use pytest and coverage reporting
  • Replace the custom test runner with pytest in workflows
  • Add Codecov upload steps for all test types
  • Improve artifact paths and coverage report handling
• Add a dedicated coverage workflow for unit tests
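
For context, the new control flow looks roughly like the sketch below. This is a minimal illustration, not the actual diff: the OpenAI-style client calls match the snippets quoted later in this thread, while run_tools_and_update_messages is a hypothetical stand-in for the tool-execution bookkeeping inside _chat_completion.

    MAX_ITERATIONS = 10  # safeguard against runaway tool-call loops

    def chat_with_tool_loop(client, model, messages, tools):
        # Ask the model for a first response; it may request tool calls.
        response = client.chat.completions.create(model=model, messages=messages, tools=tools)
        iteration = 0
        while iteration < MAX_ITERATIONS:
            tool_calls = getattr(response.choices[0].message, "tool_calls", None)
            if not tool_calls:
                break  # no tool calls requested, so the response is final

            # Execute the requested tools, append the assistant turn and the
            # tool results to the history, and learn whether a sequential tool
            # (e.g. sequentialthinking with nextThoughtNeeded=True) wants more.
            should_continue = run_tools_and_update_messages(messages, tool_calls)

            if not should_continue:
                # The sequence has concluded; generate the final response.
                return client.chat.completions.create(model=model, messages=messages)

            # Another round of thinking was requested; call the model again.
            response = client.chat.completions.create(model=model, messages=messages, tools=tools)
            iteration += 1
        return response

Note that the fall-through return at the iteration cap is what the review comments below flag as a potential stale-response path.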


Changes walkthrough 📝

Relevant files

Enhancement

agent.py (src/praisonai-agents/praisonaiagents/agent/agent.py): Refactor agent tool execution for sequential/sequentialthinking tools (+80/-57)

  • Refactored tool execution to support iterative/sequential tool calls
  • Introduced a loop with max iterations to prevent infinite loops
  • Added logic to check for continuation based on tool call arguments
  • Improved handling for tools like sequential thinking

Tests

coverage.yml (.github/workflows/coverage.yml): Add dedicated coverage workflow with Codecov upload (+60/-0)

  • Added a new workflow to run unit tests with pytest and collect coverage
  • Installs all dependencies, including knowledge and duckduckgo_search
  • Uploads coverage results to Codecov

test-comprehensive.yml (.github/workflows/test-comprehensive.yml): Use pytest and upload coverage for comprehensive tests (+15/-5)

  • Switched unit/integration test execution to pytest with coverage reporting
  • Added a Codecov upload step for comprehensive tests
  • Updated artifact paths for coverage files

test-core.yml (.github/workflows/test-core.yml): Add Codecov upload and improve coverage for core tests (+16/-4)

  • Enhanced the unit test step to generate XML coverage reports
  • Added Codecov upload for core tests (Python 3.11)
  • Updated artifact paths for coverage files

unittest.yml (.github/workflows/unittest.yml): Use pytest and upload coverage for quick validation tests (+12/-2)

  • Changed fast test execution to pytest with coverage reporting
  • Added a Codecov upload step for quick validation tests

Summary by CodeRabbit

• Chores
  • Introduced and updated multiple GitHub Actions workflows to improve automated test execution and code coverage reporting, including integration with Codecov for enhanced visibility of test coverage.
• Bug Fixes
  • Improved multi-step tool execution in chat completions, enabling more robust and iterative tool call handling during conversations.

    @netlify netlify bot commented Jun 4, 2025

    Deploy Preview for praisonai failed.

    🔨 Latest commit: 13503f0
    🔍 Latest deploy log: https://app.netlify.com/projects/praisonai/deploys/6840291fa3327800084656fa

    @coderabbitai coderabbitai bot (Contributor) commented Jun 4, 2025

    Caution: Review failed. The pull request is closed.

    Walkthrough

    This update adds a new GitHub Actions workflow for running tests and uploading coverage reports, enhances existing workflows to use pytest with coverage and Codecov uploads, and modifies the _chat_completion method in the agent to iteratively handle multi-step tool calls with loop control.

    Changes

    • .github/workflows/coverage.yml: Added a new workflow for running tests and uploading coverage, with environment setup and Codecov integration.
    • .github/workflows/test-comprehensive.yml, .github/workflows/test-core.yml, .github/workflows/unittest.yml: Modified test steps to run pytest directly with coverage, updated artifact paths, and added Codecov upload steps.
    • src/praisonai-agents/praisonaiagents/agent/agent.py: Changed the _chat_completion method to iteratively process tool calls up to 10 loops, supporting multi-step workflows and preventing infinite loops.

    Possibly related PRs

    • Fix redundant tool invocation #594: Refactors tool call handling by removing redundant processing in the chat method, complementing the iterative tool call logic added in _chat_completion.
    • Develop #507: Adds default fallback values for environment variables in GitHub Actions workflows, related to the environment setup in the new coverage workflow.

    Suggested labels

    Review effort 2/5, codex

    Poem

    🐇 In code I hop and swiftly weave,
    Tests and coverage now achieve.
    Tools call back, again, anew,
    Ten loops to think and see it through.
    CI runs with shining light,
    Codecov's reports in clear sight.
    A rabbit’s cheer for changes bright! 🌟🐰


    📜 Recent review details

    Configuration used: CodeRabbit UI
    Review profile: CHILL
    Plan: Pro

    📥 Commits

    Reviewing files that changed from the base of the PR and between 90ebfdd and 13503f0.

    📒 Files selected for processing (5)
    • .github/workflows/coverage.yml (1 hunks)
    • .github/workflows/test-comprehensive.yml (2 hunks)
    • .github/workflows/test-core.yml (2 hunks)
    • .github/workflows/unittest.yml (2 hunks)
    • src/praisonai-agents/praisonaiagents/agent/agent.py (1 hunks)

    @gemini-code-assist gemini-code-assist bot (Contributor) left a comment

    Hello @MervinPraison, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

    Summary of Changes

    Hello team, gemini-code-assist here to provide a summary of this pull request. Based on the changes in the src/praisonai-agents/praisonaiagents/agent/agent.py file, this PR introduces support for multi-turn tool execution within a single _chat_completion call. Previously, the agent would call tools once and then potentially generate a final response. With this change, the agent can now enter a loop, executing tools and potentially receiving new instructions or tool calls from the language model based on the results of the previous tool executions. This enables more complex, sequential reasoning processes facilitated by tools, particularly driven by a check for a specific condition from a 'sequentialthinking' tool.

    Highlights

    • Multi-Turn Tool Execution: The core change enables the agent to perform multiple rounds of tool calls and subsequent LLM interactions within a single _chat_completion request, moving beyond a single tool use phase.
    • Sequential Thinking Support: A mechanism is added to check the arguments of a 'sequentialthinking' tool call (specifically nextThoughtNeeded: True) to determine whether the multi-turn loop should continue (see the example arguments below).
    • Loop Safeguard: A max_iterations limit (set to 10) is introduced to prevent potential infinite loops during the multi-turn tool execution process.
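
    For concreteness, the arguments of a sequentialthinking call that keeps the loop going would look roughly like the dict below. Only nextThoughtNeeded is what the new check inspects; the other field names are illustrative assumptions about the tool's schema:

        arguments = {
            "thought": "Break the problem into steps and verify each one.",
            "thoughtNumber": 2,        # illustrative field
            "totalThoughts": 5,        # illustrative field
            "nextThoughtNeeded": True, # the continuation flag the loop checks
        }
        # The agent obtains this dict via json.loads(tool_call.function.arguments)
        # and keeps iterating while the flag is True, up to max_iterations.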

    Changelog

    • src/praisonai-agents/praisonaiagents/agent/agent.py
      • Wrapped the tool execution logic within a while loop to allow for multiple iterations.
      • Added a max_iterations variable (defaulting to 10) to limit the number of loop cycles.
      • Implemented logic to check the results of tool calls (specifically the 'sequentialthinking' tool) to decide whether to continue the multi-turn loop.
      • Ensured that the message history is updated correctly with assistant messages and tool results in each iteration.
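
    As a minimal sketch of that per-iteration bookkeeping, assuming OpenAI SDK v1 message shapes (execute_tool is a hypothetical stand-in for the agent's tool dispatch):

        import json

        def record_tool_turn(messages, assistant_message, execute_tool):
            # Record the assistant turn that requested the tools.
            messages.append({
                "role": "assistant",
                "content": assistant_message.content,
                "tool_calls": [tc.model_dump() for tc in assistant_message.tool_calls],
            })
            # Record each result as a "tool" turn keyed by tool_call_id so the
            # model can match results back to the calls it made.
            for tool_call in assistant_message.tool_calls:
                result = execute_tool(tool_call)  # hypothetical helper
                messages.append({
                    "role": "tool",
                    "tool_call_id": tool_call.id,
                    "content": json.dumps(result),
                })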


    A loop for the tool,
    Thinking step by step, cool,
    Ten tries, then it's through.


    @qodo-merge-pro qodo-merge-pro bot commented Jun 4, 2025

    PR Reviewer Guide 🔍

    Here are some key observations to aid the review process:

    ⏱️ Estimated effort to review: 3 🔵🔵🔵⚪⚪
    🧪 PR contains tests
    🔒 No security concerns identified
    ⚡ Recommended focus areas for review

    Infinite Loop Risk

    The sequential thinking tool continuation logic may not handle all edge cases. If a tool other than "sequentialthinking" needs multiple iterations, the current implementation might not support it properly.

    should_continue = False
    for tool_call in tool_calls:
        function_name = tool_call.function.name
        arguments = json.loads(tool_call.function.arguments)
    
        # For sequential thinking tool, check if nextThoughtNeeded is True
        if function_name == "sequentialthinking" and arguments.get("nextThoughtNeeded", False):
            should_continue = True
            break
    
    if not should_continue:

    Duplicate Code

    There's duplicate code for creating final responses in both the main loop and after tool execution. This could be refactored into a helper method to improve maintainability (see the helper sketch after the snippet below).

        # Process as streaming response with formatted tools
        final_response = self._process_stream_response(
            messages, 
            temperature, 
            start_time, 
            formatted_tools=formatted_tools if formatted_tools else None,
            reasoning_steps=reasoning_steps
        )
    else:
        # Process as regular non-streaming response
        final_response = client.chat.completions.create(
            model=self.llm,
            messages=messages,
            temperature=temperature,
            tools=formatted_tools if formatted_tools else None,
            stream=False
        )
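
    One possible shape for that helper, as a sketch only (the name and signature are assumptions, not code from this PR):

        def _generate_final_response(self, client, messages, temperature, start_time,
                                     formatted_tools=None, reasoning_steps=False, stream=True):
            # Consolidates the two duplicated branches behind a single call site.
            if stream:
                # Streaming path, mirroring the first branch above
                return self._process_stream_response(
                    messages,
                    temperature,
                    start_time,
                    formatted_tools=formatted_tools if formatted_tools else None,
                    reasoning_steps=reasoning_steps,
                )
            # Non-streaming path, mirroring the second branch above
            return client.chat.completions.create(
                model=self.llm,
                messages=messages,
                temperature=temperature,
                tools=formatted_tools if formatted_tools else None,
                stream=False,
            )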

    @qodo-merge-pro qodo-merge-pro bot commented Jun 4, 2025

    PR Code Suggestions ✨

    Explore these optional code suggestions:

    Category: General
    Suggestion: Improve sequential tool execution

    The current implementation only checks for continuation with the
    "sequentialthinking" tool. Implement a more generic approach that allows any
    tool to signal continuation by adding a standardized parameter in the tool
    response or checking a predefined list of tools that support sequential
    execution.

    src/praisonai-agents/praisonaiagents/agent/agent.py [843-852]

     # Check if we should continue (for tools like sequential thinking)
     should_continue = False
    +sequential_tools = ["sequentialthinking", "iterative_reasoning", "step_by_step_solver"]
     for tool_call in tool_calls:
         function_name = tool_call.function.name
         arguments = json.loads(tool_call.function.arguments)
         
    -    # For sequential thinking tool, check if nextThoughtNeeded is True
    -    if function_name == "sequentialthinking" and arguments.get("nextThoughtNeeded", False):
    +    # Check if tool is a sequential tool and needs to continue
    +    if function_name in sequential_tools and arguments.get("nextThoughtNeeded", False):
    +        should_continue = True
    +        break
    +    # Generic continuation check for any tool
    +    elif arguments.get("continue_execution", False):
             should_continue = True
             break
    Suggestion importance[1-10]: 6

    Why: The suggestion improves code maintainability by making the sequential tool detection more generic and extensible. It adds support for multiple sequential tools and a generic continuation mechanism, which is a good architectural improvement but not critical.

    Impact: Low

    Suggestion: Add iteration limit logging

    Add a logging mechanism to track the number of iterations and warn when
    approaching the maximum limit. This helps with debugging and provides visibility
    into potential infinite loop scenarios.

    src/praisonai-agents/praisonaiagents/agent/agent.py [791-795]

     # Continue tool execution loop until no more tool calls are needed
     max_iterations = 10  # Prevent infinite loops
     iteration_count = 0
     
     while iteration_count < max_iterations:
    +    if iteration_count >= max_iterations - 2:
    +        logger.warning(f"Agent {self.name} approaching max iterations ({iteration_count}/{max_iterations})")
    Suggestion importance[1-10]: 4

    Why: The suggestion adds useful debugging visibility for loop iterations. However, it has implementation issues (undefined logger) and would trigger warnings multiple times in the last iterations, making it somewhat noisy.

    Impact: Low

    @MervinPraison MervinPraison merged commit ec93f96 into main Jun 4, 2025
    10 of 18 checks passed
    @gemini-code-assist gemini-code-assist bot (Contributor) left a comment

    Code Review

    This pull request significantly refactors the agent's tool execution to support sequential tool calls, which is a valuable enhancement. The introduction of an iterative loop with max_iterations and conditional continuation logic is well-implemented for the most part. The updates to CI workflows to use pytest and Codecov are also great improvements for testing and code quality monitoring.

    I've found one high-severity issue related to how responses are handled when max_iterations is reached, and a few medium-severity suggestions for improving maintainability and correctness. Please see the detailed comments below.

    Summary of Findings

    • Stale Response on Max Iterations: If the tool execution loop reaches max_iterations, the agent might return a response that doesn't incorporate the results from the very last set of tool calls. A final LLM call is needed in this scenario. (Severity: high)
    • Hardcoded Configuration Values: The max_iterations limit (10) is hardcoded. Making this configurable or a class constant would improve maintainability. (Severity: medium)
    • Magic Strings for Tool Identification: The tool name "sequentialthinking" is used as a magic string. Defining it as a constant would enhance readability and robustness. (Severity: medium)
    • Tool Usage in Final Response (Streaming): When a tool sequence concludes (not due to max_iterations) and streaming is enabled, new tools could still be invoked during the final response generation. This might be unintended. (Severity: medium)
    • Redundant JSON Parsing (Not Commented): Tool call arguments (tool_call.function.arguments) are parsed into JSON twice within the same loop iteration for each tool call. This is a minor inefficiency. (Severity: low - not commented due to review settings)

    Merge Readiness

    The pull request introduces valuable features for sequential tool execution. However, there is a high-severity issue concerning potentially stale responses when max_iterations is reached, which should be addressed before merging. The medium-severity suggestions would also improve the code's robustness and maintainability.

    As an AI, I am not authorized to approve pull requests. Please ensure thorough testing and further review by team members before merging these changes.

    # No tool calls, we're done
    break

    return final_response
    gemini-code-assist bot (Contributor) commented (severity: high):

    If the while loop terminates because iteration_count reaches max_iterations (and should_continue was true in the last iteration), the final_response returned will be from the LLM call before the results of the last set of tool calls were processed by the LLM. The messages list will contain these latest tool results, but the LLM hasn't generated a response based on them.

    This could lead to the agent returning a stale response that doesn't reflect the outcome of the final tool operations.

    Consider adding logic before this return statement to make a final LLM call if the loop exited due to max_iterations and tools were processed in that last iteration. This final call should likely not allow further tool usage (i.e., tools=None).

                # If the loop terminated due to max_iterations and the last LLM call requested tools
                # (implying those tools were run and their results are in 'messages' but not yet processed by LLM for a final response).
                if iteration_count == max_iterations and getattr(final_response.choices[0].message, 'tool_calls', None):
                    logging.warning(f"Agent {self.name} reached max_iterations ({max_iterations}). Generating final response based on accumulated tool results.")
                    if stream:
                        final_response = self._process_stream_response(
                            messages, 
                            temperature, 
                            start_time,
                            formatted_tools=None, # No further tools when max_iterations is hit
                            reasoning_steps=reasoning_steps
                        )
                    else:
                        final_response = client.chat.completions.create(
                            model=self.llm,
                            messages=messages,
                            temperature=temperature,
                            tools=None, # No further tools when max_iterations is hit
                            stream=False
                        )
                return final_response

    "tool_calls": tool_calls
    })
    # Continue tool execution loop until no more tool calls are needed
    max_iterations = 10 # Prevent infinite loops
    gemini-code-assist bot (Contributor) commented (severity: medium):

    The max_iterations value is currently hardcoded as 10. For better maintainability and configurability, would it be beneficial to define this as a class attribute (e.g., self.MAX_TOOL_ITERATIONS) or make it configurable, perhaps through the agent's initialization parameters? This would make it easier to adjust this limit without modifying the method's core logic.

    Suggested change
    max_iterations = 10 # Prevent infinite loops
    max_iterations = self.config.get('max_tool_iterations', 10) # Or use a class constant

    arguments = json.loads(tool_call.function.arguments)

    # For sequential thinking tool, check if nextThoughtNeeded is True
    if function_name == "sequentialthinking" and arguments.get("nextThoughtNeeded", False):
    gemini-code-assist bot (Contributor) commented (severity: medium):

    The string "sequentialthinking" is used directly here to identify the tool. To improve code clarity, reduce the risk of typos, and make future refactoring easier, could this be defined as a named constant at the class or module level (e.g., SEQUENTIAL_THINKING_TOOL_NAME = "sequentialthinking")?

    Suggested change
    if function_name == "sequentialthinking" and arguments.get("nextThoughtNeeded", False):
    if function_name == SEQUENTIAL_THINKING_TOOL_NAME and arguments.get("nextThoughtNeeded", False):

    Comment on lines +857 to +863
    final_response = self._process_stream_response(
        messages,
        temperature,
        start_time,
        formatted_tools=formatted_tools if formatted_tools else None,
        reasoning_steps=reasoning_steps
    )
    gemini-code-assist bot (Contributor) commented (severity: medium):

    When should_continue is False, it implies the tool execution sequence (e.g., sequential thinking) has decided to conclude. However, in the streaming case, formatted_tools are still passed to _process_stream_response. Could this potentially lead to the LLM initiating new, unexpected tool calls even though the sequence was intended to stop? The non-streaming counterpart (lines 865-870) correctly omits tools for what appears to be the final response generation. For consistency and to ensure the sequence truly concludes, perhaps formatted_tools should be None here as well?

    Suggested change
    final_response = self._process_stream_response(
        messages,
        temperature,
        start_time,
        formatted_tools=formatted_tools if formatted_tools else None,
        reasoning_steps=reasoning_steps
    )
    final_response = self._process_stream_response(
        messages,
        temperature,
        start_time,
        formatted_tools=None,  # Explicitly no tools for final response generation
        reasoning_steps=reasoning_steps
    )
