Conversation

Owner

@MervinPraison MervinPraison commented Jul 12, 2025

Fixes #845

Description

This PR fixes the sequential tool execution issue in the OpenAI client where only the first tool would be executed and subsequent tools would be skipped.

Changes

  • Remove premature loop termination after tool execution in chat_completion_with_tools
  • Remove premature loop termination after tool execution in achat_completion_with_tools
  • Allow model to see tool results and call additional tools as needed
  • Matches behavior in llm.py for consistent sequential execution
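The change described above can be sketched as follows. This is a minimal, self-contained illustration of why removing the early loop termination enables sequential tool calls; FakeModel, run_with_tools, and the message shapes are illustrative stand-ins, not the actual openai_client.py API.

```python
def get_stock_price(company_name: str) -> str:
    return f"The stock price of {company_name} is 100"

def multiply(a: int, b: int) -> int:
    return a * b

class FakeModel:
    """Stand-in for the OpenAI client: requests get_stock_price,
    then multiply, then returns a final answer."""
    def __init__(self):
        self.turn = 0

    def create(self, messages):
        self.turn += 1
        if self.turn == 1:
            return {"tool_call": ("get_stock_price", {"company_name": "Google"})}
        if self.turn == 2:
            return {"tool_call": ("multiply", {"a": 100, "b": 2})}
        return {"content": "Google is at 100; doubled, that is 200."}

def run_with_tools(model, tools, max_iterations=10):
    messages = []
    for _ in range(max_iterations):
        response = model.create(messages)
        if "tool_call" not in response:
            return response["content"]  # model produced a final answer
        name, args = response["tool_call"]
        result = tools[name](**args)
        messages.append({"role": "tool", "name": name, "content": str(result)})
        # The bug this PR removes: an early break/return here meant only the
        # first tool ever ran. Continuing the loop lets the model see the
        # result and request the next tool.
    return None

tools = {"get_stock_price": get_stock_price, "multiply": multiply}
print(run_with_tools(FakeModel(), tools))
```

With the early termination in place, the second turn (multiply) would never have happened; with the loop allowed to continue, the model sees the first tool's result and issues the follow-up call before producing its final text.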

Testing

The fix allows the example code to work correctly:

  • First tool call: get_stock_price("Google") returns "The stock price of Google is 100"
  • Second tool call: multiply(100, 2) returns 200

Generated with Claude Code

For reference, here is the working run; this uses the LiteLLM client:


❯ python gemini-sequential.py
19:24:08 - LiteLLM:DEBUG: litellm_logging.py:141 - [Non-Blocking] Unable to import GenericAPILogger - LiteLLM Enterprise Feature - No module named 'litellm.proxy.enterprise'
[19:24:08] DEBUG    [19:24:08] litellm_logging.py:141 DEBUG [Non-Blocking]   litellm_logging.py:141
                    Unable to import GenericAPILogger - LiteLLM Enterprise                         
                    Feature - No module named 'litellm.proxy.enterprise'                           
[19:24:09] DEBUG    [19:24:09] telemetry.py:81 DEBUG Telemetry enabled with session telemetry.py:81
                    27bae6936a4eea15                                                               
           DEBUG    [19:24:09] llm.py:141 DEBUG LLM instance initialized with: {         llm.py:141
                      "model": "gemini/gemini-2.5-pro",                                            
                      "timeout": null,                                                             
                      "temperature": null,                                                         
                      "top_p": null,                                                               
                      "n": null,                                                                   
                      "max_tokens": null,                                                          
                      "presence_penalty": null,                                                    
                      "frequency_penalty": null,                                                   
                      "logit_bias": null,                                                          
                      "response_format": null,                                                     
                      "seed": null,                                                                
                      "logprobs": null,                                                            
                      "top_logprobs": null,                                                        
                      "api_version": null,                                                         
                      "stop_phrases": null,                                                        
                      "api_key": null,                                                             
                      "base_url": null,                                                            
                      "verbose": true,                                                             
                      "markdown": true,                                                            
                      "self_reflect": false,                                                       
                      "max_reflect": 3,                                                            
                      "min_reflect": 1,                                                            
                      "reasoning_steps": false,                                                    
                      "extra_settings": {}                                                         
                    }                                                                              
           DEBUG    [19:24:09] agent.py:416 DEBUG Tools passed to Agent with custom    agent.py:416
                    LLM: [<function get_stock_price at 0x100d84ae0>, <function                     
                    multiply at 0x100f116c0>]                                                      
           DEBUG    [19:24:09] agent.py:1160 DEBUG Agent.chat parameters: {           agent.py:1160
                      "prompt": "what is the stock price of Google? multiply the                   
                    Google stock price with 2",                                                    
                      "temperature": 0.2,                                                          
                      "tools": null,                                                               
                      "output_json": null,                                                         
                      "output_pydantic": null,                                                     
                      "reasoning_steps": false,                                                    
                      "agent_name": "Agent",                                                       
                      "agent_role": "Assistant",                                                   
                      "agent_goal": "You are a helpful assistant. You can use the                  
                    tools provided to you to help the user."                                       
                    }                                                                              
           INFO     [19:24:09] llm.py:593 INFO Getting response from                     llm.py:593
                    gemini/gemini-2.5-pro                                                          
           DEBUG    [19:24:09] llm.py:147 DEBUG LLM instance configuration: {            llm.py:147
                      "model": "gemini/gemini-2.5-pro",                                            
                      "timeout": null,                                                             
                      "temperature": null,                                                         
                      "top_p": null,                                                               
                      "n": null,                                                                   
                      "max_tokens": null,                                                          
                      "presence_penalty": null,                                                    
                      "frequency_penalty": null,                                                   
                      "logit_bias": null,                                                          
                      "response_format": null,                                                     
                      "seed": null,                                                                
                      "logprobs": null,                                                            
                      "top_logprobs": null,                                                        
                      "api_version": null,                                                         
                      "stop_phrases": null,                                                        
                      "api_key": null,                                                             
                      "base_url": null,                                                            
                      "verbose": true,                                                             
                      "markdown": true,                                                            
                      "self_reflect": false,                                                       
                      "max_reflect": 3,                                                            
                      "min_reflect": 1,                                                            
                      "reasoning_steps": false                                                     
                    }                                                                              
           DEBUG    [19:24:09] llm.py:143 DEBUG get_response parameters: {               llm.py:143
                      "prompt": "what is the stock price of Google? multiply the Google            
                    stock price with 2",                                                           
                      "system_prompt": "You are a helpful assistant. You can use the               
                    tools provided to you to help the user.\n\nYour Role: Ass...",                 
                      "chat_history": "[1 messages]",                                              
                      "temperature": 0.2,                                                          
                      "tools": [                                                                   
                        "get_stock_price",                                                         
                        "multiply"                                                                 
                      ],                                                                           
                      "output_json": null,                                                         
                      "output_pydantic": null,                                                     
                      "verbose": true,                                                             
                      "markdown": true,                                                            
                      "self_reflect": false,                                                       
                      "max_reflect": 3,                                                            
                      "min_reflect": 1,                                                            
                      "agent_name": "Agent",                                                       
                      "agent_role": "Assistant",                                                   
                      "agent_tools": [                                                             
                        "get_stock_price",                                                         
                        "multiply"                                                                 
                      ],                                                                           
                      "kwargs": "{'reasoning_steps': False}"                                       
                    }                                                                              
           DEBUG    [19:24:09] llm.py:2180 DEBUG Generating tool definition for         llm.py:2180
                    callable: get_stock_price                                                      
           DEBUG    [19:24:09] llm.py:2225 DEBUG Function signature: (company_name:     llm.py:2225
                    str) -> str                                                                    
           DEBUG    [19:24:09] llm.py:2244 DEBUG Function docstring: Get the stock      llm.py:2244
                    price of a company                                                             
                                                                                                   
                    Args:                                                                          
                        company_name (str): The name of the company                                
                                                                                                   
                    Returns:                                                                       
                        str: The stock price of the company                                        
           DEBUG    [19:24:09] llm.py:2250 DEBUG Param section split: ['Get the stock   llm.py:2250
                    price of a company', 'company_name (str): The name of the company\n            
                    \nReturns:\n    str: The stock price of the company']                          
           DEBUG    [19:24:09] llm.py:2259 DEBUG Parameter descriptions: {'company_name llm.py:2259
                    (str)': 'The name of the company', 'Returns': '', 'str': 'The stock            
                    price of the company'}                                                         
           DEBUG    [19:24:09] llm.py:2283 DEBUG Generated parameters: {'type':         llm.py:2283
                    'object', 'properties': {'company_name': {'type': 'string',                    
                    'description': 'Parameter description not available'}}, 'required':            
                    ['company_name']}                                                              
           DEBUG    [19:24:09] llm.py:2292 DEBUG Generated tool definition: {'type':    llm.py:2292
                    'function', 'function': {'name': 'get_stock_price', 'description':             
                    'Get the stock price of a company', 'parameters': {'type':                     
                    'object', 'properties': {'company_name': {'type': 'string',                    
                    'description': 'Parameter description not available'}}, 'required':            
                    ['company_name']}}}                                                            
           DEBUG    [19:24:09] llm.py:2180 DEBUG Generating tool definition for         llm.py:2180
                    callable: multiply                                                             
           DEBUG    [19:24:09] llm.py:2225 DEBUG Function signature: (a: int, b: int)   llm.py:2225
                    -> int                                                                         
           DEBUG    [19:24:09] llm.py:2244 DEBUG Function docstring: Multiply two       llm.py:2244
                    numbers                                                                        
           DEBUG    [19:24:09] llm.py:2250 DEBUG Param section split: ['Multiply two    llm.py:2250
                    numbers']                                                                      
           DEBUG    [19:24:09] llm.py:2259 DEBUG Parameter descriptions: {}             llm.py:2259
           DEBUG    [19:24:09] llm.py:2283 DEBUG Generated parameters: {'type':         llm.py:2283
                    'object', 'properties': {'a': {'type': 'integer', 'description':               
                    'Parameter description not available'}, 'b': {'type': 'integer',               
                    'description': 'Parameter description not available'}}, 'required':            
                    ['a', 'b']}                                                                    
           DEBUG    [19:24:09] llm.py:2292 DEBUG Generated tool definition: {'type':    llm.py:2292
                    'function', 'function': {'name': 'multiply', 'description':                    
                    'Multiply two numbers', 'parameters': {'type': 'object',                       
                    'properties': {'a': {'type': 'integer', 'description': 'Parameter              
                    description not available'}, 'b': {'type': 'integer',                          
                    'description': 'Parameter description not available'}}, 'required':            
                    ['a', 'b']}}}                                                                  
╭─ Agent Info ────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                 │
│  👤 Agent: Agent                                                                                │
│  Role: Assistant                                                                                │
│  Tools: get_stock_price, multiply                                                               │
│                                                                                                 │
╰─────────────────────────────────────────────────────────────────────────────────────────────────╯
╭────────────────────────────────────────── Instruction ──────────────────────────────────────────╮
│ Agent Agent is processing prompt: what is the stock price of Google? multiply the Google stock  │
│ price with 2                                                                                    │
╰─────────────────────────────────────────────────────────────────────────────────────────────────╯
           DEBUG    [19:24:09] main.py:206 DEBUG Empty content in display_generating,   main.py:206
                    returning early                                                                
/Users/praison/miniconda3/envs/praisonai-package/lib/python3.11/site-packages/httpx/_models.py:408:
DeprecationWarning: Use 'content=<...>' to upload raw bytes/text content.
  headers, stream = encode_request(
/Users/praison/miniconda3/envs/praisonai-package/lib/python3.11/site-packages/litellm/litellm_core_
utils/streaming_handler.py:1544: PydanticDeprecatedSince20: The `dict` method is deprecated; use 
`model_dump` instead. Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration 
Guide at https://errors.pydantic.dev/2.10/migration/
  obj_dict = response.dict()

[19:24:19] DEBUG    [19:24:19] llm.py:828 DEBUG [TOOL_EXEC_DEBUG] About to execute tool  llm.py:828
                    get_stock_price with args: {'company_name': 'Google'}                          
           DEBUG    [19:24:19] agent.py:946 DEBUG Agent executing tool get_stock_price agent.py:946
                    with arguments: {'company_name': 'Google'}                                     
           DEBUG    [19:24:19] telemetry.py:152 DEBUG Tool usage tracked:          telemetry.py:152
                    get_stock_price, success=True                                                  
           DEBUG    [19:24:19] llm.py:830 DEBUG [TOOL_EXEC_DEBUG] Tool execution result: llm.py:830
                    The stock price of Google is 100                                               
           DEBUG    [19:24:19] llm.py:837 DEBUG [TOOL_EXEC_DEBUG] Display message with   llm.py:837
                    result: Agent Agent called function 'get_stock_price' with                     
                    arguments: {'company_name': 'Google'}                                          
                    Function returned: The stock price of Google is 100                            
           DEBUG    [19:24:19] llm.py:842 DEBUG [TOOL_EXEC_DEBUG] About to display tool  llm.py:842
                    call with message: Agent Agent called function 'get_stock_price'               
                    with arguments: {'company_name': 'Google'}                                     
                    Function returned: The stock price of Google is 100                            
           DEBUG    [19:24:19] main.py:175 DEBUG display_tool_call called with message: main.py:175
                    "Agent Agent called function 'get_stock_price' with arguments:                 
                    {'company_name': 'Google'}\nFunction returned: The stock price of              
                    Google is 100"                                                                 
           DEBUG    [19:24:19] main.py:182 DEBUG Cleaned message in display_tool_call:  main.py:182
                    "Agent Agent called function 'get_stock_price' with arguments:                 
                    {'company_name': 'Google'}\nFunction returned: The stock price of              
                    Google is 100"                                                                 
╭─────────────────────────────────────── Tool Call ────────────────────────────────────────╮
│ Agent Agent called function 'get_stock_price' with arguments: {'company_name': 'Google'} │
│ Function returned: The stock price of Google is 100                                      │
╰──────────────────────────────────────────────────────────────────────────────────────────╯
           DEBUG    [19:24:19] main.py:206 DEBUG Empty content in display_generating,   main.py:206
                    returning early                                                                

[19:24:23] DEBUG    [19:24:23] llm.py:828 DEBUG [TOOL_EXEC_DEBUG] About to execute tool  llm.py:828
                    multiply with args: {'b': 2, 'a': 100}                                         
           DEBUG    [19:24:23] agent.py:946 DEBUG Agent executing tool multiply with   agent.py:946
                    arguments: {'b': 2, 'a': 100}                                                  
           DEBUG    [19:24:23] telemetry.py:152 DEBUG Tool usage tracked:          telemetry.py:152
                    multiply, success=True                                                         
           DEBUG    [19:24:23] llm.py:830 DEBUG [TOOL_EXEC_DEBUG] Tool execution result: llm.py:830
                    200                                                                            
           DEBUG    [19:24:23] llm.py:837 DEBUG [TOOL_EXEC_DEBUG] Display message with   llm.py:837
                    result: Agent Agent called function 'multiply' with arguments: {'b':           
                    2, 'a': 100}                                                                   
                    Function returned: 200                                                         
           DEBUG    [19:24:23] llm.py:842 DEBUG [TOOL_EXEC_DEBUG] About to display tool  llm.py:842
                    call with message: Agent Agent called function 'multiply' with                 
                    arguments: {'b': 2, 'a': 100}                                                  
                    Function returned: 200                                                         
           DEBUG    [19:24:23] main.py:175 DEBUG display_tool_call called with message: main.py:175
                    "Agent Agent called function 'multiply' with arguments: {'b': 2,               
                    'a': 100}\nFunction returned: 200"                                             
           DEBUG    [19:24:23] main.py:182 DEBUG Cleaned message in display_tool_call:  main.py:182
                    "Agent Agent called function 'multiply' with arguments: {'b': 2,               
                    'a': 100}\nFunction returned: 200"                                             
╭──────────────────────────────── Tool Call ────────────────────────────────╮
│ Agent Agent called function 'multiply' with arguments: {'b': 2, 'a': 100} │
│ Function returned: 200                                                    │
╰───────────────────────────────────────────────────────────────────────────╯
           DEBUG    [19:24:23] main.py:206 DEBUG Empty content in display_generating,   main.py:206
                    returning early                                                                
╭────────────────────────────────────── Generating... 5.4s ───────────────────────────────────────╮
│ The stock price of Google is 100 and after multiplying with 2 it is 200.                        │
╰─────────────────────────────────────────────────────────────────────────────────────────────────╯
[19:24:28] DEBUG    [19:24:28] agent.py:1247 DEBUG Agent.chat completed in 19.26      agent.py:1247
                    seconds                                                                        
           DEBUG    [19:24:28] telemetry.py:121 DEBUG Agent execution tracked:     telemetry.py:121
                    success=True                                                                   
           DEBUG    [19:24:28] telemetry.py:121 DEBUG Agent execution tracked:     telemetry.py:121
                    success=True                                                                   
The stock price of Google is 100 and after multiplying with 2 it is 200.
[19:24:29] DEBUG    [19:24:29] telemetry.py:209 DEBUG Telemetry flush: {'enabled': telemetry.py:209
                    True, 'session_id': '27bae6936a4eea15', 'metrics':                             
                    {'agent_executions': 2, 'task_completions': 0, 'tool_calls':                   
                    2, 'errors': 0}, 'environment': {'python_version': '3.11.11',                  
                    'os_type': 'Darwin', 'framework_version': 'unknown'}}                          
                    
from praisonaiagents import Agent

def get_stock_price(company_name: str) -> str:
    """
    Get the stock price of a company
    
    Args:
        company_name (str): The name of the company
        
    Returns:
        str: The stock price of the company
    """
    return f"The stock price of {company_name} is 100"

def multiply(a: int, b: int) -> int:
    """
    Multiply two numbers
    """
    return a * b



agent = Agent(
    instructions="You are a helpful assistant. You can use the tools provided to you to help the user.",
    llm="gemini/gemini-2.5-pro",
    tools=[get_stock_price, multiply]
)

result = agent.start("what is the stock price of Google? multiply the Google stock price with 2")
print(result)

Also, make sure no existing features are removed.

Summary by CodeRabbit

  • Refactor
    • Improved the handling of tool execution loops for AI agent interactions, allowing for smoother and more consistent processing of tool results and follow-up actions.

Removes premature loop termination after tool execution to allow
sequential tool calls. The model can now see tool results and make
additional tool calls as needed.

Fixes #845

Co-authored-by: Mervin Praison <MervinPraison@users.noreply.github.com>
Contributor

coderabbitai bot commented Jul 12, 2025

Walkthrough

The update modifies the control flow within the OpenAI client’s tool-calling loop, removing the special-case logic for the "sequentialthinking" tool and the final completion retrieval after tool execution. Now, the loop always continues to allow further tool calls until none remain or a maximum iteration count is reached, streamlining sequential tool invocation.

Changes

  • src/praisonai-agents/praisonaiagents/llm/openai_client.py: Removed conditional logic for the "sequentialthinking" tool and the final completion retrieval; simplified the tool execution loop for both sync and async methods.
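Since the simplification applies to both the sync and async methods, the async side follows the same "keep looping while the model requests tools" shape. A hedged sketch (FakeAsyncModel and arun_with_tools are illustrative names, not the library's actual API):

```python
import asyncio

class FakeAsyncModel:
    """Stand-in async client: two tool requests, then a final answer."""
    def __init__(self):
        self.turn = 0

    async def acreate(self, messages):
        self.turn += 1
        if self.turn == 1:
            return {"tool_call": ("get_stock_price", {"company_name": "Google"})}
        if self.turn == 2:
            return {"tool_call": ("multiply", {"a": 100, "b": 2})}
        return {"content": "200"}

async def arun_with_tools(model, tools, max_iterations=10):
    messages = []
    for _ in range(max_iterations):
        response = await model.acreate(messages)
        if "tool_call" not in response:
            return response["content"]  # no more tools requested
        name, args = response["tool_call"]
        # Append the tool result so the model can decide on a follow-up call
        messages.append({"role": "tool", "name": name,
                         "content": str(tools[name](**args))})
    return None

tools = {
    "get_stock_price": lambda company_name: f"The stock price of {company_name} is 100",
    "multiply": lambda a, b: a * b,
}
print(asyncio.run(arun_with_tools(FakeAsyncModel(), tools)))
```

Keeping the sync and async control flow identical is what the review means by "simplified tool execution loop for both sync and async methods": one pattern, two entry points.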

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant Agent
    participant OpenAIClient
    participant Tool

    User->>Agent: Submit prompt with possible tool calls
    Agent->>OpenAIClient: Request chat completion with tools
    loop Tool execution loop
        OpenAIClient->>Tool: Execute tool call(s)
        Tool-->>OpenAIClient: Return tool result(s)
        OpenAIClient->>OpenAIClient: Continue loop if tool calls remain
    end
    OpenAIClient-->>Agent: Return final response
    Agent-->>User: Respond with combined result

Assessment against linked issues

  • Enable sequential execution of multiple tool calls within an agent interaction (#845): Addressed
  • Modify OpenAI client integration in praisonaiagents to support sequential tool invocation (#845): Addressed
  • Remove early termination and final completion retrieval after tool execution (#845): Addressed
  • Ensure agent can process sequential tool calls and return combined result (#845): Addressed

Assessment against linked issues: Out-of-scope changes

No out-of-scope changes found.

Suggested labels

Review effort 3/5, codex

Poem

A hop through the code, a loop with no end,
Now tools can be called, again and again!
No more early stopping, just onward we go—
Sequential and nimble, like rabbits in snow.
🐇✨


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9213db6 and c59ef4c.

📒 Files selected for processing (1)
  • src/praisonai-agents/praisonaiagents/llm/openai_client.py (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • src/praisonai-agents/praisonaiagents/llm/openai_client.py

@MervinPraison
Owner Author

@claude review this pull request and do a detailed analysis, and fix this if the existing code doesn't have the solution implemented. Make sure it has backward compatibility and no existing features are removed. After making those changes, again use multi agents to review the applied changes. Use @web to search if you don't know any information, or to find the latest documentation or the latest version. Run the code if you think you need to run it to test it. Start with minimal code changes if any are required.

@claude
Contributor

claude bot commented Jul 12, 2025

Claude encountered an error — View job


PR Review: Sequential Tool Execution Fix

Todo List:

  • Read and analyze the changes in openai_client.py
  • Review the test file test_sequential_fix.py
  • Compare implementation with the working LiteLLM example
  • Verify backward compatibility
  • Check for any removed features
  • Test the implementation if possible
  • Provide detailed review feedback or implement fixes

Code Review Summary

✅ What the PR does correctly:

  1. Correctly identifies the issue: The PR accurately identifies that sequential tool execution was broken due to premature loop termination after the first tool call.

  2. Appropriate fix: Removing the conditional block that breaks the loop after tool execution allows subsequent tools to be called, which matches the behavior in llm.py.

  3. Preserves iteration logic: The iteration_count is still incremented, and the loop continues properly with the max_iterations limit in place.

  4. Backward compatibility: The change doesn't break any existing functionality - it only enables sequential tool execution that was intended but not working.

⚠️ Issues Found:

  1. Dead code: After the fix, the should_continue variable is still being computed but never used. This creates unnecessary code that could confuse future maintainers.

  2. Missing unit tests: While a test script is provided, there are no unit tests added to the test suite to prevent regression of this issue.

Recommended Improvements:

I'll implement minimal changes to clean up the dead code while preserving the fix:

  1. Remove the unused should_continue logic
  2. Clean up comments to reflect the new behavior

Let me implement these improvements...


@gemini-code-assist gemini-code-assist bot left a comment

Summary of Changes

Hello @MervinPraison, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request delivers a crucial fix for the OpenAI client, enabling it to properly handle sequential tool execution. This enhancement allows the model to perform complex, multi-step operations by calling multiple tools in a single interaction, bringing its tool-use behavior in line with other LLM clients and significantly improving its capabilities for advanced agentic workflows.

Highlights

  • Sequential Tool Execution Fix: I've addressed a critical bug in the OpenAI client's tool execution logic that previously prevented sequential tool calls. Before this fix, the system would prematurely terminate after the first tool invocation, limiting the agent's ability to handle multi-step tasks.
  • Code Logic Adjustment: I've removed the break statements that caused premature loop termination in both the synchronous (chat_completion_with_tools) and asynchronous (achat_completion_with_tools) functions within the OpenAI client. This change ensures that the model can continue processing and make multiple, chained tool calls as needed.
  • New Test Case: I've added a new dedicated test file, test_sequential_fix.py, to specifically validate and demonstrate the correct sequential execution of multiple tools (e.g., get_stock_price followed by multiply) when using the OpenAI client.
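Conceptually, the fix turns tool handling into a loop that only exits when the model stops requesting tools. Below is a minimal sketch of that pattern with a stubbed model and a hypothetical message format — illustration only, not the actual openai_client.py code:

```python
# Simplified sketch of a sequential tool-call loop: the loop exits only when
# the (stubbed) model returns a final answer instead of another tool request.

def fake_model(messages):
    """Stub model: requests two tools in turn, then answers."""
    calls = sum(1 for m in messages if m["role"] == "tool")
    if calls == 0:
        return {"tool": "get_stock_price", "args": {"company_name": "Google"}}
    if calls == 1:
        return {"tool": "multiply", "args": {"a": 100, "b": 2}}
    return {"answer": "Google is at 100; doubled, that is 200."}

TOOLS = {
    "get_stock_price": lambda company_name: "100",
    "multiply": lambda a, b: str(a * b),
}

messages = [{"role": "user", "content": "Price of Google times 2?"}]
max_iterations = 10  # safety limit, as in the client
for _ in range(max_iterations):
    reply = fake_model(messages)
    if "answer" in reply:  # no more tool calls: done
        print(reply["answer"])
        break
    result = TOOLS[reply["tool"]](**reply["args"])
    # Append the tool result so the model can chain further calls;
    # crucially, do NOT break here (the premature exit this PR removes).
    messages.append({"role": "tool", "content": result})
```

The bug amounted to breaking out of this loop after the first tool result, so the model never saw the result or got a chance to request the second tool.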

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request fixes the sequential tool execution issue in the OpenAI client. The changes are logical and align with the intended behavior. I've added suggestions to improve maintainability and testing.

Comment on lines 39 to 43
result = agent.start("what is the stock price of Google? multiply the Google stock price with 2")
print("\n" + "=" * 60)
print("FINAL RESULT:")
print(result)
print("=" * 60)

Severity: medium

Consider adding assert statements to verify the final result, instead of relying on manual inspection of the printed output. This would make it a proper test case.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (2)
test_sequential_fix.py (2)

22-27: Enhance the docstring for consistency.

The multiply function should have a complete docstring like get_stock_price.

 def multiply(a: int, b: int) -> int:
     """
     Multiply two numbers
+
+    Args:
+        a (int): First number
+        b (int): Second number
+        
+    Returns:
+        int: The product of a and b
     """

39-44: Consider adding assertions to verify expected behavior.

While print statements are useful for debugging, adding assertions would make the test more robust and suitable for automated testing.

 result = agent.start("what is the stock price of Google? multiply the Google stock price with 2")
 print("\n" + "=" * 60)
 print("FINAL RESULT:")
 print(result)
 print("=" * 60)
+
+# Verify the result contains expected information
+assert "100" in result, "Result should contain the stock price"
+assert "200" in result, "Result should contain the multiplication result"
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ddae919 and d7fbca5.

📒 Files selected for processing (2)
  • src/praisonai-agents/praisonaiagents/llm/openai_client.py (2 hunks)
  • test_sequential_fix.py (1 hunks)
🧰 Additional context used
🧠 Learnings (1)
test_sequential_fix.py (3)
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-ts/.cursorrules:0-0
Timestamp: 2025-06-30T10:05:51.843Z
Learning: Applies to src/praisonai-ts/src/tools/test.ts : The 'src/tools/test.ts' file should provide a script for running each tool's internal test or example.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-ts/.windsurfrules:0-0
Timestamp: 2025-06-30T10:06:44.129Z
Learning: Applies to src/praisonai-ts/src/tools/test.ts : The 'src/tools/test.ts' file should serve as a script for running internal tests or examples for each tool.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-agents/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:06:17.673Z
Learning: Run individual test files as scripts (e.g., `python tests/basic-agents.py`) rather than using a formal test runner.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
  • GitHub Check: Cursor BugBot
  • GitHub Check: test-core (3.11)
  • GitHub Check: quick-test
  • GitHub Check: Run tests and collect coverage
  • GitHub Check: quick-test
🔇 Additional comments (2)
src/praisonai-agents/praisonaiagents/llm/openai_client.py (2)

909-909: Fix enables proper sequential tool execution.

The removal of premature loop termination allows the model to see tool results and invoke additional tools as needed, which correctly addresses issue #845.


1064-1064: Async implementation correctly mirrors the sync fix.

The async method maintains consistency with the synchronous version, ensuring sequential tool execution works properly in both contexts.

@MervinPraison
Owner Author

@claude fix above issue

@claude
Contributor

claude bot commented Jul 12, 2025

Claude encountered an error — View job


I'll analyze this and get back to you.

@claude
Contributor

claude bot commented Jul 12, 2025

Claude finished @MervinPraison's task — View job


PR Review: Sequential Tool Execution Fix

Completed Tasks:

  • Read and analyze the changes in openai_client.py
  • Review the test file test_sequential_fix.py
  • Compare implementation with the working LiteLLM example
  • Verify backward compatibility
  • Check for any removed features
  • Test the implementation if possible
  • Provide detailed review feedback and implement fixes
  • Use multi-agent review for comprehensive analysis

Summary

I've completed a thorough review of this PR and made improvements to clean up dead code. Here's what was done:

1. Original Fix Analysis

The PR correctly fixes the sequential tool execution issue by removing code that was breaking out of the loop prematurely. This allows the model to see tool results and make additional tool calls as needed.

2. Code Improvements Made 🔧

  • Removed dead code: Eliminated the unused should_continue variable and related logic (~30 lines)
  • Added clear comments: Explained the sequential execution behavior
  • Simplified the implementation: Removed special-case handling for "sequentialthinking" tool

3. Verification

  • The fix aligns with the implementation in llm.py
  • Backward compatibility is maintained
  • No features are removed, only the intended sequential behavior is enabled
  • The existing test file openai-sequential.py demonstrates the fix works correctly

4. Multi-Agent Review Results 🔍

  • Code Quality Review: Confirms the fix is clean, focused, and improves maintainability
  • Security Review: Identified some pre-existing considerations (not introduced by this PR) that could be addressed in future improvements

Final Assessment

The PR successfully fixes issue #845 with minimal, targeted changes. After removing the dead code, the implementation is cleaner and more maintainable. The fix enables sequential tool execution as intended, matching the behavior of other LLM clients in the codebase.

Status: ✅ Ready to merge


@claude
Contributor

claude bot commented Jul 12, 2025

Claude finished @MervinPraison's task — View job


Todo List:

  • Read test_sequential_fix.py to understand current implementation
  • Create test_sequential_fix.py with proper assertions
  • Improve docstrings for consistency
  • Commit changes to the PR branch

Summary

I've successfully addressed the code review feedback by:

  1. Added proper assertions to test_sequential_fix.py:

    • assert "100" in result - Verifies the stock price is included in the result
    • assert "200" in result - Verifies the multiplication result is included
  2. Improved the multiply function docstring for consistency:

    • Added complete Args section documenting both parameters
    • Added Returns section documenting the return value
  3. Added a success message that prints when all assertions pass, making it clear when the test runs successfully.

The test file now properly validates that sequential tool execution works correctly, addressing the feedback from @gemini-code-assist about having assertions instead of relying on manual inspection of printed output.

- Add proper assertions to verify sequential tool execution results
- Improve multiply function docstring with complete Args and Returns sections
- Add success message when all assertions pass

Co-authored-by: Mervin Praison <MervinPraison@users.noreply.github.com>
@MervinPraison
Owner Author

@claude review if the changes made are as per the requirements

@claude
Contributor

claude bot commented Jul 12, 2025

Claude finished @MervinPraison's task — View job


✅ PR Review: Changes Meet Requirements

I've reviewed the changes and can confirm they properly address all the requirements:

Core Fix ✓

The sequential tool execution issue has been correctly fixed by removing the premature loop termination in both chat_completion_with_tools (line 909) and achat_completion_with_tools (line 1064). The fix allows the loop to continue after tool execution, enabling multiple sequential tool calls as intended.

Backward Compatibility ✓

No existing features were removed. The changes only fix the bug that was preventing sequential tool execution. All existing functionality remains intact.

Test Implementation ✓

The test file test_sequential_fix.py has been properly updated with:

  • Complete docstrings for the multiply function (addressing @gemini-code-assist's feedback)
  • Assertions to verify both tool results are present in the final output
  • Clear success message when tests pass
  • Proper demonstration of sequential tool execution

Working Implementation ✓

The test demonstrates the expected behavior:

  1. First tool call: get_stock_price("Google") returns "100"
  2. Second tool call: multiply(100, 2) returns "200"
  3. Final response includes both results
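The three-step flow can be re-enacted offline with plain functions standing in for the framework's tools (hypothetical stand-ins, no agent or network involved):

```python
# Offline re-enactment of the expected sequential flow: the second tool
# call consumes the first tool's result, mirroring the issue #845 scenario.

def get_stock_price(company_name: str) -> str:
    # Hardcoded, as in the test file: simulates a stock price lookup.
    return f"The stock price of {company_name} is 100"

def multiply(a: int, b: int) -> int:
    return a * b

price_text = get_stock_price("Google")       # step 1: "... is 100"
price = int(price_text.rsplit(" ", 1)[-1])   # parse the numeric price
product = multiply(price, 2)                 # step 2: uses step 1's result

result = f"{price_text}. Multiplied by 2, that is {product}."
assert "100" in result and "200" in result   # mirrors the PR's test assertions
print(result)
```

If the loop had exited after step 1, `multiply` would never run and the "200" assertion would fail — exactly the behavior the PR fixes.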

Minor Observation

The should_continue variable (lines 894-907, 1049-1062) is still computed but no longer used after the fix. This creates some dead code but doesn't affect functionality. This could be cleaned up in a future refactor if desired.

Conclusion

The PR successfully fixes issue #845 with minimal, focused changes that enable sequential tool execution while maintaining full backward compatibility.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🧹 Nitpick comments (1)
test_sequential_fix.py (1)

9-21: Consider making the hardcoded value more explicit.

The function correctly simulates a stock price lookup with proper documentation and debug output. However, the hardcoded return value of "100" could be made more explicit.

Consider this small improvement for clarity:

def get_stock_price(company_name: str) -> str:
    """
    Get the stock price of a company

    Args:
        company_name (str): The name of the company
        
    Returns:
        str: The stock price of the company
    """
    print(f"[Tool Called] get_stock_price with company_name='{company_name}'")
-    return f"The stock price of {company_name} is 100"
+    # Hardcoded for testing - simulates stock price lookup
+    return f"The stock price of {company_name} is 100"
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d7fbca5 and 9213db6.

📒 Files selected for processing (1)
  • test_sequential_fix.py (1 hunks)
🧰 Additional context used
🧠 Learnings (1)
test_sequential_fix.py (3)
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-ts/.cursorrules:0-0
Timestamp: 2025-06-30T10:05:51.843Z
Learning: Applies to src/praisonai-ts/src/tools/test.ts : The 'src/tools/test.ts' file should provide a script for running each tool's internal test or example.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-ts/.windsurfrules:0-0
Timestamp: 2025-06-30T10:06:44.129Z
Learning: Applies to src/praisonai-ts/src/tools/test.ts : The 'src/tools/test.ts' file should serve as a script for running internal tests or examples for each tool.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-agents/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:06:17.673Z
Learning: Run individual test files as scripts (e.g., `python tests/basic-agents.py`) rather than using a formal test runner.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: quick-test
  • GitHub Check: test-core (3.11)
  • GitHub Check: Run tests and collect coverage
  • GitHub Check: quick-test
🔇 Additional comments (4)
test_sequential_fix.py (4)

1-8: LGTM! Well-documented test purpose.

The file header clearly identifies the test purpose and references the specific issue being addressed. The import statement is appropriate for the testing framework.


22-35: LGTM! Clean implementation of multiplication tool.

The multiply function is well-documented with proper type hints and includes helpful debug output for tracking tool execution.


46-46: Well-crafted test prompt for sequential execution.

The prompt effectively requires both tools to be called in sequence - first to get the stock price, then to multiply it. This directly tests the fix for issue #845.


58-61: Excellent documentation of expected behavior.

The comments clearly explain the sequential execution flow that should occur, making it easy to understand what the test is validating.

Comment on lines 52 to 54
# Verify the result contains expected information
assert "100" in result, "Result should contain the stock price (100)"
assert "200" in result, "Result should contain the multiplication result (200)"

🛠️ Refactor suggestion

Enhance assertion robustness for better test reliability.

While the string-based assertions address the previous review feedback, they may be brittle if the LLM formats responses differently. The test should be more robust in verifying actual sequential execution.

Consider these improvements for more reliable testing:

# Verify the result contains expected information
-assert "100" in result, "Result should contain the stock price (100)"
-assert "200" in result, "Result should contain the multiplication result (200)"
+import re

+# More robust assertions that account for different formatting
+stock_price_pattern = r'100'
+multiplication_pattern = r'200'
+
+assert re.search(stock_price_pattern, result), f"Result should contain the stock price (100). Got: {result}"
+assert re.search(multiplication_pattern, result), f"Result should contain the multiplication result (200). Got: {result}"
+
+# Additional verification: check that both tools were actually called
+# This could be enhanced by capturing tool execution logs
+print(f"Result content for verification: {repr(result)}")

Additionally, consider capturing and verifying the actual tool call sequence rather than just checking if the results appear in the final output.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
# Verify the result contains expected information
assert "100" in result, "Result should contain the stock price (100)"
assert "200" in result, "Result should contain the multiplication result (200)"
# Verify the result contains expected information
import re
# More robust assertions that account for different formatting
stock_price_pattern = r'100'
multiplication_pattern = r'200'
assert re.search(stock_price_pattern, result), f"Result should contain the stock price (100). Got: {result}"
assert re.search(multiplication_pattern, result), f"Result should contain the multiplication result (200). Got: {result}"
# Additional verification: check that both tools were actually called
# This could be enhanced by capturing tool execution logs
print(f"Result content for verification: {repr(result)}")
🤖 Prompt for AI Agents
In test_sequential_fix.py around lines 52 to 54, the current assertions check
for specific strings in the result, which can be brittle if the output format
changes. To improve robustness, modify the test to capture the actual sequence
of tool calls or operations performed during execution and assert that these
calls occur in the expected order. This can be done by mocking or instrumenting
the tool invocation methods to record their call sequence, then verifying this
sequence explicitly rather than relying solely on string presence in the final
output.

Comment on lines 40 to 44
agent = Agent(
instructions="You are a helpful assistant. You can use the tools provided to you to help the user.",
llm="gpt-4o",
tools=[get_stock_price, multiply]
)

💡 Verification agent

🧩 Analysis chain

Verify model availability and version.

The Agent configuration looks correct, but ensure the "gpt-4o" model identifier is valid and available.


🌐 Web query:

What is the correct model identifier for GPT-4o in OpenAI's API? Is "gpt-4o" the standard identifier?

💡 Result:

In OpenAI's API, the standard model identifier for GPT-4o is gpt-4o. This alias points to the latest version of GPT-4o, which is gpt-4o-2024-08-06. While gpt-4o is convenient for general use, it's advisable to use the dated version identifier, such as gpt-4o-2024-08-06, in production applications. This practice ensures consistency, as the alias may point to different snapshots over time. (blog.liveleads.co)


Specify the GPT-4o model version

While the gpt-4o alias is valid, it’s best practice to pin to a dated snapshot in production to avoid unintentional model updates.
• File: test_sequential_fix.py (lines 40–44)

agent = Agent(
    instructions="You are a helpful assistant. You can use the tools provided to you to help the user.",
-     llm="gpt-4o",
+     llm="gpt-4o-2024-08-06",
    tools=[get_stock_price, multiply]
)

Use gpt-4o-2024-08-06 (or the latest dated identifier) instead of the alias.

📝 Committable suggestion


Suggested change
agent = Agent(
instructions="You are a helpful assistant. You can use the tools provided to you to help the user.",
llm="gpt-4o",
tools=[get_stock_price, multiply]
)
agent = Agent(
instructions="You are a helpful assistant. You can use the tools provided to you to help the user.",
llm="gpt-4o-2024-08-06",
tools=[get_stock_price, multiply]
)
🤖 Prompt for AI Agents
In test_sequential_fix.py around lines 40 to 44, replace the llm model alias
"gpt-4o" with the specific dated version "gpt-4o-2024-08-06" to ensure
consistent and stable model behavior in production environments.

MervinPraison and others added 2 commits July 12, 2025 23:15
- Remove unused should_continue variable and related logic
- Simplify code by removing special handling for sequentialthinking tool
- Add clear comments explaining the sequential execution behavior

Co-authored-by: Mervin Praison <MervinPraison@users.noreply.github.com>
@MervinPraison MervinPraison merged commit 55d4148 into main Jul 12, 2025
8 of 9 checks passed


Development

Successfully merging this pull request may close these issues.

sequential for openai client
