-
-
Notifications
You must be signed in to change notification settings - Fork 743
Revert "PR #1002: Changes from Claude" #1006
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
Conversation
|
Caution Review failedThe pull request is closed. WalkthroughThe Changes
Sequence Diagram(s)sequenceDiagram
participant User
participant Agent
participant Telemetry
User->>Agent: chat(prompt, ...)
alt Custom LLM
Agent->>Agent: Process prompt (custom LLM branch)
Agent->>Telemetry: _cleanup_telemetry()
Agent-->>User: Return response
else Non-custom LLM
loop Reflection (if required)
Agent->>Agent: Generate/reflect response
Agent->>Agent: Validate guardrails
alt Guardrail failure or max reflections
Agent->>Telemetry: _cleanup_telemetry()
Agent-->>User: Return failure or final response
end
end
Agent->>Telemetry: _cleanup_telemetry()
Agent-->>User: Return response
end
Note right of Agent: Telemetry cleanup is called on all exit paths
Possibly related PRs
Suggested labels
Poem
📜 Recent review detailsConfiguration used: CodeRabbit UI 📒 Files selected for processing (5)
✨ Finishing Touches
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out. 🪧 TipsChatThere are 3 ways to chat with CodeRabbit:
SupportNeed help? Create a ticket on our support page for assistance with any issues or questions. Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments. CodeRabbit Commands (Invoked using PR comments)
Other keywords and placeholders
CodeRabbit Configuration File (
|
|
@claude review this pull request and do a detailed analysis and fix this if the existing code doesn't have the solution implemented. Making sure it has backward compatibility, no existing features removed. After making those changes, again review the applied changes. Use @web to search if you dont know any information or to find the latest documentation or to find the latest version. Run the code if you think you need to run it to test it. Minimal code change to start with if required any changes. |
|
Claude finished @MervinPraison's task —— View job • PR #1006 Analysis and Fix ✅Analysis Complete - Fix Implemented and Deployed Todo List:
Problem Analysis: The revert (PR #1006) removed essential telemetry cleanup functionality that was preventing agent termination hangs. The original code had 8 scattered Solution Implemented: ✅ Added try...finally pattern around the entire chat method body Testing Results: ✅ Agent Creation: Works correctly Changes Deployed: Commit: SummaryThe telemetry cleanup fix has been successfully implemented and deployed. The solution addresses the root cause of agent termination hangs by ensuring telemetry cleanup happens exactly once, regardless of how the chat method exits (success, failure, or exception). All existing functionality is preserved with improved reliability. |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Summary of Changes
Hello @MervinPraison, I'm Gemini Code Assist1! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request fully reverts a previous set of changes (PR #1002 / #1005) that aimed to modify the telemetry cleanup strategy within the agent's chat functionalities. The reversion restores the codebase to its state before those changes were applied, including the removal of specific test files that were part of the original modification.
Highlights
- Revert Telemetry Cleanup Refactor: This pull request reverts a previous attempt to refactor telemetry cleanup within the
chatmethod. It removes a globaltry...finallyblock that was intended for cleanup and re-introduces explicit calls to_cleanup_telemetry()in various success and error handling paths for both custom and non-custom LLM execution. - Revert Asynchronous Cleanup Changes: Changes to the
achat(asynchronous chat) method are undone by removing explicit_cleanup_telemetry()calls that were previously added. This restores the asynchronous method's telemetry cleanup behavior to its state before the reverted pull request. - Removal of Associated Tests: Four test files (
test_refactor_validation.py,test_telemetry_cleanup_fix.py,test_telemetry_cleanup_simple.py,test_thread_cleanup.py) that were introduced to validate the telemetry cleanup and refactoring changes are now deleted as part of this revert.
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point in your pull request via creating an issue comment (i.e. comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.
| Feature | Command | Description |
|---|---|---|
| Code Review | /gemini review |
Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary |
Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | /gemini help |
Displays a list of available commands. |
Customization
To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counter productive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.
You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
Footnotes
-
Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution. ↩
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Code Review
This pull request reverts a previous change that refactored telemetry cleanup using a try...finally block. While I understand there might be a reason for the revert, this change re-introduces critical bugs and is a step back in terms of code quality and robustness.
The try...finally pattern is the standard and safest way to ensure cleanup code is executed. The new approach of manually adding _cleanup_telemetry() calls at every exit point is error-prone and, as my review shows, has already missed several exit paths in the chat method. This will likely lead to resource leaks (e.g., hanging telemetry threads).
Furthermore, this PR removes several test files (test_telemetry_cleanup_fix.py, test_thread_cleanup.py, etc.) that were specifically designed to catch these kinds of thread leak issues. Removing tests along with a revert is a significant red flag and undermines the stability of the codebase.
I strongly recommend reconsidering this revert. If the original change caused issues, it would be better to fix those issues rather than reverting to a less safe and more buggy implementation. At a minimum, the newly introduced bugs in the chat and achat methods must be fixed.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Bug: Achat Method Missing Telemetry Cleanup
The achat method is missing self._cleanup_telemetry() calls on multiple return paths, including successful completions, guardrail failures, and tool/JSON/Pydantic outputs. This omission, caused by a revert that removed previous cleanup mechanisms without replacement, can lead to telemetry resource leaks and hanging processes. This behavior is inconsistent with the chat method, which now includes explicit cleanup on all its return paths.
src/praisonai-agents/praisonaiagents/agent/agent.py#L1648-L1743
PraisonAI/src/praisonai-agents/praisonaiagents/agent/agent.py
Lines 1648 to 1743 in 404bef3
| self._execute_callback_and_display(normalized_content, validated_response, time.time() - start_time, task_name, task_description, task_id) | |
| return validated_response | |
| except Exception as e: | |
| logging.error(f"Agent {self.name}: Guardrail validation failed for custom LLM: {e}") | |
| # Rollback chat history on guardrail failure | |
| self.chat_history = self.chat_history[:chat_history_length] | |
| return None | |
| except Exception as e: | |
| # Rollback chat history if LLM call fails | |
| self.chat_history = self.chat_history[:chat_history_length] | |
| display_error(f"Error in LLM chat: {e}") | |
| if logging.getLogger().getEffectiveLevel() == logging.DEBUG: | |
| total_time = time.time() - start_time | |
| logging.debug(f"Agent.achat failed in {total_time:.2f} seconds: {str(e)}") | |
| # Ensure proper cleanup of telemetry system to prevent hanging | |
| self._cleanup_telemetry() | |
| return None | |
| # For OpenAI client | |
| # Use the new _build_messages helper method | |
| messages, original_prompt = self._build_messages(prompt, temperature, output_json, output_pydantic) | |
| # Store chat history length for potential rollback | |
| chat_history_length = len(self.chat_history) | |
| # Normalize original_prompt for consistent chat history storage | |
| normalized_content = original_prompt | |
| if isinstance(original_prompt, list): | |
| # Extract text from multimodal prompts | |
| normalized_content = next((item["text"] for item in original_prompt if item.get("type") == "text"), "") | |
| # Prevent duplicate messages | |
| if not (self.chat_history and | |
| self.chat_history[-1].get("role") == "user" and | |
| self.chat_history[-1].get("content") == normalized_content): | |
| # Add user message to chat history BEFORE LLM call so handoffs can access it | |
| self.chat_history.append({"role": "user", "content": normalized_content}) | |
| reflection_count = 0 | |
| start_time = time.time() | |
| while True: | |
| try: | |
| if self.verbose: | |
| display_text = prompt | |
| if isinstance(prompt, list): | |
| display_text = next((item["text"] for item in prompt if item["type"] == "text"), "") | |
| if display_text and str(display_text).strip(): | |
| agent_tools = [t.__name__ if hasattr(t, '__name__') else str(t) for t in self.tools] | |
| await adisplay_instruction( | |
| f"Agent {self.name} is processing prompt: {display_text}", | |
| console=self.console, | |
| agent_name=self.name, | |
| agent_role=self.role, | |
| agent_tools=agent_tools | |
| ) | |
| # Use the new _format_tools_for_completion helper method | |
| formatted_tools = self._format_tools_for_completion(tools) | |
| # Check if OpenAI client is available | |
| if self._openai_client is None: | |
| error_msg = "OpenAI client is not initialized. Please provide OPENAI_API_KEY or use a custom LLM provider." | |
| display_error(error_msg) | |
| return None | |
| # Make the API call based on the type of request | |
| if tools: | |
| response = await self._openai_client.async_client.chat.completions.create( | |
| model=self.llm, | |
| messages=messages, | |
| temperature=temperature, | |
| tools=formatted_tools, | |
| ) | |
| result = await self._achat_completion(response, tools) | |
| if logging.getLogger().getEffectiveLevel() == logging.DEBUG: | |
| total_time = time.time() - start_time | |
| logging.debug(f"Agent.achat completed in {total_time:.2f} seconds") | |
| # Execute callback after tool completion | |
| self._execute_callback_and_display(original_prompt, result, time.time() - start_time, task_name, task_description, task_id) | |
| return result | |
| elif output_json or output_pydantic: | |
| response = await self._openai_client.async_client.chat.completions.create( | |
| model=self.llm, | |
| messages=messages, | |
| temperature=temperature, | |
| response_format={"type": "json_object"} | |
| ) | |
| response_text = response.choices[0].message.content | |
| if logging.getLogger().getEffectiveLevel() == logging.DEBUG: | |
| total_time = time.time() - start_time | |
| logging.debug(f"Agent.achat completed in {total_time:.2f} seconds") | |
| # Execute callback after JSON/Pydantic completion | |
| self._execute_callback_and_display(original_prompt, response_text, time.time() - start_time, task_name, task_description, task_id) | |
| return response_text |
src/praisonai-agents/praisonaiagents/agent/agent.py#L1373-L1374
PraisonAI/src/praisonai-agents/praisonaiagents/agent/agent.py
Lines 1373 to 1374 in 404bef3
| self.chat_history = self.chat_history[:chat_history_length] | |
| return None |
Bug: Telemetry Cleanup Omission Causes Resource Leaks
The chat method's try...finally block, which guaranteed telemetry cleanup, was removed. This change introduced multiple early return paths that now exit without calling self._cleanup_telemetry(), including those for successful JSON/Pydantic output, guardrail failures, and various self-reflection scenarios. This omission can lead to resource leaks or hanging processes.
src/praisonai-agents/praisonaiagents/agent/agent.py#L1369-L1393
PraisonAI/src/praisonai-agents/praisonaiagents/agent/agent.py
Lines 1369 to 1393 in 404bef3
| response = self._chat_completion(messages, temperature=temperature, tools=tools if tools else None, reasoning_steps=reasoning_steps, stream=self.stream, task_name=task_name, task_description=task_description, task_id=task_id) | |
| if not response: | |
| # Rollback chat history on response failure | |
| self.chat_history = self.chat_history[:chat_history_length] | |
| return None | |
| response_text = response.choices[0].message.content.strip() | |
| # Handle output_json or output_pydantic if specified | |
| if output_json or output_pydantic: | |
| # Add to chat history and return raw response | |
| # User message already added before LLM call via _build_messages | |
| self.chat_history.append({"role": "assistant", "content": response_text}) | |
| # Apply guardrail validation even for JSON output | |
| try: | |
| validated_response = self._apply_guardrail_with_retry(response_text, original_prompt, temperature, tools, task_name, task_description, task_id) | |
| # Execute callback after validation | |
| self._execute_callback_and_display(original_prompt, validated_response, time.time() - start_time, task_name, task_description, task_id) | |
| return validated_response | |
| except Exception as e: | |
| logging.error(f"Agent {self.name}: Guardrail validation failed for JSON output: {e}") | |
| # Rollback chat history on guardrail failure | |
| self.chat_history = self.chat_history[:chat_history_length] | |
| return None |
BugBot free trial expires on July 22, 2025
Learn more in the Cursor dashboard.
Was this report helpful? Give feedback by reacting with 👍 or 👎
|
@claude review if the changes made in the above branch are as per the requirements and make changes to the above branch if issues still exist. |
|
Claude finished @MervinPraison's task —— View job • Branch Analysis and Requirements Review ✅Todo List:
Critical Issues Found and Fixed:Analysis Complete - Multiple Bugs Identified and Fixed:
Solution Implemented:✅ Comprehensive Fix Applied:
Key Exit Paths Fixed:
Testing Results:
Changes Deployed:Commit: Summary: Job run: https://github.com/MervinPraison/PraisonAI/actions/runs/16376098915 |
- Add missing _cleanup_telemetry() calls to 8+ exit paths in chat method - Add missing _cleanup_telemetry() calls to 6+ exit paths in achat method - Prevents resource leaks and hanging threads during agent termination - Ensures cleanup happens on ALL exit paths (success, failure, exception) - Maintains 100% backward compatibility with existing functionality - Resolves issues identified in PR #1006 review comments 🤖 Generated with [Claude Code](https://claude.ai/code) Co-authored-by: Mervin Praison <MervinPraison@users.noreply.github.com>
Reverts #1005
Summary by CodeRabbit