
Conversation

@MervinPraison (Owner) commented Jul 5, 2025

User description

Add minimal latency tracking solution for MCP servers as a custom tool without modifying any core files.

Summary

  • Created standalone latency tracking tool (no core modifications)
  • Multiple integration methods: tool, decorator, context manager, wrapper
  • Comprehensive examples for different use cases
  • Tracks planning, tool usage, and LLM generation phases
  • Thread-safe implementation supporting concurrent requests
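
A minimal sketch of the manual start/end usage these bullets describe (method names match the tracker added in this PR; the request handling itself is a placeholder):

```python
from latency_tracker_tool import tracker  # the tool added in this PR

def handle_request(query: str, request_id: str = "req_1") -> dict:
    # Time each of the three phases named above; the bodies are stand-ins.
    tracker.start_timer("planning", request_id)
    # ... agent planning would happen here ...
    tracker.end_timer("planning", request_id)

    tracker.start_timer("tool_usage", request_id)
    # ... tool execution would happen here ...
    tracker.end_timer("tool_usage", request_id)

    tracker.start_timer("llm_generation", request_id)
    # ... LLM answer generation would happen here ...
    tracker.end_timer("llm_generation", request_id)

    return tracker.get_metrics(request_id)
```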

Closes #733

Generated with Claude Code


PR Type

Enhancement


Description

  • Add standalone latency tracking tool for MCP servers
  • Provide multiple integration methods (tool, decorator, context manager, wrapper)
  • Track planning, tool usage, and LLM generation phases
  • Include comprehensive examples and documentation


Changes diagram

```mermaid
flowchart LR
  A["MCP Request"] --> B["Latency Tracker"]
  B --> C["Planning Phase"]
  B --> D["Tool Usage Phase"]
  B --> E["LLM Generation Phase"]
  C --> F["Metrics Collection"]
  D --> F
  E --> F
  F --> G["Performance Report"]
```

Changes walkthrough 📝

Relevant files

Enhancement

latency_tracker_tool.py — Core latency tracking implementation
(examples/python/custom_tools/latency_tracker_tool.py, +210/-0)
  • Implement LatencyTracker class with thread-safe timing operations
  • Add context manager, decorator, and wrapper functionality
  • Provide custom tool function for manual tracking
  • Include convenience functions for external use

mcp_server_latency_example.py — MCP server integration example
(examples/python/custom_tools/mcp_server_latency_example.py, +173/-0)
  • Create LatencyTrackedMCPServer class extending HostedMCPServer
  • Implement tracked agent with planning and tool usage phases
  • Add utility functions for MCP server monitoring
  • Provide formatted latency summary reporting

tools_with_latency.py — Tools with integrated tracking
(examples/python/custom_tools/tools_with_latency.py, +150/-0)
  • Create example tools with built-in latency tracking
  • Show manual tracking, decorator, and context manager approaches
  • Add latency reporting and data-clearing tools
  • Demonstrate fine-grained analysis with timing breakdown

Documentation

example_latency_tracking.py — Comprehensive usage examples
(examples/python/custom_tools/example_latency_tracking.py, +195/-0)
  • Demonstrate tool usage with direct agent integration
  • Show wrapper classes for automatic tracking
  • Provide context manager examples for fine-grained control
  • Include MCP server and concurrent request examples

minimal_latency_example.py — Minimal implementation example
(examples/python/custom_tools/minimal_latency_example.py, +82/-0)
  • Show the simplest implementation for MCP request handling
  • Demonstrate manual timer start/end for each phase
  • Include percentage breakdown of execution time
  • Provide a multi-request tracking example

README_LATENCY_TRACKING.md — Complete documentation and usage guide
(examples/python/custom_tools/README_LATENCY_TRACKING.md, +226/-0)
  • Provide comprehensive documentation for all integration methods
  • Include API reference with metrics format specification
  • Show MCP server integration patterns and best practices
  • Add tips for monitoring system integration

Summary by CodeRabbit

    • New Features

      • Introduced a minimal latency tracking tool for PraisonAI MCP servers, enabling measurement of planning, tool usage, and LLM generation phases without modifying core files.
      • Added context manager, decorator, and wrapper class options for flexible integration of latency tracking.
      • Provided example scripts demonstrating manual, automatic, and concurrent latency tracking approaches.
      • Included tools and utilities for generating latency reports and clearing tracking data.
      • Supplied comprehensive documentation with integration instructions, API reference, and export options for external monitoring systems.
    • Documentation

      • Added a detailed README explaining latency tracking setup, usage patterns, integration methods, and example code snippets.

@coderabbitai coderabbitai bot commented Jul 5, 2025

    Walkthrough

    A minimal latency tracking solution is introduced for PraisonAI MCP servers, consisting of a new latency tracker tool, Python API, decorators, context managers, and integration examples. Multiple example scripts and a comprehensive README demonstrate manual and automatic latency tracking for planning, tool usage, and LLM answer generation phases, without modifying core files.
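
As a quick orientation, the two automatic styles look roughly like this (a sketch; signatures are inferred from the example files, so check latency_tracker_tool.py for the exact API):

```python
from latency_tracker_tool import tracker, track_latency

@track_latency("tool_usage")  # decorator style: times every call to search()
def search(query: str) -> str:
    return f"results for {query}"

def generate_answer(request_id: str) -> str:
    # context-manager style: times just this block
    with tracker.track("llm_generation", request_id):
        return "answer"
```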

Changes

| File(s) | Change Summary |
| --- | --- |
| examples/python/custom_tools/README_LATENCY_TRACKING.md | Added comprehensive documentation for latency tracking tool usage, integration, API reference, and export instructions. |
| examples/python/custom_tools/latency_tracker_tool.py | Introduced the latency tracking tool: LatencyTracker class, decorators, context manager, wrappers, and utility functions. |
| examples/python/custom_tools/example_latency_tracking.py | Added an example script demonstrating multiple latency tracking usage patterns with agents and requests. |
| examples/python/custom_tools/mcp_server_latency_example.py | Added an example MCP server with integrated latency tracking, agent wrappers, and summary/reporting utilities. |
| examples/python/custom_tools/minimal_latency_example.py | Added a minimal example script for phase-based MCP request latency tracking using the tool. |
| examples/python/custom_tools/tools_with_latency.py | Added example tools demonstrating latency tracking via manual, decorator, and context manager methods, plus reporting tools. |

    Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant MCPServer
    participant Agent
    participant LatencyTracker

    Client->>MCPServer: Send request
    MCPServer->>LatencyTracker: start_timer("total", request_id)
    MCPServer->>Agent: Plan
    Agent->>LatencyTracker: start_timer("planning", request_id)
    Agent-->>LatencyTracker: end_timer("planning", request_id)
    MCPServer->>Agent: Execute Tool
    Agent->>LatencyTracker: start_timer("tool_usage", request_id)
    Agent-->>LatencyTracker: end_timer("tool_usage", request_id)
    MCPServer->>Agent: LLM Answer Generation
    Agent->>LatencyTracker: start_timer("llm_generation", request_id)
    Agent-->>LatencyTracker: end_timer("llm_generation", request_id)
    MCPServer-->>LatencyTracker: end_timer("total", request_id)
    MCPServer->>LatencyTracker: get_metrics(request_id)
    MCPServer-->>Client: Return response + latency metrics
```

Assessment against linked issues

Objectives from #733:
  • Track and measure latency breakdown for planning, tool usage, and LLM answer generation phases
  • Provide clear start and end points for each phase, usable from agent/server code
  • Example usage and integration for MCP server and agents
  • Summarize and report latency metrics per request and overall

    Suggested labels

    Review effort 2/5, codex

    Poem

    In burrows deep, the servers run,
    Tracking time for every one—
    Planning, tools, LLM replies,
    Latency watched with rabbit eyes.
    With context, wrappers, metrics clear,
    Our code hops fast—no need to fear!
    🐇⏱️


@gemini-code-assist gemini-code-assist bot left a comment

    Summary of Changes

Hello @MervinPraison, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

    This pull request introduces a comprehensive, non-intrusive latency tracking solution specifically designed for PraisonAI MCP servers. It allows users to monitor the performance of key operational phases—planning, tool usage, and LLM generation—without modifying core framework files, offering multiple integration methods for flexibility.

    Highlights

    • Non-Intrusive Latency Tracking: Introduced a standalone latency tracking system that integrates as a custom tool, eliminating the need for modifications to PraisonAI's core codebase.
    • Flexible Integration Options: Provides diverse methods for tracking, including direct custom tool calls, Python context managers, function decorators, and wrapper classes for Agent and LLM objects, catering to various use cases.
    • Granular Phase Monitoring: Enables detailed tracking of critical operational phases within an MCP server request, specifically 'planning,' 'tool usage,' and 'LLM generation,' offering insights into performance bottlenecks.
    • Thread-Safe Design: The underlying LatencyTracker class is implemented with thread-safety, ensuring accurate tracking of concurrent requests in multi-threaded environments.
    • Comprehensive Examples and Documentation: Includes a detailed README and multiple example scripts demonstrating how to integrate and utilize the latency tracking solution effectively in different scenarios, including basic usage, MCP server integration, and tracking within custom tools.

@qodo-merge-pro qodo-merge-pro bot commented Jul 5, 2025

    PR Reviewer Guide 🔍

    Here are some key observations to aid the review process:

    🎫 Ticket compliance analysis 🔶

#733 - Partially compliant

    Compliant requirements:

    • Measure latency breakdown for planning process, tool usage, and LLM answer generation
    • Build solution for local Ollama LLM with MCP server

    Non-compliant requirements:

    • Identify functions that mark start and end of each phase in the code
    • Track interaction between functions in agent.py, llm.py, task.py, mcp.py and main.py

    Requires further human verification:

    • Verify that the tracking solution works correctly with actual MCP server requests
    • Test integration with local Ollama LLM setup
    • Validate that all three phases are properly captured in real usage scenarios

    ⏱️ Estimated effort to review: 3 🔵🔵🔵⚪⚪
    🧪 No relevant tests
    🔒 Security concerns

Code injection vulnerability:
The calculate_with_tracking function in tools_with_latency.py uses eval() to evaluate mathematical expressions, which allows arbitrary code execution. An attacker could pass malicious code as the expression parameter, leading to remote code execution. This should be replaced with a safe evaluator (for example, an AST-based arithmetic parser) or a dedicated math library; note that ast.literal_eval() only handles literals, not general arithmetic.

⚡ Recommended focus areas for review

Thread Safety

The thread-safe implementation uses a single lock for all operations, which could become a bottleneck under high concurrency. Consider more granular locking or lock-free data structures for better performance.

    self._lock = threading.Lock()
    self._active_timers = {}
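
If granular locking were pursued, one possible shape is a per-request lock table guarded by a short-lived global lock (illustrative only; not part of this PR):

```python
import threading
from collections import defaultdict

class ShardedTracker:
    """Sketch of per-request locking; not the PR's implementation."""

    def __init__(self):
        self._table_lock = threading.Lock()   # guards the lock table only
        self._request_locks = {}              # request_id -> Lock
        self._data = defaultdict(dict)

    def _lock_for(self, request_id: str) -> threading.Lock:
        with self._table_lock:                # brief critical section
            return self._request_locks.setdefault(request_id, threading.Lock())

    def record(self, request_id: str, phase: str, elapsed: float) -> None:
        with self._lock_for(request_id):      # contention is now per-request
            self._data[request_id].setdefault(phase, []).append(elapsed)
```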
Error Handling

The examples lack proper error handling for JSON parsing and agent operations. Failed operations could leave timers in inconsistent states or corrupt tracking data.

    metrics = json.loads(metrics_json)

Security Risk

The calculate_with_tracking function uses eval(), a serious security vulnerability that allows arbitrary code execution. Replace it with a safe mathematical expression parser.

    result = eval(expression)
    return f"Result: {result}"

@qodo-merge-pro qodo-merge-pro bot commented Jul 5, 2025

    PR Code Suggestions ✨

    Explore these optional code suggestions:

Category: General
Suggestion: Handle negative elapsed time values

    Handle potential negative elapsed time values that could occur due to system
    clock adjustments or time synchronization issues. This prevents invalid timing
    data from being stored and ensures data integrity.

    examples/python/custom_tools/latency_tracker_tool.py [31-48]

     def end_timer(self, phase: str, request_id: str = "default") -> float:
         """End timing a phase and return elapsed time."""
         with self._lock:
             key = f"{request_id}_{phase}"
             if key not in self._active_timers:
                 return 0.0
             
             elapsed = time.time() - self._active_timers[key]
             del self._active_timers[key]
             
    +        # Ensure elapsed time is non-negative
    +        if elapsed < 0:
    +            elapsed = 0.0
    +        
             # Store the timing
             if request_id not in self._data:
                 self._data[request_id] = {}
             if phase not in self._data[request_id]:
                 self._data[request_id][phase] = []
             self._data[request_id][phase].append(elapsed)
             
             return elapsed
Suggestion importance [1-10]: 8

Why: This is a valid and important suggestion that addresses a subtle edge case where system clock adjustments could lead to negative elapsed times, ensuring the integrity of the collected metrics.

Impact: Medium
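
An alternative that removes the problem at the source, rather than clamping, is a monotonic clock, which cannot go backwards under wall-clock adjustments (a sketch, not part of the suggestion above):

```python
import time

class MonotonicTimer:
    """Sketch: time.monotonic() is immune to clock changes, so no clamping is needed."""

    def __init__(self):
        self._starts = {}

    def start(self, key: str) -> None:
        self._starts[key] = time.monotonic()

    def end(self, key: str) -> float:
        start = self._starts.pop(key, None)
        return 0.0 if start is None else time.monotonic() - start
```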
Category: Security
Suggestion: Add input validation and sanitization

    Add input validation to prevent potential security issues and improve error
    handling. The action parameter should be validated against a whitelist of
    allowed values, and the phase and request_id parameters should be sanitized to
    prevent injection attacks.

    examples/python/custom_tools/latency_tracker_tool.py [147-189]

     def latency_tracking_tool(action: str, phase: str = "", request_id: str = "default") -> str:
         """
         Manual latency tracking tool for PraisonAI agents.
         
         Args:
             action: One of 'start', 'end', 'metrics', 'summary', 'clear'
             phase: Phase name (for 'start' and 'end' actions)
             request_id: Request identifier for tracking
         
         Returns:
             str: JSON string with results
         """
    +    # Validate action parameter
    +    allowed_actions = {"start", "end", "metrics", "summary", "clear"}
    +    if action not in allowed_actions:
    +        return json.dumps({"error": f"Invalid action. Must be one of: {', '.join(allowed_actions)}"})
    +    
    +    # Sanitize inputs
    +    phase = str(phase).strip()[:100] if phase else ""
    +    request_id = str(request_id).strip()[:100] if request_id else "default"
    +    
         if action == "start":
             if not phase:
                 return json.dumps({"error": "Phase name required for start action"})
             tracker.start_timer(phase, request_id)
             return json.dumps({"status": "started", "phase": phase, "request_id": request_id})
         
         elif action == "end":
             if not phase:
                 return json.dumps({"error": "Phase name required for end action"})
             elapsed = tracker.end_timer(phase, request_id)
             return json.dumps({
                 "status": "ended", 
                 "phase": phase, 
                 "request_id": request_id,
                 "elapsed": elapsed
             })
         
         elif action == "metrics":
             metrics = tracker.get_metrics(request_id)
             return json.dumps({"request_id": request_id, "metrics": metrics})
         
         elif action == "summary":
             summary = tracker.get_summary()
             return json.dumps({"summary": summary})
         
         elif action == "clear":
             tracker.clear(request_id)
             return json.dumps({"status": "cleared", "request_id": request_id})
    -    
    -    else:
    -        return json.dumps({"error": f"Unknown action: {action}"})
Suggestion importance [1-10]: 7

Why: The suggestion correctly identifies a lack of input validation and sanitization, and the proposed changes improve the function's robustness and security against malformed inputs.

Impact: Medium

@gemini-code-assist gemini-code-assist bot left a comment

    Code Review

    The code changes introduce a latency tracking tool for MCP servers. The review focuses on improving thread safety, fixing a bug in data clearing, addressing a critical security vulnerability, and preventing potential runtime errors from division by zero.


    def __init__(self):
        self._data = {}
        self._lock = threading.Lock()
critical

The use of threading.Lock can lead to a deadlock in the get_summary method, because get_metrics acquires the same lock again. Since threading.Lock is not re-entrant, the thread will hang. Use threading.RLock instead.

Suggested change

    -        self._lock = threading.Lock()
    +        self._lock = threading.RLock()
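
A reduced reproduction of the hang (illustrative):

```python
import threading

lock = threading.Lock()  # swap for threading.RLock() and the deadlock disappears

def get_metrics():
    with lock:                # second acquisition by the same thread
        return {}

def get_summary():
    with lock:                # first acquisition
        return get_metrics()  # blocks forever: Lock is not re-entrant

# get_summary() hangs with threading.Lock(); threading.RLock()
# lets the same thread re-acquire a lock it already holds.
```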

Comment on lines +51 to +55

    try:
        result = eval(expression)
        return f"Result: {result}"
    except Exception as e:
        return f"Error: {str(e)}"

    critical

    Using eval() on an input string is a major security vulnerability as it can execute arbitrary code. Even for a simple calculator tool in an example, it's important to demonstrate secure coding practices. An attacker could pass a string like __import__('os').system('rm -rf /') which would be executed by the server. Use a dedicated library like numexpr or implement a simple parser using the ast module instead.

        try:
            # SECURITY WARNING: Using eval() on user input is a major security risk.
            # For a real application, use a safe expression evaluation library.
            import re
            if not re.match(r"^[0-9\s()+\-*/.]+$", expression):
                return "Error: Expression contains invalid characters."
            result = eval(expression)
            return f"Result: {result}"
        except Exception as e:
            return f"Error: {str(e)}"

Comment on lines +86 to +93

    def clear(self, request_id: Optional[str] = None):
        """Clear tracking data."""
        with self._lock:
            if request_id:
                self._data.pop(request_id, None)
            else:
                self._data.clear()
            self._active_timers.clear()
    high

    The clear method has a bug: when a request_id is provided, it clears all active timers, not just those associated with the specified request_id. This can lead to incorrect timing data for other concurrent requests. Selectively remove only the timers corresponding to the given request_id.

Suggested change

    -    def clear(self, request_id: Optional[str] = None):
    -        """Clear tracking data."""
    -        with self._lock:
    -            if request_id:
    -                self._data.pop(request_id, None)
    -            else:
    -                self._data.clear()
    -            self._active_timers.clear()
    +    def clear(self, request_id: Optional[str] = None):
    +        """Clear tracking data."""
    +        with self._lock:
    +            if request_id:
    +                self._data.pop(request_id, None)
    +                prefix = f"{request_id}_"
    +                keys_to_remove = [
    +                    key for key in self._active_timers if key.startswith(prefix)
    +                ]
    +                for key in keys_to_remove:
    +                    del self._active_timers[key]
    +            else:
    +                self._data.clear()
    +                self._active_timers.clear()

Comment on lines +215 to +217

    def export_to_cloudwatch():
        metrics = tracker.get_metrics()
        # Send to CloudWatch
    medium

    The example for export_to_cloudwatch calls tracker.get_metrics() without a request_id. It would be clearer to either specify a request_id or use get_summary() to show how to export all metrics.

Suggested change

    -    def export_to_cloudwatch():
    -        metrics = tracker.get_metrics()
    +    def export_to_cloudwatch(request_id: str):
    +        metrics = tracker.get_metrics(request_id)
             # Send to CloudWatch
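
For concreteness, the body might look like this with boto3 (an assumed dependency; the namespace and metric names are illustrative):

```python
import boto3
from latency_tracker_tool import tracker

def export_to_cloudwatch(request_id: str) -> None:
    metrics = tracker.get_metrics(request_id)
    cloudwatch = boto3.client("cloudwatch")
    cloudwatch.put_metric_data(
        Namespace="PraisonAI/MCP",  # illustrative namespace
        MetricData=[
            {"MetricName": f"{phase}_latency",
             "Value": data["total"],   # 'total' per the metrics format in this PR
             "Unit": "Seconds"}
            for phase, data in metrics.items()
        ],
    )
```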

Comment on lines +114 to +115

        percentage = (data['total'] / total_time) * 100
        print(f" - {phase}: {percentage:.1f}%")
    medium

    The calculation of percentage can cause a ZeroDivisionError if total_time is 0. Add a check to prevent division by zero.

Suggested change

    -        percentage = (data['total'] / total_time) * 100
    -        print(f" - {phase}: {percentage:.1f}%")
    +        if total_time > 0:
    +            percentage = (data['total'] / total_time) * 100
    +            print(f" - {phase}: {percentage:.1f}%")
    +        else:
    +            print(f" - {phase}: 0.0%")

Comment on lines +80 to +82

    for phase, data in phases.items():
        percentage = (data['total'] / total) * 100
        print(f" {phase}: {data['total']:.3f}s ({percentage:.1f}%)")
    medium

    The calculation of percentage can cause a ZeroDivisionError if total is 0. Add a check to prevent division by zero.

Suggested change

    -    for phase, data in phases.items():
    -        percentage = (data['total'] / total) * 100
    -        print(f" {phase}: {data['total']:.3f}s ({percentage:.1f}%)")
    +    for phase, data in phases.items():
    +        if total > 0:
    +            percentage = (data['total'] / total) * 100
    +            print(f" {phase}: {data['total']:.3f}s ({percentage:.1f}%)")
    +        else:
    +            print(f" {phase}: {data['total']:.3f}s (0.0%)")

@cursor cursor bot left a comment

    Bug: Unsafe Code Execution in Expression Handling

The calculate_with_tracking function uses eval() directly on the user-provided expression parameter. This creates a critical security vulnerability, allowing arbitrary code execution. It should be replaced with a safe arithmetic evaluator (for example, an AST-based parser) or a dedicated math library.

    examples/python/custom_tools/tools_with_latency.py#L51-L52

    try:
        result = eval(expression)


    Bug: Unexpected `request_id` in Parent Constructor

    The TrackedAgent and TrackedLLM classes pass the request_id parameter from kwargs to their respective parent constructors (super().__init__()). This can cause a TypeError if the parent classes do not accept a request_id argument. The request_id should be removed from kwargs using pop() before calling super().__init__().

    examples/python/custom_tools/latency_tracker_tool.py#L112-L116

    """Create a wrapper class that tracks agent operations."""
    class TrackedAgent(agent_class):
    def __init__(self, *args, **kwargs):
    super().__init__(*args, **kwargs)
    self._request_id = kwargs.get('request_id', 'default')

    examples/python/custom_tools/latency_tracker_tool.py#L132-L136

    """Create a wrapper class that tracks LLM operations."""
    class TrackedLLM(llm_class):
    def __init__(self, *args, **kwargs):
    super().__init__(*args, **kwargs)
    self._request_id = kwargs.get('request_id', 'default')
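
The fix Cursor describes would look roughly like this for TrackedAgent, and the same applies to TrackedLLM (a sketch of the pattern, not a committed change):

```python
class TrackedAgent(agent_class):
    def __init__(self, *args, **kwargs):
        # Pop request_id before delegating, so the parent __init__
        # never sees an argument it may not accept.
        self._request_id = kwargs.pop("request_id", "default")
        super().__init__(*args, **kwargs)
```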



    Bug: LatencyTracker Clears All Timers Instead of Specific Ones

    The LatencyTracker.clear() method incorrectly clears all active timers (self._active_timers.clear()) even when a specific request_id is provided. This disrupts timing for other concurrent requests, as it should only clear active timers associated with the specified request_id.

examples/python/custom_tools/latency_tracker_tool.py#L85-L93

    def clear(self, request_id: Optional[str] = None):
        """Clear tracking data."""
        with self._lock:
            if request_id:
                self._data.pop(request_id, None)
            else:
                self._data.clear()
            self._active_timers.clear()



@MervinPraison merged commit 48fae95 into main on Jul 5, 2025. 18 of 19 checks passed.
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

    🧹 Nitpick comments (14)
    examples/python/custom_tools/latency_tracker_tool.py (1)

    159-189: Simplify control flow with early returns.

    The function uses unnecessary elif statements after returns, making it harder to read.

    -    elif action == "end":
    +    if action == "end":
             if not phase:
                 return json.dumps({"error": "Phase name required for end action"})
             elapsed = tracker.end_timer(phase, request_id)
             return json.dumps({
                 "status": "ended", 
                 "phase": phase, 
                 "request_id": request_id,
                 "elapsed": elapsed
             })
         
    -    elif action == "metrics":
    +    if action == "metrics":
             metrics = tracker.get_metrics(request_id)
             return json.dumps({"request_id": request_id, "metrics": metrics})
         
    -    elif action == "summary":
    +    if action == "summary":
             summary = tracker.get_summary()
             return json.dumps({"summary": summary})
         
    -    elif action == "clear":
    +    if action == "clear":
             tracker.clear(request_id)
             return json.dumps({"status": "cleared", "request_id": request_id})
         
    -    else:
    -        return json.dumps({"error": f"Unknown action: {action}"})
    +    return json.dumps({"error": f"Unknown action: {action}"})
    examples/python/custom_tools/minimal_latency_example.py (3)

    13-14: Add required blank line before function definition.

    PEP 8 requires two blank lines before top-level function definitions.

     import time
     
    +
     # Example: Track MCP request handling
     def handle_mcp_request(query: str, request_id: str = "mcp_1"):

    22-36: Consider removing unused variables or adding explanatory comments.

    The agent, plan, and tool_result variables are created but never used. Since this is a simulation, consider either:

    1. Adding comments explaining these are placeholders for real implementations
    2. Removing them to avoid confusion
         # 1. Planning Phase
         tracker.start_timer("planning", request_id)
    -    agent = Agent(
    -        name="Assistant",
    -        role="MCP Handler",
    -        goal="Process requests",
    -        llm="gpt-4o-mini"
    -    )
    -    plan = f"I will search for information about {query}"
    +    # In a real implementation, an agent would be created and used here
    +    # agent = Agent(...)
    +    # plan = agent.create_plan(query)
         time.sleep(0.1)  # Simulate planning time
         planning_time = tracker.end_timer("planning", request_id)
         
         # 2. Tool Usage Phase
         tracker.start_timer("tool_usage", request_id)
    -    # Simulate tool execution
    +    # In a real implementation, tools would be executed here
    +    # tool_result = agent.execute_tool(...)
         time.sleep(0.2)  # Simulate tool execution time
    -    tool_result = f"Found 5 results for {query}"
         tool_time = tracker.end_timer("tool_usage", request_id)

    51-51: Remove unnecessary f-string prefix.

    The string has no placeholders, so the f-prefix is not needed.

    -    print(f"\nLatency Breakdown:")
    +    print("\nLatency Breakdown:")
    examples/python/custom_tools/tools_with_latency.py (3)

    9-9: Remove unused import.

    The List type is imported but never used.

    -from typing import List, Dict
    +from typing import Dict

    12-13: Add required blank line before function definition.

    PEP 8 requires two blank lines before top-level function definitions.

     from latency_tracker_tool import tracker, track_latency
     
    +
     # Example 1: Simple tool with manual tracking
     def search_with_tracking(query: str) -> str:

    146-150: Simplify control flow by removing unnecessary else.

    The else block is unnecessary after a return statement.

         tracker.clear(request_id)
         if request_id:
             return f"Cleared latency data for request: {request_id}"
    -    else:
    -        return "Cleared all latency tracking data"
    +    return "Cleared all latency tracking data"
    examples/python/custom_tools/example_latency_tracking.py (3)

    7-8: Remove unused imports.

    PraisonAIAgents and create_tracked_llm are imported but never used in the examples.

    -from praisonaiagents import Agent, PraisonAIAgents
    +from praisonaiagents import Agent
     from latency_tracker_tool import (
         latency_tracking_tool, 
         create_tracked_agent,
    -    create_tracked_llm,
         tracker,
         get_latency_metrics,
         get_latency_summary
     )

    Also applies to: 11-11


    107-107: Remove unnecessary f-string prefixes.

    These strings have no placeholders, so the f-prefix is not needed.

    -    print(f"Result: {result}")
    -    print(f"Metrics by phase:")
    +    print(f"Result: {result}")
    +    print("Metrics by phase:")

    And on line 147:

    -    print(f"Breakdown:")
    +    print("Breakdown:")

    Also applies to: 147-147


    131-131: Consider using or removing unused variables.

    The variables plan and final_response are assigned but never used. Consider either using them in the output or removing them if they're not needed for the example.

    If you want to show the values in the example output:

             plan = agent.chat("Plan: Search for Python documentation")
    +        print(f"Plan: {plan}")

    And:

             final_response = agent.chat(f"Based on {tool_result}, provide a summary")
    +        print(f"Final response: {final_response}")

    Or remove them if not needed:

    -        plan = agent.chat("Plan: Search for Python documentation")
    +        agent.chat("Plan: Search for Python documentation")

    Also applies to: 140-140

    examples/python/custom_tools/README_LATENCY_TRACKING.md (3)

    8-8: Fix hyphenation in compound noun.

    "Decision-making" should be hyphenated when used as a noun.

    -1. **Planning Process** - Time taken for agent planning and decision making
    +1. **Planning Process** - Time taken for agent planning and decision-making

    147-147: Add missing article.

    Add the article "a" before "summary".

    -- `get_summary()` - Get summary of all requests
    +- `get_summary()` - Get a summary of all requests

    130-130: Complete the code examples.

    The code snippets have placeholder returns that should be completed for clarity.

     def search_tool(query: str) -> str:
         """Search with automatic latency tracking."""
         # Your search logic
    -    return results
    +    results = f"Search results for: {query}"
    +    return results

    And:

     def get_latency_report(request_id: str = "current_request") -> str:
         """Get latency metrics as a tool."""
         metrics = tracker.get_metrics(request_id)
    -    # Format and return report
    -    return formatted_report
    +    if not metrics:
    +        return "No metrics available"
    +    formatted_report = f"Latency Report for {request_id}:\n"
    +    for phase, data in metrics.items():
    +        formatted_report += f"- {phase}: {data['average']:.3f}s avg\n"
    +    return formatted_report

    Also applies to: 136-136

    examples/python/custom_tools/mcp_server_latency_example.py (1)

    8-11: Remove unused imports to improve code cleanliness.

    The imports PraisonAIAgents and json are not used anywhere in the code and should be removed.

    -from praisonaiagents import Agent, PraisonAIAgents
    +from praisonaiagents import Agent
     from praisonaiagents.mcp import HostedMCPServer
     from latency_tracker_tool import tracker, get_latency_metrics
    -import json
    📜 Review details

    Configuration used: CodeRabbit UI
    Review profile: CHILL
    Plan: Pro

    📥 Commits

    Reviewing files that changed from the base of the PR and between db3dcbe and 005c5c7.

    📒 Files selected for processing (6)
    • examples/python/custom_tools/README_LATENCY_TRACKING.md (1 hunks)
    • examples/python/custom_tools/example_latency_tracking.py (1 hunks)
    • examples/python/custom_tools/latency_tracker_tool.py (1 hunks)
    • examples/python/custom_tools/mcp_server_latency_example.py (1 hunks)
    • examples/python/custom_tools/minimal_latency_example.py (1 hunks)
    • examples/python/custom_tools/tools_with_latency.py (1 hunks)
    🧰 Additional context used
    🧬 Code Graph Analysis (3)
    examples/python/custom_tools/tools_with_latency.py (1)
    examples/python/custom_tools/latency_tracker_tool.py (6)
    • track_latency (100-108)
    • start_timer (25-29)
    • end_timer (31-48)
    • track (51-57)
    • get_metrics (59-76)
    • clear (86-93)
    examples/python/custom_tools/mcp_server_latency_example.py (2)
    examples/python/custom_tools/latency_tracker_tool.py (5)
    • get_latency_metrics (203-205)
    • track (51-57)
    • chat (118-121)
    • execute_tool (123-126)
    • get_summary (78-84)
    src/praisonai-agents/praisonaiagents/main.py (1)
    • json (409-412)
    examples/python/custom_tools/latency_tracker_tool.py (2)
    src/praisonai-agents/praisonaiagents/main.py (1)
    • json (409-412)
    examples/python/custom_tools/mcp_server_latency_example.py (2)
    • chat (46-48)
    • execute_tool (50-52)
    🪛 Flake8 (7.2.0)
    examples/python/custom_tools/minimal_latency_example.py

    [error] 14-14: expected 2 blank lines, found 1

    (E302)


    [error] 22-22: local variable 'agent' is assigned to but never used

    (F841)


    [error] 28-28: local variable 'plan' is assigned to but never used

    (F841)


    [error] 36-36: local variable 'tool_result' is assigned to but never used

    (F841)


    [error] 51-51: f-string is missing placeholders

    (F541)

    examples/python/custom_tools/tools_with_latency.py

    [error] 9-9: 'typing.List' imported but unused

    (F401)


    [error] 13-13: expected 2 blank lines, found 1

    (E302)

    examples/python/custom_tools/mcp_server_latency_example.py

    [error] 8-8: 'praisonaiagents.PraisonAIAgents' imported but unused

    (F401)


    [error] 11-11: 'json' imported but unused

    (F401)

    examples/python/custom_tools/example_latency_tracking.py

    [error] 7-7: 'praisonaiagents.PraisonAIAgents' imported but unused

    (F401)


    [error] 8-8: 'latency_tracker_tool.create_tracked_llm' imported but unused

    (F401)


    [error] 107-107: f-string is missing placeholders

    (F541)


    [error] 131-131: local variable 'plan' is assigned to but never used

    (F841)


    [error] 140-140: local variable 'final_response' is assigned to but never used

    (F841)


    [error] 147-147: f-string is missing placeholders

    (F541)

    🪛 Ruff (0.11.9)
    examples/python/custom_tools/minimal_latency_example.py

    22-22: Local variable agent is assigned to but never used

    Remove assignment to unused variable agent

    (F841)


    28-28: Local variable plan is assigned to but never used

    Remove assignment to unused variable plan

    (F841)


    36-36: Local variable tool_result is assigned to but never used

    Remove assignment to unused variable tool_result

    (F841)


    51-51: f-string without any placeholders

    Remove extraneous f prefix

    (F541)

    examples/python/custom_tools/tools_with_latency.py

    9-9: typing.List imported but unused

    Remove unused import: typing.List

    (F401)

    examples/python/custom_tools/mcp_server_latency_example.py

    8-8: praisonaiagents.PraisonAIAgents imported but unused

    Remove unused import: praisonaiagents.PraisonAIAgents

    (F401)


    11-11: json imported but unused

    Remove unused import: json

    (F401)

    examples/python/custom_tools/example_latency_tracking.py

    7-7: praisonaiagents.PraisonAIAgents imported but unused

    Remove unused import: praisonaiagents.PraisonAIAgents

    (F401)


    11-11: latency_tracker_tool.create_tracked_llm imported but unused

    Remove unused import: latency_tracker_tool.create_tracked_llm

    (F401)


    107-107: f-string without any placeholders

    Remove extraneous f prefix

    (F541)


    131-131: Local variable plan is assigned to but never used

    Remove assignment to unused variable plan

    (F841)


    140-140: Local variable final_response is assigned to but never used

    Remove assignment to unused variable final_response

    (F841)


    147-147: f-string without any placeholders

    Remove extraneous f prefix

    (F541)

    examples/python/custom_tools/latency_tracker_tool.py

    82-82: Loop control variable phases not used within loop body

    Rename unused phases to _phases

    (B007)

    🪛 Pylint (3.3.7)
    examples/python/custom_tools/tools_with_latency.py

    [refactor] 147-150: Unnecessary "else" after "return", remove the "else" and de-indent the code inside it

    (R1705)

    examples/python/custom_tools/mcp_server_latency_example.py

    [error] 20-20: Access to member 'handle_request' before its definition line 21

    (E0203)


    [refactor] 14-14: Too few public methods (0/2)

    (R0903)

    examples/python/custom_tools/latency_tracker_tool.py

    [refactor] 133-133: Too few public methods (1/2)

    (R0903)


    [refactor] 159-189: Unnecessary "elif" after "return", remove the leading "el" from "elif"

    (R1705)


    [refactor] 147-147: Too many return statements (8/6)

    (R0911)

    🪛 LanguageTool
    examples/python/custom_tools/README_LATENCY_TRACKING.md

    [uncategorized] ~8-~8: The noun “decision-making” (= the process of deciding something) is spelled with a hyphen.
    Context: ...s** - Time taken for agent planning and decision making 2. Tool Usage - Time spent executin...

    (DECISION_MAKING)


    [uncategorized] ~147-~147: You might be missing the article “a” here.
    Context: ...s for a request - get_summary() - Get summary of all requests - clear(request_id) -...

    (AI_EN_LECTOR_MISSING_DETERMINER_A)

    ⏰ Context from checks skipped due to timeout of 90000ms (3)
    • GitHub Check: Cursor BugBot
    • GitHub Check: quick-test
    • GitHub Check: test-core (3.11)
    🔇 Additional comments (8)
    examples/python/custom_tools/mcp_server_latency_example.py (8)

    17-21: The method access pattern is correct despite static analysis warning.

    The pylint warning about accessing handle_request before definition is a false positive. The code correctly accesses the inherited handle_request method from the parent class HostedMCPServer and stores a reference to it before overriding it. This is a valid pattern for method wrapping.


    23-37: Well-implemented request tracking with optional metrics inclusion.

    The tracked request handler properly wraps the original handler with latency tracking and provides optional metrics inclusion based on the request parameter. The implementation correctly extracts the request ID and uses the tracker context manager.


    43-52: Excellent use of method override pattern for granular tracking.

    The TrackedOperationAgent class properly overrides the chat and execute_tool methods to track planning and tool usage phases respectively. This approach allows for granular latency tracking without modifying the core Agent class.
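
The pattern being praised looks roughly like this (reconstructed sketch; the example file is the authoritative version):

```python
class TrackedOperationAgent(Agent):
    """Override chat/execute_tool to time the planning and tool phases."""

    def chat(self, *args, **kwargs):
        with tracker.track("planning", self._request_id):
            return super().chat(*args, **kwargs)

    def execute_tool(self, *args, **kwargs):
        with tracker.track("tool_usage", self._request_id):
            return super().execute_tool(*args, **kwargs)
```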


    78-95: Clear demonstration of the three-phase tracking mentioned in PR objectives.

    The example effectively demonstrates tracking for planning, tool usage, and LLM generation phases as specified in the PR objectives. The simulated tool execution with time.sleep(0.1) is appropriate for an example.


    101-115: Comprehensive metrics reporting with percentage breakdown.

    The metrics display logic provides valuable insights including total time, count, average, and percentage breakdown per phase. This aligns well with the goal of providing detailed latency analysis for MCP server operations.


    119-137: Flexible utility function for retrofitting existing servers.

    The add_tracking_to_mcp_server function provides a clean way to add latency tracking to existing MCP server instances without requiring inheritance. The implementation correctly preserves the original handler and adds both total request tracking and per-method tracking.


    141-164: Useful summary function with proper filtering and formatting.

    The get_mcp_latency_summary function provides a well-formatted report of MCP-specific requests (filtering by mcp_ prefix). The implementation handles the empty case gracefully and provides clear, readable output.


    167-173: Appropriate main execution block for example demonstration.

    The main block properly demonstrates the functionality and provides a complete example with summary output. This makes the file both educational and functional as a standalone example.

Comment on lines +78 to +85

    def get_summary(self) -> Dict[str, Any]:
        """Get summary of all tracked requests."""
        with self._lock:
            summary = {}
            for request_id, phases in self._data.items():
                summary[request_id] = self.get_metrics(request_id)
            return summary

    ⚠️ Potential issue

    Fix unused loop control variable.

    The loop variable phases is not used within the loop body.

     def get_summary(self) -> Dict[str, Any]:
         """Get summary of all tracked requests."""
         with self._lock:
             summary = {}
    -        for request_id, phases in self._data.items():
    +        for request_id in self._data:
                 summary[request_id] = self.get_metrics(request_id)
             return summary

Comment on lines +86 to +93

    def clear(self, request_id: Optional[str] = None):
        """Clear tracking data."""
        with self._lock:
            if request_id:
                self._data.pop(request_id, None)
            else:
                self._data.clear()
            self._active_timers.clear()
    ⚠️ Potential issue

    Fix bug in clear method that affects all active timers.

    The method clears all active timers even when clearing data for a specific request, which could disrupt ongoing timing for other requests.

     def clear(self, request_id: Optional[str] = None):
         """Clear tracking data."""
         with self._lock:
             if request_id:
                 self._data.pop(request_id, None)
    +            # Only clear active timers for this request
    +            keys_to_remove = [key for key in self._active_timers if key.startswith(f"{request_id}_")]
    +            for key in keys_to_remove:
    +                del self._active_timers[key]
             else:
                 self._data.clear()
                 self._active_timers.clear()

    



    # Example 3: Tool with context manager tracking
    def analyze_with_tracking(data: str) -> Dict[str, any]:
    ⚠️ Potential issue

    Fix incorrect type hint.

    Use Any (capitalized) from the typing module instead of lowercase any.

    -def analyze_with_tracking(data: str) -> Dict[str, any]:
    +def analyze_with_tracking(data: str) -> Dict[str, Any]:

    You'll also need to import Any:

    -from typing import Dict
    +from typing import Dict, Any

Development

Successfully merging this pull request may close these issues: MCP Server Latency Test (#733)