
feat(analytics): filter out bot events in PR performance calculations #644

Closed
wants to merge 3 commits

Conversation


@Ayyanaruto Ayyanaruto commented Apr 12, 2025

This pull request enhances the get_pr_performance method by filtering bot events out of the pull request events, and adds corresponding tests to verify the new behavior. The key changes are the updated get_pr_performance method and several new test cases covering the bot-filtering logic.

Enhancements to get_pr_performance method:

Addition of new test cases:

Summary by CodeRabbit

  • Bug Fixes

    • Improved pull request review performance calculations by excluding automated bot events, ensuring that metrics accurately reflect human contributions.
  • Tests

    • Added new tests to verify that only human interactions are used in the performance metrics, enhancing the reliability of pull request analytics.
    • Introduced tests to confirm the exclusion of bot events and their impact on review time calculations.


coderabbitai bot commented Apr 12, 2025

Important

Review skipped

Draft detected.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting reviews.review_status to false in the CodeRabbit configuration file.

Walkthrough

This pull request updates the get_pr_performance method in the analytics service to filter out bot-generated events before processing review metrics. The updated method creates a new list excluding events where the actor’s username ends with "[bot]" or the user type is "Bot", sorts this list by creation time, and then determines the first review from the filtered results. Additionally, new tests are added to verify the correct handling of bot events under various scenarios.
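
To make that concrete, here is a minimal sketch of the filtering step in Python. The attribute names (actor_username, data, created_at) and the location of the user type inside the event payload are taken from this walkthrough and the review comments below, not from the repository itself:

```python
# Minimal sketch of the bot-filtering step described in the walkthrough.
# Attribute names and the data layout are assumptions based on this PR's
# review context, not the repository's exact models.
def filter_out_bot_events(pr_events):
    pr_events_no_bots = [
        event
        for event in pr_events
        if not (
            (event.actor_username or "").endswith("[bot]")
            or event.data.get("user", {}).get("type") == "Bot"
        )
    ]
    # Sort by creation time and take the earliest non-bot event as the
    # first review (None when only bots interacted with the PR).
    pr_events_no_bots.sort(key=lambda event: event.created_at)
    first_review = pr_events_no_bots[0] if pr_events_no_bots else None
    return pr_events_no_bots, first_review
```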

Changes

  • backend/analytics_server/.../etl_code_analytics.py: Updated get_pr_performance to filter out bot events (username ending with "[bot]" or user type "Bot"), sort the filtered events, and adjust the first-review logic.
  • backend/analytics_server/.../test_etl_code_analytics.py: Added tests to ensure bot events are filtered out correctly, covering human events only, mixed events, only bot events, and counting bot blocking reviews.

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant AnalyticsService as get_pr_performance
    participant FilterLogic

    Client->>AnalyticsService: Call get_pr_performance(pr_events)
    AnalyticsService->>FilterLogic: Filter out events with username ending in "[bot]" or user type "Bot"
    FilterLogic-->>AnalyticsService: Return pr_events_no_bots
    AnalyticsService->>AnalyticsService: Sort pr_events_no_bots by created_at
    AnalyticsService->>AnalyticsService: Determine first_review from sorted list
    AnalyticsService-->>Client: Return updated performance metrics

Assessment against linked issues

Objectives from linked issue 635:

  • Identify Reviews by AI/Bots
  • Exclude Bot/AI reviews in Pull Request Cycle Time metrics

Poem

Hop along the code, I sing,
Filtering bots off every string.
Sorted events now make the day,
Human reviews come out to play.
A rabbit’s rhythm in code we bring.
🐇💻


🪧 Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>, please review it.
    • Generate unit testing code for this file.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai generate unit testing code for this file.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
    • @coderabbitai read src/utils.ts and generate unit testing code.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.
    • @coderabbitai help me debug CodeRabbit configuration file.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (Invoked using PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai generate docstrings to generate docstrings for this PR.
  • @coderabbitai resolve to resolve all the CodeRabbit review comments.
  • @coderabbitai plan to trigger planning for file edits and PR creation.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Other keywords and placeholders

  • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai anywhere in the PR title to generate the title automatically.

CodeRabbit Configuration File (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • Please see the configuration documentation for more information.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json

Documentation and Community

  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 7

🧹 Nitpick comments (1)
backend/analytics_server/mhq/service/code/sync/etl_code_analytics.py (1)

69-116: Consider documenting the design intention regarding bot events.

While bot events are filtered out for determining the first review, they are still included in the counts for blocking and approving reviews. This appears to be intentional, but it would be helpful to add a comment explaining this design decision.

 pr_events_no_bots.sort(key=lambda x: x.created_at)
 first_review = pr_events_no_bots[0] if pr_events_no_bots else None
+# Bot events are filtered out for determining first review time,
+# but still counted in blocking/approving review metrics
 approved_reviews = list(
     filter(
         lambda x: x.data["state"] == PullRequestEventState.APPROVED.value,
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c0eaf63 and 98517d6.

📒 Files selected for processing (2)
  • backend/analytics_server/mhq/service/code/sync/etl_code_analytics.py (1 hunks)
  • backend/analytics_server/tests/service/code/sync/test_etl_code_analytics.py (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
backend/analytics_server/tests/service/code/sync/test_etl_code_analytics.py (4)
backend/analytics_server/mhq/service/code/sync/etl_code_analytics.py (2)
  • CodeETLAnalyticsService (15-186)
  • get_pr_performance (55-117)
backend/analytics_server/mhq/utils/time.py (1)
  • time_now (10-11)
backend/analytics_server/tests/factories/models/code.py (1)
  • get_pull_request_event (79-107)
backend/analytics_server/mhq/store/models/code/enums.py (1)
  • PullRequestEventState (24-27)
🪛 GitHub Actions: Unit Tests
backend/analytics_server/tests/service/code/sync/test_etl_code_analytics.py

[error] 211-211: TypeError: get_pull_request_event() got an unexpected keyword argument 'actor_username'


[error] 231-231: TypeError: get_pull_request_event() got an unexpected keyword argument 'actor_username'


[error] 253-253: TypeError: get_pull_request_event() got an unexpected keyword argument 'actor_username'


[error] 274-274: TypeError: get_pull_request_event() got an unexpected keyword argument 'actor_username'
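
These failures indicate that the test factory get_pull_request_event does not yet accept an actor_username keyword. A hypothetical, simplified sketch of how such a parameter could be threaded through is shown below; the stand-in event class and the remaining parameter names are illustrative, since the real factory in tests/factories/models/code.py is not reproduced here:

```python
# Hypothetical sketch only: a simplified stand-in for the real test factory,
# showing how an actor_username keyword could be added.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class FakePullRequestEvent:  # stand-in for the repository's event model
    actor_username: str
    created_at: datetime
    data: dict = field(default_factory=dict)


def get_pull_request_event(
    actor_username="human_reviewer",  # new, optional keyword used by the tests
    state="APPROVED",                 # string stands in for PullRequestEventState
    created_at=None,
    data=None,
):
    event_data = {"state": state, "user": {"login": actor_username}}
    if data:
        event_data.update(data)
    return FakePullRequestEvent(
        actor_username=actor_username,
        created_at=created_at or datetime.now(timezone.utc),
        data=event_data,
    )
```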

⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: All file linting
🔇 Additional comments (5)
backend/analytics_server/mhq/service/code/sync/etl_code_analytics.py (1)

56-68: Implementation looks good for filtering bot events.

The implementation effectively filters out bot events from PR performance calculations by:

  1. Excluding events where the actor's username ends with "[bot]"
  2. Excluding events where the user type is "Bot"
  3. Only using non-bot events for first review time calculation

This approach ensures more accurate metrics focused on human interactions while still counting bot blocking reviews appropriately.
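
As a rough illustration of that distinction (an illustrative sketch only: string literals stand in for PullRequestEventState values, and the variable names follow the diff shown earlier in this review):

```python
# Illustrative only: the first review comes from the bot-filtered list,
# while approving and blocking counts are taken over all review events.
def summarize_reviews(pr_events, pr_events_no_bots):
    first_review = pr_events_no_bots[0] if pr_events_no_bots else None
    approved_reviews = [
        event for event in pr_events
        if event.data["state"] == "APPROVED"           # stand-in for the enum value
    ]
    blocking_reviews = [
        event for event in pr_events
        if event.data["state"] == "CHANGES_REQUESTED"  # stand-in for the enum value
    ]
    return first_review, len(approved_reviews), len(blocking_reviews)
```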

backend/analytics_server/tests/service/code/sync/test_etl_code_analytics.py (4)

204-221: Test logic is correct and appropriately verifies the filtering functionality.

This test correctly verifies that bot events are filtered out when calculating the first review time. Once the parameter issue is fixed, this test will ensure that the PR performance calculation only considers human events for first review time.

🧰 Tools
🪛 GitHub Actions: Unit Tests

[error] 211-211: TypeError: get_pull_request_event() got an unexpected keyword argument 'actor_username'


224-245: Test appropriately checks bot filtering based on user data.

This test properly verifies that events are filtered out based on the user type being "Bot" in the event data. This covers the second condition in the bot filtering logic.

🧰 Tools
🪛 GitHub Actions: Unit Tests

[error] 231-231: TypeError: get_pull_request_event() got an unexpected keyword argument 'actor_username'


247-266: Test correctly handles edge case with only bot events.

This test addresses an important edge case where only bot events are present, ensuring that the function returns -1 for first review time when no human reviews are available.

🧰 Tools
🪛 GitHub Actions: Unit Tests

[error] 253-253: TypeError: get_pull_request_event() got an unexpected keyword argument 'actor_username'


268-285: Test correctly verifies that bot blocking reviews are still counted.

This test confirms that while bot events are filtered out for first review time, they are still included in the count of blocking reviews. This is an important distinction in the implementation.

🧰 Tools
🪛 GitHub Actions: Unit Tests

[error] 274-274: TypeError: get_pull_request_event() got an unexpected keyword argument 'actor_username'


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 98517d6 and e8b2f6b.

📒 Files selected for processing (1)
  • backend/analytics_server/tests/service/code/sync/test_etl_code_analytics.py (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
backend/analytics_server/tests/service/code/sync/test_etl_code_analytics.py (4)
backend/analytics_server/mhq/service/code/sync/etl_code_analytics.py (2)
  • CodeETLAnalyticsService (15-186)
  • get_pr_performance (55-117)
backend/analytics_server/mhq/utils/time.py (1)
  • time_now (10-11)
backend/analytics_server/tests/factories/models/code.py (1)
  • get_pull_request_event (79-107)
backend/analytics_server/mhq/store/models/code/enums.py (1)
  • PullRequestEventState (24-27)
🪛 GitHub Actions: Unit Tests
backend/analytics_server/tests/service/code/sync/test_etl_code_analytics.py

[error] 242-242: KeyError: 'state' in test_pr_performance_filters_out_bot_events_with_user_data


[error] 263-263: KeyError: 'state' in test_pr_performance_with_only_bot_events_returns_no_first_review
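
The KeyError suggests that events built with a custom data payload in these tests omit the 'state' key that get_pr_performance reads. A hedged example of a payload that keeps both pieces of information, with key names inferred from the review comments above rather than from the repository:

```python
# Inferred payload shape: get_pr_performance reads data["state"], so a bot
# event still needs a 'state' entry alongside the user-type marker.
bot_event_data = {
    "state": "CHANGES_REQUESTED",  # assumed enum value; keeps data["state"] defined
    "user": {"type": "Bot"},       # triggers the user-type branch of the bot filter
}
```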

⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: All file linting
🔇 Additional comments (2)
backend/analytics_server/tests/service/code/sync/test_etl_code_analytics.py (2)

204-222: Looks good! Test verifies bot events are filtered out correctly.

The test properly verifies that events from users with names ending in "[bot]" are excluded from PR performance calculations, ensuring that only human events are considered.


268-285: Looks good! Test verifies bot blocking reviews are counted correctly.

This test correctly verifies that bot events requesting changes are counted as blocking reviews. It ensures that while bot reviews are excluded from first review time calculations, they are still considered when counting blocking reviews.
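
For reference, a rough shape of such a test, assuming the factory accepts an actor_username keyword, that get_pr_performance takes the event list as in the walkthrough's sequence diagram, and that the result exposes a blocking_reviews field; all of these are assumptions, not the repository's exact signatures:

```python
# Sketch only: import paths, signatures, and result fields are inferred from
# the review context above, not copied from the repository.
from mhq.service.code.sync.etl_code_analytics import CodeETLAnalyticsService
from tests.factories.models.code import get_pull_request_event


def test_bot_blocking_reviews_are_still_counted():
    service = CodeETLAnalyticsService()
    human_approval = get_pull_request_event(
        actor_username="alice",
        state="APPROVED",              # string stands in for the enum value
    )
    bot_block = get_pull_request_event(
        actor_username="dependabot[bot]",
        state="CHANGES_REQUESTED",     # bot requests changes
    )

    performance = service.get_pr_performance([human_approval, bot_block])

    # The bot event is excluded from first-review time, but its
    # changes-requested state should still count as a blocking review.
    assert performance.blocking_reviews == 1
```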

@Ayyanaruto Ayyanaruto marked this pull request as draft April 12, 2025 07:47
@Ayyanaruto Ayyanaruto closed this Apr 12, 2025
@Ayyanaruto Ayyanaruto deleted the filter-bot branch April 12, 2025 08:31