Conversation


@Nachiket-Roy Nachiket-Roy commented Dec 8, 2025

Closes: #3389

Summary

This PR introduces a dedicated GSOC Pull Request Report view, accessible from the projects page.
It provides a historical overview of all merged PRs during each year’s GSOC period (May–September) for the past 10 years, grouped by repository.
(Screenshots attached to the PR.)

Note: new chart UI (screenshot attached to the PR).

Key Features

  • Generates a year-by-year report of merged PRs from the GSOC months only (May 1–Sept 30).
  • Groups results per repository for each year, including:
      • PR count
      • Unique contributors
      • Repository URL and quick-access PR links
  • Computes summary statistics:
      • Total years, repositories, and PRs
      • Average PRs per year
  • Provides dataset exports and raw-data inspection directly in the UI.
  • Adds two interactive charts using ApexCharts:
      • Yearly PR Contributions (bar chart)
      • Top Repositories Across All Years (donut chart)
  • Filters out bot accounts from the report data.

Technical Notes

  • Introduces a new view: gsoc_pr_report
  • Adds a new route: /gsoc/pr-report/
  • Adds a full UI page: gsoc_pr_report.html
  • Includes data normalization for chart inputs.
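The year-window grouping described above can be sketched in plain Python. This is a standalone illustration with hypothetical sample records (repo and contributor names are made up); the real view performs the equivalent aggregation through the Django ORM against the GitHubIssue model.

```python
from collections import defaultdict
from datetime import date

# Hypothetical merged-PR records; the real view reads GitHubIssue rows.
prs = [
    {"repo": "BLT", "contributor": "alice", "merged_at": date(2024, 6, 10)},
    {"repo": "BLT", "contributor": "bob", "merged_at": date(2024, 7, 2)},
    {"repo": "BLT-Flutter", "contributor": "alice", "merged_at": date(2024, 8, 20)},
    {"repo": "BLT", "contributor": "carol", "merged_at": date(2024, 11, 1)},  # outside GSOC window
]

def gsoc_report(prs):
    """Group merged PRs from the GSOC window (May 1 - Sept 30) by year and repo."""
    by_year = defaultdict(lambda: defaultdict(lambda: {"pr_count": 0, "contributors": set()}))
    for pr in prs:
        merged = pr["merged_at"]
        if not (5 <= merged.month <= 9):  # keep only GSOC-period merges
            continue
        bucket = by_year[merged.year][pr["repo"]]
        bucket["pr_count"] += 1
        bucket["contributors"].add(pr["contributor"])
    return by_year

report = gsoc_report(prs)
print(report[2024]["BLT"]["pr_count"])  # 2 (the November merge is excluded)
```

Per-year PR counts, unique-contributor counts, and the summary statistics (totals and averages) all fall out of this grouping.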

This report allows the organization to:

  • Review GSOC activity over the last decade
  • Understand contribution patterns across repositories
  • Identify high-activity years and top-performing projects
  • Prepare metrics for GSoC submissions, annual reports, and presentations

Summary by CodeRabbit

  • New Features

    • GSOC PR Report dashboard at /gsoc/pr-report: summary cards, interactive charts (yearly bar, top-repos donut), detailed year-by-year views, and no-data handling.
    • Header and sidenav links to access the report.
    • Client-side actions: Refresh, Export Charts (JSON), Download Report (PDF/HTML), View Raw Data.
  • Chores

    • Database migration and model indexing to improve PR query performance.
  • Stability

    • Server-side error handling and logging for report generation.



github-actions bot commented Dec 8, 2025

👋 Hi @Nachiket-Roy!

This pull request needs a peer review before it can be merged. Please request a review from a team member who is not:

  • The PR author
  • DonnieBLT
  • coderabbitai
  • copilot

Once a valid peer review is submitted, this check will pass automatically. Thank you!

@github-actions github-actions bot added the "files-changed: 3" and "needs-peer-review" labels Dec 8, 2025

coderabbitai bot commented Dec 8, 2025

Warning

Rate limit exceeded

@DonnieBLT has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 1 minute and 55 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

📥 Commits

Reviewing files that changed from the base of the PR and between 93ce158 and c6be2b0.

📒 Files selected for processing (4)
  • blt/urls.py (2 hunks)
  • website/templates/includes/header.html (1 hunks)
  • website/templates/includes/sidenav.html (1 hunks)
  • website/views/project.py (4 hunks)

Walkthrough

Adds a GSOC PR reporting feature: new URL and Django view that aggregates merged PRs by year/repo, a template embedding escaped JSON, client-side JS to parse/render/export charts and reports, navigation links, and DB indexes plus migration to support PR queries.

Changes

  • URL Configuration — blt/urls.py
    Registers path("gsoc/pr-report/", ...) mapped to the new view name gsoc_pr_report.
  • View & logging — website/views/project.py
    Adds logger = logging.getLogger(__name__), imports defaultdict and GitHubIssue, and a new view gsoc_pr_report(request) that queries merged PRs, aggregates yearly/repo stats and contributors, prepares summary/chart/report JSON, handles errors, and renders the template.
  • Templates — website/templates/projects/gsoc_pr_report.html, website/templates/includes/header.html, website/templates/includes/sidenav.html
    Adds gsoc_pr_report.html embedding escaped/precomputed JSON and loading gsoc_pr_report.js; adds "GSOC PR Reports" links in header and sidenav with active-state logic.
  • Client-side JS (charts & export) — website/static/js/gsoc_pr_report.js
    New script exposing window.gsocData; parsing helpers (parseEscapedJSON, parseDataFromDOM, parseDataDirectly), normalization (getChartData), chart rendering (ApexCharts bar + donut), export features (JSON, PDF via jsPDF+html2canvas, HTML), dynamic loader (loadScript), raw-data viewer, and related utilities.
  • Database model changes — website/models.py
    Adds indexes to GitHubIssue.Meta: a composite index on (type, is_merged, merged_at) named githubissue_pr_merged_idx and an index on merged_at named githubissue_merged_date_idx.
  • Migration — website/migrations/0262_rename_website_soc_user_id_provid_idx_website_soc_user_id_fb0840_idx_and_more.py
    New migration renaming an index on socialaccountreward and adding the two new githubissue indexes; includes dependencies.

Sequence Diagram

sequenceDiagram
    actor User
    participant Browser
    participant DjangoView as Django View (gsoc_pr_report)
    participant Database
    participant TemplateEngine as Template Engine
    participant ClientJS as Client-side JS

    User->>Browser: Navigate to /gsoc/pr-report/
    Browser->>DjangoView: HTTP GET /gsoc/pr-report/
    DjangoView->>Database: Query merged PR records (GitHubIssue)
    Database-->>DjangoView: PR rows
    DjangoView->>DjangoView: Aggregate yearly & repo stats, build JSON payload
    DjangoView->>TemplateEngine: Render `gsoc_pr_report.html` (embed escaped JSON)
    TemplateEngine-->>Browser: HTML with embedded data + script tag
    Browser->>ClientJS: Load & execute `gsoc_pr_report.js`
    ClientJS->>ClientJS: Parse embedded JSON into window.gsocData (fallbacks if needed)
    ClientJS->>ClientJS: Normalize data and initialize ApexCharts (bar + donut)
    User-->>Browser: Trigger export/download/view raw actions
    Browser->>ClientJS: Export actions (JSON/PDF/HTML) using dynamic script loading

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

  • Review aggregation/filtering in website/views/project.py for correct merged-PR selection, date boundaries, ORM efficiency, and use of new indexes.
  • Verify migration 0262_... index names, fields, and dependencies match website/models.py.
  • Inspect JSON encoding/escaping in website/templates/projects/gsoc_pr_report.html and parsing fallbacks in website/static/js/gsoc_pr_report.js for XSS and robustness.
  • Test chart initialization/responsiveness and multiple input-shape branches in gsoc_pr_report.js.
  • Validate PDF/HTML export flows, dynamic script loading (SRI/timeout), and error handling in website/static/js/gsoc_pr_report.js.
  • Confirm URL name consistency and active-state logic in header/sidenav templates.

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning — Docstring coverage is 58.82%, which is below the required threshold of 80.00%. Run @coderabbitai generate docstrings to improve coverage.

✅ Passed checks (2 passed)

  • Description Check ✅ Passed — Check skipped: CodeRabbit's high-level summary is enabled.
  • Title Check ✅ Passed — The title directly and accurately reflects the main feature being added: a 10-year GSOC PR activity report aggregating data across multiple OWASP repositories, which aligns with all file changes (view, template, JavaScript, URL routing, navigation updates).


@github-actions github-actions bot added the "pre-commit: passed" label Dec 8, 2025

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🧹 Nitpick comments (3)
website/views/project.py (3)

2234-2234: Move the import to the module level.

The defaultdict import should be at the top of the file with other imports rather than inside the function.

Apply this diff:

At the top of the file (around line 1-11), add:

+from collections import defaultdict
 import concurrent.futures
 import ipaddress
 import json

Then remove the import from inside the function:

-    from collections import defaultdict
-
     repo_totals = defaultdict(int)

2256-2269: Remove duplicate context data.

The context contains duplicate JSON-encoded data:

  • yearly_chart_data_json and yearly_contribution_data_json both encode the same yearly_chart_data list (line 2264-2265)
  • Both report_data and gsoc_data_json contain the same information in different formats

This creates unnecessary data duplication and may cause confusion.

Apply this diff to remove the duplicate:

     context = {
         "report_data": report_data,
         "start_year": start_year,
         "end_year": current_year,
         "total_years": total_years,
         "total_repos": total_repos,
         "total_prs": total_prs,
         "summary_data": summary_data,
         "yearly_chart_data_json": json.dumps(yearly_chart_data),
-        "yearly_contribution_data_json": json.dumps(yearly_chart_data),
         "gsoc_data_json": json.dumps(report_data),
         "avg_prs_per_year": avg_prs_per_year,
         "top_repos_chart_data_json": json.dumps(top_repos_chart_data),
     }

Then update the template to use yearly_chart_data_json instead of yearly_contribution_data_json.


2197-2271: Add error handling for database queries and JSON serialization.

The function lacks error handling for database operations and JSON serialization. If queries fail or data cannot be serialized, users will see a generic 500 error instead of a helpful message.

Consider wrapping the main logic in a try-except block:

def gsoc_pr_report(request):
    try:
        current_year = datetime.now().year
        start_year = current_year - 10
        # ... rest of the logic ...
        
        return render(request, "projects/gsoc_pr_report.html", context)
    except GitHubIssue.DoesNotExist:
        messages.error(request, "No PR data available for GSOC analysis.")
        return redirect("project_list")
    except Exception as e:
        logger.error(f"Error generating GSOC PR report: {e}")
        messages.error(request, "An error occurred while generating the report. Please try again later.")
        return redirect("project_list")
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting

📥 Commits

Reviewing files that changed from the base of the PR and between 051884a and a3d625e.

📒 Files selected for processing (3)
  • blt/urls.py (2 hunks)
  • website/templates/projects/gsoc_pr_report.html (1 hunks)
  • website/views/project.py (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
blt/urls.py (1)
website/views/project.py (1)
  • gsoc_pr_report (2197-2271)
website/views/project.py (1)
website/models.py (1)
  • GitHubIssue (2089-2315)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
  • GitHub Check: auto-assign
  • GitHub Check: add_changes_requested_label
  • GitHub Check: check_peer_review
  • GitHub Check: Run Tests
  • GitHub Check: Run pre-commit
  • GitHub Check: docker-test
🔇 Additional comments (5)
blt/urls.py (2)

307-307: LGTM!

The import is correctly placed and the function exists in the target module.


1244-1244: Verify that public access to the gsoc PR report endpoint is intentional.

This URL pattern does not include authentication decorators. If gsoc_pr_report lacks authentication at the view level and should be restricted to authenticated users, add the @login_required decorator or appropriate permission checks.

website/views/project.py (1)

47-47: LGTM!

The GitHubIssue import is correctly placed and necessary for the new GSOC PR report functionality.

website/templates/projects/gsoc_pr_report.html (2)

338-353: LGTM!

The data parsing logic includes proper error handling with try-catch, which prevents the page from breaking if JSON parsing fails.


384-417: LGTM!

The chart initialization includes proper guards to only render when data exists, preventing JavaScript errors on empty datasets.

@github-project-automation github-project-automation bot moved this from Backlog to Ready in 📌 OWASP BLT Project Board Dec 8, 2025
@github-actions github-actions bot added the "tests: passed" label Dec 8, 2025
@Nachiket-Roy Nachiket-Roy marked this pull request as draft December 8, 2025 20:20
@github-actions github-actions bot added the "changes-requested" label Dec 8, 2025
@github-actions github-actions bot removed the "changes-requested" label Dec 8, 2025

github-actions bot commented Dec 8, 2025

❌ Tests failed

The Django tests found issues that need to be fixed. Please review the test output below and fix the failing tests.

How to run tests locally

# Install dependencies
poetry install --with dev

# Run all tests
poetry run python manage.py test

# Run tests with verbose output
poetry run python manage.py test -v 3

# Run a specific test
poetry run python manage.py test app.tests.TestClass.test_method
Test output (last 100 lines)
Adding content type 'authtoken | tokenproxy'
Adding permission 'Permission object (409)'
Adding permission 'Permission object (410)'
Adding permission 'Permission object (411)'
Adding permission 'Permission object (412)'
Adding permission 'Permission object (413)'
Adding permission 'Permission object (414)'
Adding permission 'Permission object (415)'
Adding permission 'Permission object (416)'
Running post-migrate handlers for application tz_detect
Running post-migrate handlers for application star_ratings
Adding content type 'star_ratings | rating'
Adding content type 'star_ratings | userrating'
Adding permission 'Permission object (417)'
Adding permission 'Permission object (418)'
Adding permission 'Permission object (419)'
Adding permission 'Permission object (420)'
Adding permission 'Permission object (421)'
Adding permission 'Permission object (422)'
Adding permission 'Permission object (423)'
Adding permission 'Permission object (424)'
Running post-migrate handlers for application captcha
Adding content type 'captcha | captchastore'
Adding permission 'Permission object (425)'
Adding permission 'Permission object (426)'
Adding permission 'Permission object (427)'
Adding permission 'Permission object (428)'
Running post-migrate handlers for application dj_rest_auth
Traceback (most recent call last):
  File "/home/runner/work/BLT/BLT/manage.py", line 10, in <module>
    execute_from_command_line(sys.argv)
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/core/management/__init__.py", line 442, in execute_from_command_line
    utility.execute()
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/core/management/__init__.py", line 436, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/core/management/commands/test.py", line 24, in run_from_argv
    super().run_from_argv(argv)
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/core/management/base.py", line 416, in run_from_argv
    self.execute(*args, **cmd_options)
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/core/management/base.py", line 460, in execute
    output = self.handle(*args, **options)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/core/management/commands/test.py", line 63, in handle
    failures = test_runner.run_tests(test_labels)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/test/runner.py", line 1098, in run_tests
    self.run_checks(databases)
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/test/runner.py", line 1020, in run_checks
    call_command("check", verbosity=self.verbosity, databases=databases)
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/core/management/__init__.py", line 194, in call_command
    return command.execute(*args, **defaults)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/core/management/base.py", line 460, in execute
    output = self.handle(*args, **options)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/core/management/commands/check.py", line 81, in handle
    self.check(
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/core/management/base.py", line 492, in check
    all_issues = checks.run_checks(
                 ^^^^^^^^^^^^^^^^^^
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/core/checks/registry.py", line 89, in run_checks
    new_errors = check(app_configs=app_configs, databases=databases)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/core/checks/urls.py", line 16, in check_url_config
    return check_resolver(resolver)
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/core/checks/urls.py", line 26, in check_resolver
    return check_method()
           ^^^^^^^^^^^^^^
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/urls/resolvers.py", line 531, in check
    for pattern in self.url_patterns:
                   ^^^^^^^^^^^^^^^^^
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/utils/functional.py", line 47, in __get__
    res = instance.__dict__[self.name] = self.func(instance)
                                         ^^^^^^^^^^^^^^^^^^^
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/urls/resolvers.py", line 718, in url_patterns
    patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module)
                       ^^^^^^^^^^^^^^^^^^^
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/utils/functional.py", line 47, in __get__
    res = instance.__dict__[self.name] = self.func(instance)
                                         ^^^^^^^^^^^^^^^^^^^
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/urls/resolvers.py", line 711, in urlconf_module
    return import_module(self.urlconf_name)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/hostedtoolcache/Python/3.11.2/x64/lib/python3.11/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "<frozen importlib._bootstrap>", line 1206, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1178, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1149, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 940, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/home/runner/work/BLT/BLT/blt/urls.py", line 296, in <module>
    from website.views.project import (
  File "/home/runner/work/BLT/BLT/website/views/project.py", line 56, in <module>
    logger = logging.getLogger(__name__)
             ^^^^^^^
NameError: name 'logging' is not defined

For more information, see the Django testing documentation.
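The traceback above shows that website/views/project.py calls logging.getLogger(__name__) without having imported logging. The fix is a one-line import at the top of the module, sketched here as a self-contained snippet:

```python
# The module must import logging before calling logging.getLogger();
# omitting the import is exactly what raised the NameError in CI.
import logging

logger = logging.getLogger(__name__)
logger.debug("module logger initialised")
```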

@github-actions github-actions bot added the "tests: failed" and "tests: passed" labels and removed the "tests: passed" and "tests: failed" labels Dec 8, 2025
@Nachiket-Roy Nachiket-Roy marked this pull request as ready for review December 8, 2025 20:59

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🧹 Nitpick comments (2)
website/views/project.py (2)

2212-2219: Consider filtering out PRs without associated repositories.

The GitHubIssue.repo field is nullable. PRs without an associated repo will result in repo__name and repo__repo_url being None, which could cause issues in the template when rendering links or aggregating by repo name.

Add a filter to exclude issues without repos:

             repos_with_prs = (
                 GitHubIssue.objects.filter(
-                    type="pull_request", is_merged=True, merged_at__gte=start_date, merged_at__lte=end_date
+                    type="pull_request", is_merged=True, merged_at__gte=start_date, merged_at__lte=end_date,
+                    repo__isnull=False
                 )
                 .values("repo__name", "repo__repo_url")
                 .annotate(pr_count=Count("id"), unique_contributors=Count("contributor", distinct=True))
                 .order_by("-pr_count")
             )

2208-2228: Consider single-query optimization if performance becomes a concern.

The current implementation executes one query per year (11 queries total). This is acceptable for a low-traffic report page, but if performance becomes an issue, consider fetching all data in a single query using ExtractYear and grouping in Python.
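The single-query variant mentioned above could look roughly like this. This is an untested sketch against this codebase; it assumes the GitHubIssue fields and lookups already used elsewhere in the PR, and the start_year/current_year variables from the existing view:

```python
# Sketch: one aggregate query for all years instead of one query per year.
from django.db.models import Count
from django.db.models.functions import ExtractYear

rows = (
    GitHubIssue.objects.filter(
        type="pull_request",
        is_merged=True,
        merged_at__year__gte=start_year,
        merged_at__year__lte=current_year,
        merged_at__month__gte=5,  # GSOC window: May ...
        merged_at__month__lte=9,  # ... through September
    )
    .annotate(year=ExtractYear("merged_at"))
    .values("year", "repo__name", "repo__repo_url")
    .annotate(pr_count=Count("id"), unique_contributors=Count("contributor", distinct=True))
    .order_by("year", "-pr_count")
)
```

One trade-off worth noting: month-based lookups are less friendly to a plain merged_at range index than the per-year date ranges, so this is only worth doing if the query count itself becomes the bottleneck.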

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting

📥 Commits

Reviewing files that changed from the base of the PR and between a3d625e and 7ae17b6.

📒 Files selected for processing (2)
  • website/templates/projects/gsoc_pr_report.html (1 hunks)
  • website/views/project.py (3 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
website/views/project.py (1)
website/models.py (3)
  • GitHubIssue (2089-2315)
  • Repo (1903-1980)
  • UserProfile (896-1109)
🔇 Additional comments (5)
website/views/project.py (2)

4-4: LGTM!

Import additions are appropriate for the new GSOC PR report functionality.

Also applies to: 9-9, 49-49, 57-57


2279-2282: LGTM!

Error handling appropriately logs the error with context, provides a user-friendly message, and redirects to a safe fallback page.

website/templates/projects/gsoc_pr_report.html (3)

7-12: LGTM!

Data container correctly uses escapejs filter on pre-serialized JSON data from the view.


360-393: LGTM!

Chart initialization correctly checks for data existence before rendering and uses appropriate ApexCharts configurations.


396-411: LGTM!

Export function correctly creates a downloadable JSON file with timestamp in filename.

@Nachiket-Roy

@coderabbitai Other than the current fixes, could you please suggest what else remains to be done in this PR (charts, filters, pagination, etc.)?

@Nachiket-Roy

New chart UI (screenshot attached to the PR).

@github-actions github-actions bot added the "tests: passed" label and removed the "tests: failed" label Dec 12, 2025
@Nachiket-Roy

Does this PR need indexes for better performance, @coderabbitai?


coderabbitai bot commented Dec 12, 2025

Tip

For best results, initiate chat on the files or code changes.

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Read the gsoc_pr_report view to understand the queries
echo "=== Reading gsoc_pr_report view implementation ==="
cat website/views/project.py | grep -A 150 "def gsoc_pr_report"

echo -e "\n=== Checking GitHubIssue model definition for existing indexes ==="
rg -n "class GitHubIssue" website/models.py -A 100 | head -150

echo -e "\n=== Looking for Meta class and db_index declarations ==="
rg -n "db_index|index_together|indexes|Meta" website/models.py -B 2 -A 10 | grep -A 10 "GitHubIssue\|class Meta"

echo -e "\n=== Checking for any database migration files that add indexes ==="
fd -e py . website/migrations/ | xargs rg -l "AddIndex|add_index|db_index" | tail -10 | xargs ls -lh

Length of output: 28478


@Nachiket-Roy Yes, this PR definitely needs database indexes for optimal performance! 🎯

🔍 Current Performance Issues

Your gsoc_pr_report view queries the GitHubIssue table 10 times (once per year) with these filters:

GitHubIssue.objects.filter(
    type="pull_request",        # ❌ No index
    is_merged=True,              # ❌ No index
    merged_at__gte=start_date,   # ❌ No index
    merged_at__lte=end_date,     # ❌ No index
)

Problem: The GitHubIssue model has NO indexes on these fields! Only automatic indexes exist:

  • ✅ Primary key (id)
  • ✅ Foreign keys (repo, contributor, etc.)
  • ✅ Unique constraint (issue_id, repo)

Without indexes, Django will perform full table scans on every query, which becomes extremely slow as data grows.


🚀 Recommended Indexes

1. Composite Index (Highest Priority)

Create a composite index covering all three filter fields:

# In website/models.py, add to GitHubIssue.Meta:

class GitHubIssue(models.Model):
    # ... existing fields ...
    
    class Meta:
        unique_together = ("issue_id", "repo")
        indexes = [
            # Composite index for GSOC PR report queries
            models.Index(
                fields=["type", "is_merged", "merged_at"],
                name="githubissue_pr_merged_idx",
            ),
            # Additional index for date range queries
            models.Index(
                fields=["merged_at"],
                name="githubissue_merged_date_idx",
            ),
        ]

Why this works:

  • ✅ Covers all three filters in your query
  • ✅ Optimizes range queries on merged_at
  • ✅ Enables index-only scans for aggregations
  • ✅ Dramatically reduces query time from seconds to milliseconds

2. Create Database Migration

Generate and apply the migration:

# Generate migration
python manage.py makemigrations website --name add_githubissue_indexes

# Review the migration file (should look like below)
cat website/migrations/XXXX_add_githubissue_indexes.py

# Apply migration
python manage.py migrate website

Expected migration content:

from django.db import migrations, models

class Migration(migrations.Migration):
    dependencies = [
        ("website", "XXXX_previous_migration"),
    ]

    operations = [
        migrations.AddIndex(
            model_name="githubissue",
            index=models.Index(
                fields=["type", "is_merged", "merged_at"],
                name="githubissue_pr_merged_idx",
            ),
        ),
        migrations.AddIndex(
            model_name="githubissue",
            index=models.Index(
                fields=["merged_at"],
                name="githubissue_merged_date_idx",
            ),
        ),
    ]

📊 Performance Impact

Before Indexes:

Query Time: 2-5 seconds (full table scan)
Database Load: High CPU usage
Scalability: Degrades linearly with data growth

After Indexes:

Query Time: 50-200ms (index scan)
Database Load: Minimal CPU usage
Scalability: Constant performance regardless of data size

Expected improvement: 10-100x faster depending on data volume! 🚀


🛠️ Additional Optimization

Add select_related to reduce database queries:

# In website/views/project.py, update the query:

repos_qs = (
    GitHubIssue.objects
    .select_related('repo', 'contributor')  # ← Add this line
    .filter(
        type="pull_request",
        is_merged=True,
        merged_at__gte=start_date,
        merged_at__lte=end_date,
    )
    # ... rest of query
)

This reduces the number of queries from N+1 to 1 by fetching related objects in a single JOIN.


✅ Testing Strategy

1. Test Without Data

python manage.py migrate
python manage.py test website.tests  # Ensure no regressions

2. Test With Sample Data

# Check query performance in Django shell
python manage.py shell

from website.models import GitHubIssue
from django.db import connection
from django.test.utils import CaptureQueriesContext

with CaptureQueriesContext(connection) as queries:
    # Run your query
    results = GitHubIssue.objects.filter(
        type="pull_request",
        is_merged=True,
        merged_at__gte="2024-05-01",
        merged_at__lte="2024-10-01",
    ).count()
    
    # Check queries
    for q in queries:
        print(q['sql'])
        print(f"Time: {q['time']}s\n")

Run the query with .explain() (or EXPLAIN in raw SQL) and look for githubissue_pr_merged_idx in the query plan to confirm the index is being used.

3. Test in Staging

  • Deploy to staging environment
  • Load production data snapshot
  • Verify report loads in <1 second
  • Monitor database CPU/memory usage

📋 Summary

  • Add composite index (type, is_merged, merged_at) — Priority: 🔴 Critical — Impact: 10-100x faster
  • Add merged_at index — Priority: 🟡 High — Impact: speeds up date-range queries
  • Add select_related('repo', 'contributor') — Priority: 🟢 Medium — Impact: reduces query count

🎯 Recommendation

Include the indexes in this PR because:

  1. ✅ Simple change (just Meta class update + migration)
  2. ✅ Essential for production performance
  3. ✅ No risk (indexes are transparent to application logic)
  4. ✅ Future-proof (helps all date-range PR queries, not just GSOC report)

The migration is safe and can be applied without downtime on most databases.

Would you like me to:

  1. Generate the complete model code with indexes?
  2. Create a separate performance optimization PR if you prefer?
  3. Help test the indexes after you apply them?

Great work on the new chart UI! 🎨 The indexes will ensure it stays fast even with years of data! 🚀

@github-actions github-actions bot added the "migrations" and "files-changed: 8" labels and removed the "files-changed: 6" label Dec 12, 2025

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
website/views/project.py (1)

2207-2229: Fix GSOC end boundary to a true half-open interval (__lt, not __lte).
Right now an Oct 1st merge exactly at 00:00:00 would be included.

             end_date = timezone.make_aware(datetime(year, 10, 1))
 ...
                     merged_at__gte=start_date,
-                    merged_at__lte=end_date,
+                    merged_at__lt=end_date,
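The suggested half-open interval can be sanity-checked with plain datetimes; the dates below mirror the GSOC window in the view:

```python
from datetime import datetime

start = datetime(2024, 5, 1)
end = datetime(2024, 10, 1)  # exclusive upper bound

oct_first_midnight = datetime(2024, 10, 1, 0, 0, 0)
sep_30_late = datetime(2024, 9, 30, 23, 59, 59)

# __lte (<=) wrongly includes a merge stamped exactly Oct 1 00:00:00:
assert start <= oct_first_midnight <= end

# __lt (<) keeps the window half-open, i.e. May 1 through Sep 30 only:
assert not (start <= oct_first_midnight < end)
assert start <= sep_30_late < end
```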
🧹 Nitpick comments (2)
website/models.py (1)

2161-2172: Indexes align well with GSOC report query shape (equality → range).
Composite (type, is_merged, merged_at) + merged_at is a sensible, low-risk performance win for the new reporting view.

If you’re on Postgres, consider a conditional/partial index (e.g., only for type='pull_request' AND is_merged=true) to keep the index smaller and more selective; otherwise current approach is fine.
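As a runnable sketch of that idea, SQLite also supports partial indexes with the same `CREATE INDEX ... WHERE` syntax as Postgres; the table and index names below are illustrative, not taken from the PR:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE issue (id INTEGER PRIMARY KEY, type TEXT, "
    "is_merged INTEGER, merged_at TEXT)"
)

# Partial index: only rows matching the report's predicate are indexed,
# keeping the index small and selective.
conn.execute(
    "CREATE INDEX merged_pr_partial_idx ON issue (merged_at) "
    "WHERE type = 'pull_request' AND is_merged = 1"
)

# The query's WHERE clause repeats the index predicate, so the
# planner can prove the partial index is usable.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT COUNT(*) FROM issue "
    "WHERE type = 'pull_request' AND is_merged = 1 "
    "AND merged_at >= '2024-05-01' AND merged_at < '2024-10-01'"
).fetchall()

detail = " ".join(str(row[-1]) for row in plan)
assert "merged_pr_partial_idx" in detail
```

Note that partial indexes are only chosen when the query's predicate implies the index's WHERE clause, so the equality terms must appear in the query as well.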

website/views/project.py (1)

2212-2235: select_related(...).values(...) likely has no effect.
Since you call .values(...), the join/related-object caching from select_related won’t be used (and can be removed for clarity).

-            repos_qs = (
-                GitHubIssue.objects.select_related("repo", "contributor")
+            repos_qs = (
+                GitHubIssue.objects
                 .filter(
                     type="pull_request",
                     is_merged=True,
                     merged_at__gte=start_date,
-                    merged_at__lte=end_date,
+                    merged_at__lt=end_date,
                 )
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting

📥 Commits

Reviewing files that changed from the base of the PR and between acb8b38 and 00b4614.

📒 Files selected for processing (4)
  • website/migrations/0262_rename_website_soc_user_id_provid_idx_website_soc_user_id_fb0840_idx_and_more.py (1 hunks)
  • website/models.py (1 hunks)
  • website/static/js/gsoc_pr_report.js (1 hunks)
  • website/views/project.py (3 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
website/views/project.py (2)
website/models.py (3)
  • GitHubIssue (2090-2328)
  • Repo (1904-1981)
  • UserProfile (896-1109)
website/utils.py (1)
  • admin_required (247-248)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: Run Tests
  • GitHub Check: docker-test
🔇 Additional comments (11)
website/migrations/0262_rename_website_soc_user_id_provid_idx_website_soc_user_id_fb0840_idx_and_more.py (2)

19-26: Migration indexes match model declaration (good).
These AddIndex ops correctly reflect the new GitHubIssue.Meta.indexes.


14-18: RenameIndex can be fragile across environments—please verify it exists everywhere.
If any environment doesn't have website_soc_user_id_provid_idx, this migration will fail at deploy time.

Please run migrations on a copy of prod schema / CI from a fresh clone + from an existing DB snapshot to confirm the rename succeeds.

website/static/js/gsoc_pr_report.js (9)

1-14: LGTM! Clean global initialization and error logging.

The global window.gsocData provides a clear API surface, and safeLog properly replaces console statements to satisfy CI requirements while preserving error tracking via Sentry when available.


16-73: LGTM! Robust data extraction with comprehensive fallbacks.

The function handles multiple data formats gracefully and safely coerces types. The 30-character truncation for repository names (lines 49, 56, 61) prevents chart layout issues while maintaining readability.


75-130: LGTM! Safe JSON parsing with proper error handling.

The escaped JSON parser correctly handles unicode sequences and common escapes. Error handling uses safeLog (line 92, 107) which properly addresses the CI console statement requirement from previous reviews.


132-452: LGTM! Well-structured chart initialization with proper safeguards.

The charts are initialized with comprehensive validation (lines 136-146), dark mode support (lines 165-168, 327-329), and proper error handling (lines 313-316, 443-446). Storing chart instances globally (lines 312, 442) enables the PDF export feature.


454-513: LGTM! Export and fallback chart with proper safeguards.

The JSON export uses Blob API correctly with cleanup (line 477). The fallback chart function includes the safeMax guard (line 486) that addresses the division-by-zero issue from previous reviews, and properly checks for empty arrays (line 482).


516-716: Excellent! All critical PDF generation issues resolved.

This comprehensive implementation addresses all previous review concerns:

  • ✅ Null-check for button (lines 521-525)
  • ✅ SRI-pinned CDN scripts (lines 531-536) with library validation (537-539)
  • ✅ Explicit String() coercion for jsPDF (lines 569, 578, 587, 596, 668, 678)
  • ✅ Safe filename with 'unknown' fallback (line 698)
  • ✅ Error logging via safeLog (line 707)
  • ✅ Proper button state restoration on error (lines 711-713)

The pagination logic (lines 661-683), chart export with fallbacks (lines 609-650), and multi-page footer handling (lines 687-695) are well-implemented.


718-751: LGTM! Robust script loader with security features.

The dynamic script loader implements best practices: SRI support (lines 729-732), duplicate prevention (lines 721-724), timeout protection (735-737), and proper cleanup in both success and error paths.


754-900: Excellent! HTML report generation is secure and complete.

The escapeHtml implementation (lines 754-761) correctly handles all necessary entities. The function is consistently applied throughout the HTML report generation to all dynamic content (lines 797, 798, 803, 807, 811, 815, 838, 839, 859-861), preventing script injection vulnerabilities identified in previous reviews.

The filename construction (line 895) correctly uses raw year values without HTML escaping, as the download attribute expects plain text—properly addressing the feedback from prior reviews.


902-913: LGTM! Raw data viewer uses safe DOM manipulation.

The function correctly uses textContent (line 911) instead of document.write, addressing the injection risk flagged in previous reviews. Popup blocking detection (lines 905-908) provides good user feedback.

@github-actions github-actions bot added the changes-requested PR has requested changes from a reviewer label Dec 12, 2025
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (2)
website/views/project.py (2)

2200-2315: Good fixes for year span, TZ-awareness, Sep 30 inclusivity, and template-friendly data shape.

One robustness tweak: aggregating and keying by repo__name can merge distinct repos that share a name (different owners/orgs), skewing totals and “top repos”. Consider keying by a stable unique identifier (e.g., repo__repo_url, or repo_id) while keeping a display label.

Example adjustment:

-        def _repo_key(repo_row):
-            return repo_row.get("repo__name") or "Unknown repo"
+        def _repo_key(repo_row):
+            # Prefer a unique key to avoid conflating repos with the same name across orgs.
+            return repo_row.get("repo__repo_url") or repo_row.get("repo__name") or "Unknown repo"
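The collision the comment warns about is easy to reproduce with hypothetical aggregation rows (the org names below are invented for illustration):

```python
from collections import defaultdict

# Two distinct repos ("org-a/blt" and "org-b/blt") share the name "blt".
rows = [
    {"repo__name": "blt", "repo__repo_url": "https://github.com/org-a/blt", "pr_count": 7},
    {"repo__name": "blt", "repo__repo_url": "https://github.com/org-b/blt", "pr_count": 3},
]

by_name, by_url = defaultdict(int), defaultdict(int)
for row in rows:
    by_name[row["repo__name"]] += row["pr_count"]
    by_url[row["repo__repo_url"]] += row["pr_count"]

assert by_name == {"blt": 10}  # two repos silently merged into one bucket
assert len(by_url) == 2        # kept distinct when keyed by URL
```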

2213-2221: exclude(merged_at__isnull=True) is likely redundant.

merged_at__gte / merged_at__lt already drop NULLs in SQL; you can remove the extra exclude unless you’re guarding against ORM/DB edge behavior.
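The NULL-dropping behavior of range predicates can be confirmed directly in SQL; this sketch uses SQLite, but the semantics (a comparison against NULL is never true) are standard SQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pr (id INTEGER PRIMARY KEY, merged_at TEXT)")
conn.executemany(
    "INSERT INTO pr (merged_at) VALUES (?)",
    [("2024-06-15",), (None,), ("2024-07-01",)],
)

# A NULL merged_at never satisfies the range comparison, so an
# explicit IS NOT NULL / exclude(...) filter is redundant here.
count = conn.execute(
    "SELECT COUNT(*) FROM pr "
    "WHERE merged_at >= '2024-05-01' AND merged_at < '2024-10-01'"
).fetchone()[0]

assert count == 2  # the NULL row is already filtered out
```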

📜 Review details

📥 Commits

Reviewing files that changed from the base of the PR and between 00b4614 and f0f7dba.

📒 Files selected for processing (1)
  • website/views/project.py (3 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
website/views/project.py (1)
website/models.py (3)
  • GitHubIssue (2090-2328)
  • Repo (1904-1981)
  • UserProfile (896-1109)
🔇 Additional comments (2)
website/views/project.py (2)

4-10: Module-level logger setup is fine.

Also applies to: 57-57


2212-2225: Bot filtering looks consistent; keep an eye on false positives from name__icontains="bot".

If you see legitimate contributors excluded (e.g., username contains “bot” but isn’t a bot), consider tightening to iendswith="[bot]" + contributor_type (and/or maintain an allowlist).
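The tightened suffix check suggested there can be sketched with hypothetical helper names; only the "[bot]" convention itself comes from GitHub:

```python
def is_probable_bot(username: str) -> bool:
    # Match GitHub's machine-account convention ("dependabot[bot]")
    # instead of a bare substring check.
    return username.lower().endswith("[bot]")

def naive_contains_bot(username: str) -> bool:
    # Equivalent of name__icontains="bot" in the current filter.
    return "bot" in username.lower()

assert is_probable_bot("dependabot[bot]")
assert not is_probable_bot("abbott")   # legitimate user kept
assert naive_contains_bot("abbott")    # false positive under icontains
```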

@github-actions github-actions bot removed the changes-requested PR has requested changes from a reviewer label Dec 12, 2025
@github-actions

❌ Tests failed

The Django tests found issues that need to be fixed. Please review the test output below and fix the failing tests.

How to run tests locally

# Install dependencies
poetry install --with dev

# Run all tests
poetry run python manage.py test

# Run tests with verbose output
poetry run python manage.py test -v 3

# Run a specific test
poetry run python manage.py test app.tests.TestClass.test_method
Test output (last 100 lines)
DEBUG Remote response: status=200 | data={"value":"http://localhost:43875/report/#"} | headers=HTTPHeaderDict({'Content-Length': '43', 'Content-Type': 'application/json; charset=utf-8', 'cache-control': 'no-cache'})
DEBUG Finished Request
DEBUG GET http://localhost:43983/session/0b16fee6ea54eb1d85d27f504182265b/url {}
DEBUG http://localhost:43983 "GET /session/0b16fee6ea54eb1d85d27f504182265b/url HTTP/1.1" 200 0
DEBUG Remote response: status=200 | data={"value":"http://localhost:43875/report/#"} | headers=HTTPHeaderDict({'Content-Length': '43', 'Content-Type': 'application/json; charset=utf-8', 'cache-control': 'no-cache'})
DEBUG Finished Request
...
ERROR
test_post_bug_domain_url (website.tests.test_main.MySeleniumTests.test_post_bug_domain_url) ... ERROR
DEBUG DELETE http://localhost:43983/session/0b16fee6ea54eb1d85d27f504182265b {}
DEBUG http://localhost:43983 "DELETE /session/0b16fee6ea54eb1d85d27f504182265b HTTP/1.1" 200 0
DEBUG Remote response: status=200 | data={"value":null} | headers=HTTPHeaderDict({'Content-Length': '14', 'Content-Type': 'application/json; charset=utf-8', 'cache-control': 'no-cache'})
DEBUG Finished Request

======================================================================
ERROR: test_post_bug_domain_url (website.tests.test_main.MySeleniumTests.test_post_bug_domain_url)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/test/utils.py", line 456, in inner
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/BLT/BLT/website/tests/test_main.py", line 222, in test_post_bug_domain_url
    WebDriverWait(self.selenium, 30).until(
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/selenium/webdriver/support/wait.py", line 122, in until
    raise TimeoutException(message, screen, stacktrace)
selenium.common.exceptions.TimeoutException: Message: 


======================================================================
ERROR: test_post_bug_domain_url (website.tests.test_main.MySeleniumTests.test_post_bug_domain_url)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/core/management/__init__.py", line 194, in call_command
    return command.execute(*args, **defaults)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/core/management/base.py", line 460, in execute
    output = self.handle(*args, **options)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/core/management/commands/flush.py", line 52, in handle
    sql_list = sql_flush(
               ^^^^^^^^^^
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/core/management/sql.py", line 11, in sql_flush
    tables = connection.introspection.django_table_names(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/db/backends/base/introspection.py", line 110, in django_table_names
    existing_tables = set(self.table_names(include_views=include_views))
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/db/backends/base/introspection.py", line 57, in table_names
    return get_names(cursor)
           ^^^^^^^^^^^^^^^^^
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/db/backends/base/introspection.py", line 51, in get_names
    for ti in self.get_table_list(cursor)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/db/backends/sqlite3/introspection.py", line 80, in get_table_list
    cursor.execute(
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/db/backends/utils.py", line 79, in execute
    return self._execute_with_wrappers(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/db/backends/utils.py", line 92, in _execute_with_wrappers
    return executor(sql, params, many, context)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/db/backends/utils.py", line 99, in _execute
    self.db.validate_no_broken_transaction()
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/db/backends/base/base.py", line 521, in validate_no_broken_transaction
    raise TransactionManagementError(
django.db.transaction.TransactionManagementError: An error occurred in the current transaction. You can't execute queries until the end of the 'atomic' block.

----------------------------------------------------------------------
Ran 301 tests in 199.273s

FAILED (errors=2)
Destroying test database for alias 'default' ('file:memorydb_default?mode=memory&cache=shared')...

For more information, see the Django testing documentation.

@github-actions github-actions bot added tests: failed Django tests failed tests: passed Django tests passed and removed tests: passed Django tests passed tests: failed Django tests failed labels Dec 12, 2025
@coderabbitai

coderabbitai bot commented Dec 12, 2025

Tip

For best results, initiate chat on the files or code changes.

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check for test files related to GSOC PR functionality
echo "=== Searching for test files related to GSOC and GitHub commands ==="
fd -e py test -x basename | sort | uniq

echo -e "\n=== Looking for test files that test fetch_gsoc_prs ==="
rg -n "fetch_gsoc_prs|gsoc_pr_report" --type py -g "*test*.py" -C 3

echo -e "\n=== Checking CI/CD workflow files for test configuration ==="
fd -e yml -e yaml . .github/workflows/ -x cat

echo -e "\n=== Searching for 'out of scope' or 'run-test' mentions ==="
rg -n "out of scope|run.test|run-test" --type py --type yaml -C 2

echo -e "\n=== Check for pytest configuration ==="
cat pytest.ini 2>/dev/null || cat setup.cfg 2>/dev/null | grep -A 10 "\[tool:pytest\]" || echo "No pytest config found"

echo -e "\n=== Check for any skipped tests or markers ==="
rg -n "@pytest.mark.skip|@unittest.skip|skipIf|skipUnless" --type py -g "*test*.py" -C 2

Length of output: 221846


@Pritz395 Great question! After investigating the test suite and codebase, here's what I found:

✅ Tests Are Passing Because the Refactor Was Completed

The fetch_gsoc_prs refactor that was discussed in this PR's comments appears to have already been completed in a separate PR (exactly as recommended). Here's the evidence:

🔍 Test File Analysis

Looking at website/tests/test_github_commands.py, the tests are already updated to work with the REST API implementation:

Test methods now include:

  • test_fetch_gsoc_prs_rest_api_format (line 86)
  • test_fetch_gsoc_prs_with_since_date (line 138)
  • test_fetch_gsoc_prs_default_six_months (line 182)
  • test_fetch_gsoc_prs_filters_bots (line 213)
  • test_fetch_gsoc_prs_rate_limit_check (line 297)
  • And many more...

Key indicators the refactor is complete:

# Tests mock the REST API endpoint (not Search API)
@patch("website.management.commands.fetch_gsoc_prs.requests.get")
def test_fetch_gsoc_prs_rest_api_format(self, mock_get, mock_sleep):
    # Returns LIST of PRs (REST API format), not {"items": [...]} (Search API)
    mock_response.json.return_value = [
        {
            "number": 123,
            "title": "Test PR",
            "merged_at": "2024-01-01T00:00:00Z",
            # REST API fields...
        }
    ]

The tests expect REST API response format (array of PRs), not Search API format ({"items": [...]}), which confirms the refactor happened.
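The format difference the tests key on can be captured in a small normalizer; this helper is hypothetical (not part of the PR), written only to show the two payload shapes side by side:

```python
def normalize_pr_payload(payload):
    """Accept either a REST list response or a Search API envelope."""
    if isinstance(payload, dict) and "items" in payload:
        return payload["items"]   # Search API: {"total_count": N, "items": [...]}
    if isinstance(payload, list):
        return payload            # REST API: plain array of PR objects
    raise TypeError("unexpected payload shape")

rest = [{"number": 123, "title": "Test PR", "merged_at": "2024-01-01T00:00:00Z"}]
search = {"total_count": 1, "items": rest}

assert normalize_pr_payload(rest) == rest
assert normalize_pr_payload(search) == rest
```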


🔍 About the "Out of Scope" Reference

I searched the entire codebase for:

  • "out of scope"
  • "run-test" / "run.test"
  • Test skip markers (@pytest.mark.skip, @unittest.skip, etc.)

Result: No matches found in the current codebase or this PR's comment chain.

Possible Explanations:

  1. Different PR/Discussion: The "run-test issue being out of scope" may have been mentioned in:

    • The separate refactor PR (where fetch_gsoc_prs was updated to REST API)
    • A different issue discussion
    • An earlier iteration of this PR that was rebased/squashed
  2. Resolved Organically: The issue may have been resolved when the refactor PR was merged, making it no longer relevant

  3. Memory from Related Work: You may be remembering discussions about other scope decisions, such as:

    • Performance optimizations being deferred (mentioned in early comments)
    • Chart enhancements saved for follow-up PRs
    • Data fetch improvements split into separate work

📋 Summary

| Question | Answer |
| --- | --- |
| Why are tests passing? | The fetch_gsoc_prs REST API refactor was completed in a separate PR, and tests were updated accordingly |
| Was there a "run-test" issue? | No references found in current codebase or this PR's discussions |
| Out of scope items? | Likely refers to a different discussion or PR. Current PR (#5230) focuses on the reporting UI only |

🎯 Current Status

This PR (#5230):

  • ✅ Adds GSOC PR Report view
  • ✅ All tests passing
  • ✅ Pre-commit checks passing
  • ✅ Database indexes added (performance optimization)
  • ✅ Ready to merge

The refactor work (completed separately):

  • fetch_gsoc_prs now uses REST API
  • ✅ Tests updated to mock REST API
  • ✅ Rate limiting implemented
  • --since-date argument added

The strategy of splitting the refactor into a separate PR (as recommended) worked perfectly! Both pieces are now complete and tests are green. 🎉

Does this clear up the confusion, or were you thinking of a specific test/issue that you'd like me to investigate further?

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (1)
website/views/project.py (1)

2212-2243: Consider excluding PRs where repo is NULL.

The query aggregates by repo__name and repo__repo_url, but if the repo foreign key is NULL, both fields will be NULL and all such PRs will later be grouped under "Unknown repo" (line 2252). This could inflate counts or create misleading entries in the report.

Add a filter to exclude PRs without a repo:

         repos_qs = (
             GitHubIssue.objects.filter(
                 type="pull_request",
                 is_merged=True,
                 merged_at__gte=start_date,
                 merged_at__lt=end_date,
             )
+            .exclude(repo__isnull=True)
             .exclude(merged_at__isnull=True)

Alternatively, if PRs without repos should be included, consider adding a separate count or handling them explicitly in the UI.
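The inflation effect of NULL repos is visible with a few hypothetical aggregation rows (the counts below are invented):

```python
from collections import defaultdict

rows = [
    {"repo__name": "BLT", "pr_count": 4},
    {"repo__name": None, "pr_count": 2},  # PR whose repo FK is NULL
    {"repo__name": None, "pr_count": 1},
]

totals = defaultdict(int)
for row in rows:
    # Mirrors the fallback keying in the view: NULLs collapse together.
    totals[row["repo__name"] or "Unknown repo"] += row["pr_count"]

# All repo-less PRs land in one misleading "Unknown repo" bucket:
assert dict(totals) == {"BLT": 4, "Unknown repo": 3}
```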


Note: The timezone-aware datetime handling, bot filtering, and year-range logic all look correct and address past review comments. ✓

📜 Review details

📥 Commits

Reviewing files that changed from the base of the PR and between f0f7dba and 93ce158.

📒 Files selected for processing (1)
  • website/views/project.py (3 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
website/views/project.py (1)
website/models.py (2)
  • GitHubIssue (2090-2328)
  • Repo (1904-1981)
🔇 Additional comments (3)
website/views/project.py (3)

4-4: LGTM – Imports and logger setup.

The new imports (logging, defaultdict, GitHubIssue) and logger instance are appropriate for the GSOC PR reporting feature.

Also applies to: 9-9, 49-49, 57-57


2287-2312: Data structure and context look good.

The code correctly addresses past review comments:

  • ✓ Provides gsoc_data as a dict keyed by year (line 2288-2297) for template's .items usage
  • ✓ Includes all required fields in summary_data (lines 2278-2285): start_year, end_year, avg_prs_per_year
  • ✓ Provides both list and dict formats to satisfy different consumers

The context is comprehensive and includes both JSON-serialized data for JavaScript and structured data for the template.


2314-2319: Exception handling is appropriate.

The error handling provides proper logging, user-friendly feedback, and graceful degradation by redirecting to the project list. This prevents users from seeing raw error pages while ensuring exceptions are captured for debugging.

Comment on lines +2251 to +2273
        def _repo_key(repo_row):
            return repo_row.get("repo__repo_url") or repo_row.get("repo__name") or "Unknown repo"

        def _repo_url(repo_row):
            return repo_row.get("repo__repo_url") or ""

        for year_block in report_data:
            total_prs += year_block["total_prs"]
            yearly_chart_data.append({"year": year_block["year"], "prs": year_block["total_prs"]})

            for repo in year_block["repos"]:
                all_repos.add(_repo_key(repo))

        # Top repos across all years
        repo_totals = defaultdict(int)
        for year_block in report_data:
            for repo in year_block["repos"]:
                repo_totals[_repo_key(repo)] += repo["pr_count"]

        top_repos_chart_data = [
            {"repo": repo, "prs": count}
            for repo, count in sorted(repo_totals.items(), key=lambda x: x[1], reverse=True)[:5]
        ]

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

fd -type f -name "*gsoc_pr_report*" | head -20

Repository: OWASP-BLT/BLT

Length of output: 227


🏁 Script executed:

find . -name "*gsoc_pr_report*" -type f

Repository: OWASP-BLT/BLT

Length of output: 143


🏁 Script executed:

cat -n ./website/templates/projects/gsoc_pr_report.html

Repository: OWASP-BLT/BLT

Length of output: 24132


🏁 Script executed:

cat -n ./website/static/js/gsoc_pr_report.js

Repository: OWASP-BLT/BLT

Length of output: 42898


🏁 Script executed:

sed -n '2270,2275p' ./website/views/project.py

Repository: OWASP-BLT/BLT

Length of output: 275


Swap _repo_key priority to display user-friendly repository names in charts.

The _repo_key helper (line 2252) currently prioritizes repo__repo_url over repo__name, causing the Top Repositories chart to display URLs instead of friendly repo names. The table view shows repo__name, creating inconsistency. Change the priority order:

def _repo_key(repo_row):
    return repo_row.get("repo__name") or repo_row.get("repo__repo_url") or "Unknown repo"

This ensures repos are keyed by name in aggregations and charts, matching the table display and improving UX. The URL is still available for linking via _repo_url().

🤖 Prompt for AI Agents
In website/views/project.py around lines 2251 to 2273, _repo_key currently
prefers repo__repo_url over repo__name causing charts and aggregations to use
URLs instead of user-friendly names; swap the priority so _repo_key returns
repo__name first, then repo__repo_url, and fall back to "Unknown repo" so
top-repos aggregations and charts display friendly repository names while
_repo_url remains for links.

@github-actions github-actions bot added the changes-requested PR has requested changes from a reviewer label Dec 14, 2025
@DonnieBLT DonnieBLT left a comment

Can we filter by year? Also, where is the data coming from? Are there any scripts to load it?

@DonnieBLT DonnieBLT added the d13 label Dec 14, 2025

Labels

  • changes-requested (PR has requested changes from a reviewer)
  • d13
  • files-changed: 8 (PR changes 8 files)
  • last-active: 0d (PR last updated 0 days ago)
  • migrations (PR contains database migration files)
  • needs-peer-review (PR needs peer review)
  • pre-commit: passed (Pre-commit checks passed)
  • tests: passed (Django tests passed)

Projects

Status: Ready

2 participants