Conversation

@Preetam-77 Preetam-77 commented Dec 13, 2025

Summary

This PR introduces a Metadata Quality Checker Tool under the Tools/metadata_quality_checker/ folder.
The tool is designed to analyze and validate OWASP project metadata for completeness, consistency, and quality.

Features

  • Checks for required fields in metadata JSON files.
  • Validates data types and expected formats.
  • Scores metadata quality for each project.
  • Supports CLI arguments for specifying input files.
  • Provides sample metadata for testing.

Motivation

High-quality metadata is crucial for the OWASP Metadata Aggregation Project to provide accurate recommendations and analytics.
This tool helps contributors and maintainers ensure metadata is complete and standardized.

Files Added

  • Tools/metadata_quality_checker/checker.py
  • Tools/metadata_quality_checker/rules.py
  • Tools/metadata_quality_checker/score.py
  • Tools/metadata_quality_checker/sample_metadata.json
  • Tools/metadata_quality_checker/README.md

How to Test

  1. Navigate to Tools/metadata_quality_checker/
  2. Run the checker:

```bash
python checker.py --file sample_metadata.json
```


<!-- This is an auto-generated comment: release notes by coderabbit.ai -->

## Summary by CodeRabbit

## New Features
* Added a Metadata Quality Checker CLI tool for validating project metadata. The tool checks required fields (tags, project type, difficulty level, pitch quality, repository URL, and recent activity) and generates quality scores with status indicators.

<sub>✏️ Tip: You can customize this high-level summary in your review settings.</sub>

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

@github-actions
Contributor

👋 Hi @Preetam-77!

This pull request needs a peer review before it can be merged. Please request a review from a team member who is not:

  • The PR author
  • DonnieBLT
  • coderabbitai
  • copilot

Once a valid peer review is submitted, this check will pass automatically. Thank you!

@github-actions github-actions bot added the needs-peer-review PR needs peer review label Dec 13, 2025

coderabbitai bot commented Dec 13, 2025

Walkthrough

Introduces a new Metadata Quality Checker Tool for OWASP projects. A Python CLI system analyzes project metadata files, validates them against defined rules, computes numeric quality scores, and reports findings. The tool comprises a main orchestrator, validation rules engine, scoring logic, sample data, and documentation.

Changes

  • Documentation (metadata_quality_checker/README.md): Adds a README describing the Metadata Quality Checker Tool, its validation checks, and usage instructions.
  • CLI & Orchestration (metadata_quality_checker/checker.py): Introduces the main entry point with load_metadata() to resolve and parse metadata JSON files (falling back to sample_metadata.json), and a main() orchestrator that iterates over projects, evaluates rules and scores, and outputs formatted reports.
  • Validation & Scoring (metadata_quality_checker/rules.py, metadata_quality_checker/score.py): Adds a check_rules() function that validates project fields (name, tags, type, level, pitch, repository URL, activity) and returns a list of issues; adds calculate_score() computing weighted numeric scores (0–100) and get_status() mapping scores to status labels ("good", "needs improvement", "poor").
  • Sample Data (metadata_quality_checker/sample_metadata.json): Provides a JSON array with two sample OWASP projects demonstrating valid and invalid metadata.
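For reference, the sample data described above might look roughly like the sketch below. The field names (tags, type, level, pitch, repo_url, last_commit) are inferred from the checks this walkthrough lists; the actual schema in sample_metadata.json may differ.

```python
import json

# Hypothetical shape of sample_metadata.json, inferred from the fields the
# walkthrough says check_rules() validates. Field names such as "repo_url"
# are guesses, not confirmed by the PR.
sample_projects = [
    {
        # Complete entry: should pass most checks
        "name": "OWASP Example Project",
        "tags": ["security", "python"],
        "type": "tool",
        "level": "flagship",
        "pitch": "A sufficiently descriptive pitch explaining what the project does.",
        "repo_url": "https://github.com/OWASP/example",
        "last_commit": "2025-11-20T10:00:00",
    },
    {
        # Deliberately incomplete entry: empty tags, short pitch, stale commit
        "name": "OWASP Stale Project",
        "tags": [],
        "pitch": "Too short",
        "last_commit": "2023-01-01T00:00:00",
    },
]

print(json.dumps(sample_projects, indent=2))
```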

Sequence Diagram

```mermaid
sequenceDiagram
    participant User as User/CLI
    participant Main as checker.main()
    participant Loader as load_metadata()
    participant Rules as check_rules()
    participant Score as calculate_score()<br/>& get_status()
    participant Output as Report Output

    User->>Main: Execute script
    Main->>Loader: Load metadata file
    Loader-->>Main: Parsed projects[]

    loop For each project
        Main->>Rules: Validate project metadata
        Rules-->>Main: Issues list

        Main->>Score: Calculate score from project
        Score-->>Main: Numeric score (0-100)

        Main->>Score: Get status for score
        Score-->>Main: Status (good/needs improvement/poor)

        Main->>Output: Print project report<br/>(name, score + status, issues)
    end

    Output-->>User: Complete quality report
```

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

  • rules.py: Date parsing logic for ISO-format last_commit field and 12-month inactivity detection; edge cases in field validation (empty strings, list length checks)
  • score.py: Verify weighting scheme (+10, +25, +15, +20, +15) sums correctly and threshold boundaries (80, 50) are appropriate
  • checker.py: File resolution logic, error handling for missing/malformed metadata, and integration between modules
  • sample_metadata.json: Ensure sample projects adequately demonstrate valid and invalid metadata patterns
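Taken together, the weights and thresholds called out above (+10, +25, +15, +20, +15; status boundaries at 80 and 50) suggest a scoring shape like this sketch. Which check carries which weight, and the exact check conditions, are assumptions for illustration rather than the code under review:

```python
def calculate_score(project: dict) -> int:
    """Weighted-score sketch using the weights the review names
    (+10, +25, +15, +20, +15). The mapping of weight to check is a guess."""
    score = 0
    if project.get("name"):
        score += 10
    if isinstance(project.get("tags"), list) and len(project["tags"]) >= 2:
        score += 25
    if project.get("type"):
        score += 15
    if len(project.get("pitch", "")) >= 30:
        score += 20
    if project.get("repo_url"):
        score += 15
    return min(score, 100)  # clamp to the documented 0-100 range


def get_status(score: int) -> str:
    # Threshold boundaries 80 and 50 come from the review comment above.
    if score >= 80:
        return "good"
    if score >= 50:
        return "needs improvement"
    return "poor"
```

Note that these five weights sum to 85, not 100, which is presumably why the reviewer asks whether the weighting scheme "sums correctly".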

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 0.00%, below the required threshold of 80.00%. Run @coderabbitai generate docstrings to improve docstring coverage.

✅ Passed checks (2 passed)
  • Description Check ✅: Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check ✅: The title accurately reflects the main changeset, which introduces a new metadata quality checker tool with multiple supporting modules and sample data.
✨ Finishing touches
  • 📝 Generate docstrings

🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

❤️ Share

Comment @coderabbitai help to get the list of available commands and usage tips.

@github-actions
Contributor

❌ Pre-commit checks failed

The pre-commit hooks found issues that need to be fixed. Please run the following commands locally to fix them:

```bash
# Install pre-commit if you haven't already
pip install pre-commit

# Run pre-commit on all files
pre-commit run --all-files

# Or run pre-commit on staged files only
pre-commit run
```

After running these commands, the pre-commit hooks will automatically fix most issues.
Please review the changes, commit them, and push to your branch.

💡 Tip: You can set up pre-commit to run automatically on every commit by running:

```bash
pre-commit install
```
Pre-commit output
[INFO] Initializing environment for https://github.com/pre-commit/pre-commit-hooks.
[WARNING] repo `https://github.com/pre-commit/pre-commit-hooks` uses deprecated stage names (commit, push) which will be removed in a future version.  Hint: often `pre-commit autoupdate --repo https://github.com/pre-commit/pre-commit-hooks` will fix this.  if it does not -- consider reporting an issue to that repo.
[INFO] Initializing environment for https://github.com/pycqa/isort.
[WARNING] repo `https://github.com/pycqa/isort` uses deprecated stage names (commit, merge-commit, push) which will be removed in a future version.  Hint: often `pre-commit autoupdate --repo https://github.com/pycqa/isort` will fix this.  if it does not -- consider reporting an issue to that repo.
[INFO] Initializing environment for https://github.com/astral-sh/ruff-pre-commit.
[INFO] Initializing environment for https://github.com/djlint/djLint.
[INFO] Initializing environment for local.
[INFO] Installing environment for https://github.com/pre-commit/pre-commit-hooks.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
[INFO] Installing environment for https://github.com/pycqa/isort.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
[INFO] Installing environment for https://github.com/astral-sh/ruff-pre-commit.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
[INFO] Installing environment for https://github.com/djlint/djLint.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
[INFO] Installing environment for local.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
check python ast.........................................................Passed
check builtin type constructor use.......................................Passed
check yaml...............................................................Passed
fix python encoding pragma...............................................Passed
mixed line ending........................................................Passed
isort....................................................................Failed
- hook id: isort
- files were modified by this hook

Fixing /home/runner/work/BLT/BLT/metadata_quality_checker/checker.py


For more information, see the pre-commit documentation.

@github-actions github-actions bot added the pre-commit: failed Pre-commit checks failed label Dec 13, 2025
@github-actions
Contributor

❌ Tests failed

The Django tests found issues that need to be fixed. Please review the test output below and fix the failing tests.

How to run tests locally

```bash
# Install dependencies
poetry install --with dev

# Run all tests
poetry run python manage.py test

# Run tests with verbose output
poetry run python manage.py test -v 3

# Run a specific test
poetry run python manage.py test app.tests.TestClass.test_method
```
Test output (last 100 lines)
    return self.generic(
           ^^^^^^^^^^^^^
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/test/client.py", line 676, in generic
    return self.request(**r)
           ^^^^^^^^^^^^^^^^^
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/test/client.py", line 1092, in request
    self.check_exception(response)
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/test/client.py", line 805, in check_exception
    raise exc_value
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/core/handlers/exception.py", line 55, in inner
    response = get_response(request)
               ^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/core/handlers/base.py", line 220, in _get_response
    response = response.render()
               ^^^^^^^^^^^^^^^^^
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/template/response.py", line 114, in render
    self.content = self.rendered_content
                   ^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/template/response.py", line 92, in rendered_content
    return template.render(context, self._request)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/template/backends/django.py", line 107, in render
    return self.template.render(context)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/template/base.py", line 171, in render
    return self._render(context)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/test/utils.py", line 114, in instrumented_test_render
    return self.nodelist.render(context)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/template/base.py", line 1008, in render
    return SafeString("".join([node.render_annotated(context) for node in self]))
                              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/template/base.py", line 1008, in <listcomp>
    return SafeString("".join([node.render_annotated(context) for node in self]))
                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/template/base.py", line 969, in render_annotated
    return self.render(context)
           ^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/template/loader_tags.py", line 159, in render
    return compiled_parent._render(context)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/test/utils.py", line 114, in instrumented_test_render
    return self.nodelist.render(context)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/template/base.py", line 1008, in render
    return SafeString("".join([node.render_annotated(context) for node in self]))
                              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/template/base.py", line 1008, in <listcomp>
    return SafeString("".join([node.render_annotated(context) for node in self]))
                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/template/base.py", line 969, in render_annotated
    return self.render(context)
           ^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/template/loader_tags.py", line 65, in render
    result = block.nodelist.render(context)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/template/base.py", line 1008, in render
    return SafeString("".join([node.render_annotated(context) for node in self]))
                              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/template/base.py", line 1008, in <listcomp>
    return SafeString("".join([node.render_annotated(context) for node in self]))
                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/template/base.py", line 969, in render_annotated
    return self.render(context)
           ^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/template/defaulttags.py", line 327, in render
    return nodelist.render(context)
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/template/base.py", line 1008, in render
    return SafeString("".join([node.render_annotated(context) for node in self]))
                              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/template/base.py", line 1008, in <listcomp>
    return SafeString("".join([node.render_annotated(context) for node in self]))
                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/template/base.py", line 969, in render_annotated
    return self.render(context)
           ^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/template/defaulttags.py", line 243, in render
    nodelist.append(node.render_annotated(context))
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/template/base.py", line 969, in render_annotated
    return self.render(context)
           ^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/template/defaulttags.py", line 480, in render
    url = reverse(view_name, args=args, kwargs=kwargs, current_app=current_app)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/urls/base.py", line 88, in reverse
    return resolver._reverse_with_prefix(view, prefix, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/.cache/pypoetry/virtualenvs/blt-yuw0N2NF-py3.11/lib/python3.11/site-packages/django/urls/resolvers.py", line 831, in _reverse_with_prefix
    raise NoReverseMatch(msg)
django.urls.exceptions.NoReverseMatch: Reverse for 'profile' with keyword arguments '{'slug': ''}' not found. 1 pattern(s) tried: ['profile/(?P<slug>[^/]+)/$']

----------------------------------------------------------------------
Ran 40 tests in 26.787s

FAILED (errors=1)
Destroying test database for alias 'default' ('file:memorydb_default?mode=memory&cache=shared')...

For more information, see the Django testing documentation.

@github-actions github-actions bot added the tests: failed Django tests failed label Dec 13, 2025

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (1)
metadata_quality_checker/score.py (1)

7-8: Inconsistent dictionary access style.

Line 7 uses project.get("tags") for the type check but then switches to project["tags"] for length. While functionally safe due to short-circuit evaluation, consider using consistent .get() access for readability:

-    if isinstance(project.get("tags"), list) and len(project["tags"]) >= 2:
+    if isinstance(project.get("tags"), list) and len(project.get("tags", [])) >= 2:
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting

📥 Commits

Reviewing files that changed from the base of the PR and between c2565fb and b4d0804.

📒 Files selected for processing (5)
  • metadata_quality_checker/README.md (1 hunks)
  • metadata_quality_checker/checker.py (1 hunks)
  • metadata_quality_checker/rules.py (1 hunks)
  • metadata_quality_checker/sample_metadata.json (1 hunks)
  • metadata_quality_checker/score.py (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
metadata_quality_checker/checker.py (2)
metadata_quality_checker/rules.py (1)
  • check_rules (4-38)
metadata_quality_checker/score.py (2)
  • calculate_score (1-23)
  • get_status (26-31)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: Run Tests
  • GitHub Check: docker-test
🔇 Additional comments (4)
metadata_quality_checker/sample_metadata.json (1)

1-20: Sample data effectively demonstrates validation.

The two projects provide good contrast: one with multiple quality issues (empty tags, short pitch, inactive) and one with complete metadata. This effectively showcases the tool's validation capabilities.

metadata_quality_checker/score.py (1)

26-31: Status mapping logic is clear and correct.

The thresholds appropriately categorize metadata quality into three tiers.

metadata_quality_checker/checker.py (2)

24-43: Main orchestration logic is clear and correct.

The function properly coordinates the validation, scoring, and reporting workflow. The output formatting with emojis makes the results easy to read.


46-47: Correct entry point implementation.

Comment on lines +8 to +21
```python
def load_metadata():
    # If user passes a file path: python checker.py file.json
    if len(sys.argv) > 1:
        metadata_path = Path(sys.argv[1])
    else:
        # Default to sample file
        metadata_path = Path(__file__).parent / "sample_metadata.json"

    if not metadata_path.exists():
        print(f"❌ Metadata file not found: {metadata_path}")
        sys.exit(1)

    with open(metadata_path, "r", encoding="utf-8") as f:
        return json.load(f)
```

⚠️ Potential issue | 🟠 Major

Add JSON error handling and clarify CLI interface.

Issues identified:

  1. Discrepancy with PR objectives: The PR description states "Run: python checker.py --file sample_metadata.json" suggesting a --file flag, but the implementation uses a positional argument. Consider adding proper argument parsing with argparse:
```python
import argparse

def load_metadata():
    parser = argparse.ArgumentParser(description="OWASP Metadata Quality Checker")
    parser.add_argument("--file", type=str, help="Path to metadata JSON file")
    args = parser.parse_args()

    if args.file:
        metadata_path = Path(args.file)
    else:
        metadata_path = Path(__file__).parent / "sample_metadata.json"
    # ... rest of logic
```
  2. Missing JSON error handling: Malformed JSON will crash with an unhandled json.JSONDecodeError:

```diff
     with open(metadata_path, "r", encoding="utf-8") as f:
-        return json.load(f)
+        try:
+            data = json.load(f)
+            if not isinstance(data, list):
+                print("❌ Metadata file must contain a JSON array")
+                sys.exit(1)
+            return data
+        except json.JSONDecodeError as e:
+            print(f"❌ Invalid JSON in metadata file: {e}")
+            sys.exit(1)
```
🤖 Prompt for AI Agents
In metadata_quality_checker/checker.py around lines 8 to 21, replace the
positional-arg logic with argparse to accept a --file flag (use args.file if
provided, otherwise default to sample_metadata.json), keep the existence check,
and add JSON error handling: when opening/reading the file, catch
json.JSONDecodeError (and optionally OSError) to print a clear error message
including the file path and the JSON error, then sys.exit(1); ensure behavior
and exit codes remain consistent when the file is missing or invalid.
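Combining the two suggestions above, a self-contained load_metadata() might look like the following sketch. This is illustrative only, not the implementation under review; it folds the proposed --file flag and the JSON error handling into one function:

```python
import argparse
import json
import sys
from pathlib import Path


def load_metadata() -> list:
    """Sketch combining both review suggestions: a --file flag via argparse
    plus defensive JSON parsing. Not the actual code in this PR."""
    parser = argparse.ArgumentParser(description="OWASP Metadata Quality Checker")
    parser.add_argument("--file", type=str, help="Path to metadata JSON file")
    args = parser.parse_args()

    if args.file:
        metadata_path = Path(args.file)
    else:
        metadata_path = Path(__file__).parent / "sample_metadata.json"

    if not metadata_path.exists():
        print(f"❌ Metadata file not found: {metadata_path}")
        sys.exit(1)

    try:
        with open(metadata_path, "r", encoding="utf-8") as f:
            data = json.load(f)
    except json.JSONDecodeError as e:
        print(f"❌ Invalid JSON in metadata file: {e}")
        sys.exit(1)

    if not isinstance(data, list):
        print("❌ Metadata file must contain a JSON array")
        sys.exit(1)
    return data
```

With this shape, a missing file, malformed JSON, and a non-array top level all exit with code 1 and a distinct message, matching the behavior the review asks for.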

Comment on lines +14 to +17
## Usage

```bash
python checker.py
```

🛠️ Refactor suggestion | 🟠 Major

Usage documentation is incomplete.

The usage section doesn't document the CLI argument for specifying a custom metadata file. Based on the implementation in checker.py (line 10-11), update the documentation to show:

## Usage

```bash
# Use the default sample file
python checker.py

# Analyze a custom metadata file
python checker.py path/to/metadata.json
```
Note: The PR objectives mention a `--file` flag, but the implementation uses a positional argument. Consider whether the implementation should be updated to match the documented behavior.

<details>
<summary>🤖 Prompt for AI Agents</summary>

In metadata_quality_checker/README.md around lines 14 to 17, the Usage section
omits the CLI positional argument for specifying a custom metadata file and
mismatches the PR note about a --file flag; update the README to show both
examples: running with the default sample (python checker.py) and running with a
custom metadata file path (python checker.py path/to/metadata.json), and add a
short note that the script currently accepts a positional file argument (not a
--file flag) so maintainers can decide whether to change the implementation to
accept a --file/--path option instead of a positional argument.


</details>


<!-- This is an auto-generated comment by CodeRabbit -->

Comment on lines +27 to +36
```python
    last_commit = project.get("last_commit")
    if last_commit:
        try:
            commit_date = datetime.fromisoformat(last_commit)
            if commit_date < datetime.now() - timedelta(days=365):
                issues.append("Project inactive (no commits in last 12 months)")
        except ValueError:
            issues.append("Invalid last_commit date format")
    else:
        issues.append("Missing activity data")
```

⚠️ Potential issue | 🟠 Major

Timezone-aware datetime comparison may cause TypeError.

Line 31 compares commit_date with datetime.now(), which is timezone-naive. If last_commit contains timezone information (e.g., "2025-11-20T10:00:00+00:00"), datetime.fromisoformat() returns a timezone-aware datetime, causing a TypeError when compared with the naive datetime.now().

Apply this diff to handle both timezone-aware and naive datetimes:

```diff
         try:
             commit_date = datetime.fromisoformat(last_commit)
+            # Make comparison timezone-aware if needed
+            now = datetime.now(commit_date.tzinfo) if commit_date.tzinfo else datetime.now()
-            if commit_date < datetime.now() - timedelta(days=365):
+            if commit_date < now - timedelta(days=365):
                 issues.append("Project inactive (no commits in last 12 months)")
```

Alternatively, standardize on UTC:

```diff
+from datetime import datetime, timedelta, timezone
 ...
         try:
             commit_date = datetime.fromisoformat(last_commit)
+            # Convert to UTC for comparison
+            if commit_date.tzinfo is None:
+                commit_date = commit_date.replace(tzinfo=timezone.utc)
-            if commit_date < datetime.now() - timedelta(days=365):
+            if commit_date < datetime.now(timezone.utc) - timedelta(days=365):
                 issues.append("Project inactive (no commits in last 12 months)")
```
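For clarity, the UTC-standardizing variant can be pulled out into a standalone helper; check_activity() below is a hypothetical function name introduced for illustration, not part of the PR:

```python
from datetime import datetime, timedelta, timezone


def check_activity(last_commit: str) -> list:
    """Sketch of the inactivity check with the reviewer's UTC fix applied:
    naive timestamps are assumed to be UTC, so aware and naive inputs
    can both be compared without raising TypeError."""
    if not last_commit:
        return ["Missing activity data"]
    try:
        commit_date = datetime.fromisoformat(last_commit)
    except ValueError:
        return ["Invalid last_commit date format"]
    if commit_date.tzinfo is None:
        commit_date = commit_date.replace(tzinfo=timezone.utc)
    issues = []
    if commit_date < datetime.now(timezone.utc) - timedelta(days=365):
        issues.append("Project inactive (no commits in last 12 months)")
    return issues
```

Extracting the check this way also makes the timezone edge cases directly unit-testable, which the docstring-coverage and test warnings elsewhere in this PR suggest would be welcome.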

@github-project-automation github-project-automation bot moved this from Backlog to Ready in 📌 OWASP BLT Project Board Dec 13, 2025
@github-actions github-actions bot added the last-active: 0d PR last updated 0 days ago label Dec 14, 2025
@DonnieBLT
Collaborator

please move this to the OWASP-metadata project


Labels

  • files-changed: 5 (PR changes 5 files)
  • last-active: 0d (PR last updated 0 days ago)
  • needs-peer-review (PR needs peer review)
  • pre-commit: failed (Pre-commit checks failed)
  • tests: failed (Django tests failed)

Projects

Status: Ready

Development

Successfully merging this pull request may close these issues.

2 participants