
Conversation


@anuragthippani1 anuragthippani1 commented Dec 11, 2025

Adds resilient OpenAI integration (key check, timeout, fenced-JSON cleanup, field validation) for generate_github_issue.
Shows an error banner on the AI prompt page and preserves form inputs when AI fails.
Adds unit tests for success, invalid JSON, and missing API key cases.
Requires OPENAI_API_KEY to be set to a non-placeholder value.
Tests: poetry run python manage.py test website.tests.test_generate_github_issue (pass).
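
As context for the "key check, timeout" items above, here is a minimal sketch of what such a guard can look like; the helper name, the placeholder comparison, and the 30-second timeout are illustrative assumptions, not this PR's exact code:

import os

from openai import OpenAI


def build_openai_client():
    # Hypothetical helper: reject a missing key or an obvious placeholder before any network call.
    api_key = os.getenv("OPENAI_API_KEY")
    if not api_key or api_key == "your-openai-api-key":
        return None, {"error": "OpenAI API key is not configured"}
    # A client-level timeout keeps the view from hanging on a slow API call.
    return OpenAI(api_key=api_key, timeout=30), None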

Summary by CodeRabbit

  • Bug Fixes

    • Improved error display with clear, user-friendly messages shown in the form interface
    • Enhanced API response parsing to handle various response formats reliably
    • Added validation for proper API configuration before issue generation
  • New Features

    • Form fields now automatically pre-populate with previously entered description and repository URL
  • Tests

    • Added unit tests covering issue generation success and error scenarios

✏️ Tip: You can customize this high-level summary in your review settings.

@github-actions
Contributor

👋 Hi @anuragthippani1!

This pull request needs a peer review before it can be merged. Please request a review from a team member who is not:

  • The PR author
  • DonnieBLT
  • coderabbitai
  • copilot

Once a valid peer review is submitted, this check will pass automatically. Thank you!

@github-actions github-actions bot added the needs-peer-review (PR needs peer review) label Dec 11, 2025
@coderabbitai
Contributor

coderabbitai bot commented Dec 11, 2025

Walkthrough

Three files are modified to improve error handling in GitHub issue generation: the template is updated to display errors and pre-fill form fields, the view functions are enhanced with API key validation and robust JSON response parsing, and unit tests are added to cover the error scenarios.

Changes

  • Template error handling (website/templates/github_issue_prompt.html): Added conditional error alert rendering; pre-filled the description textarea and the repository_url input field with template variables.
  • View layer error handling and validation (website/views/issue.py): Modified get_github_issue to render the prompt template with error context on failure (HTTP 500) instead of returning a JSON response; enhanced generate_github_issue with OpenAI API key validation, client timeout configuration, and robust JSON parsing that unwraps fenced code blocks (see the sketch after this list).
  • Test coverage (website/tests/test_generate_github_issue.py): Added unit tests for generate_github_issue covering successful OpenAI response parsing, invalid JSON handling, and missing API key scenarios; the tests mock OpenAI interactions and verify error signaling.
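
For the view-layer change, a minimal sketch of rendering the prompt template with error context while preserving the user's input; the template name matches the file listed above, but the context keys and the generate_github_issue call signature are assumptions:

from django.shortcuts import render


def get_github_issue(request):
    description = request.POST.get("description", "")
    repository_url = request.POST.get("repository_url", "")
    result = generate_github_issue(description)  # helper from this PR; call signature assumed
    if "error" in result:
        # Re-render the prompt page with the error message and the submitted values preserved.
        context = {
            "error": result["error"],
            "description": description,
            "repository_url": repository_url,
        }
        return render(request, "github_issue_prompt.html", context, status=500)
    # ... success path continues as before ...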

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

  • API key validation logic: the new conditional checks and validation rules in generate_github_issue require verification of intent and coverage.
  • JSON parsing robustness: the added code-block unwrapping logic needs review to ensure correctness with the various wrapped formats.
  • Error flow changes: the error handling path moved from a JSON response to template rendering and requires tracing through the error scenarios.
  • Test comprehensiveness: verify that the test cases adequately cover the new validation and parsing scenarios.

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 0.00%, which is below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (2 passed)
  • Description Check (✅ Passed): Check skipped because CodeRabbit's high-level summary is enabled.
  • Title check (✅ Passed): The title clearly summarizes the main change: improving both the user experience (UX) and the robustness of AI-powered bug issue generation, which aligns directly with the PR's focus on error handling, API validation, and template enhancements.
✨ Finishing touches
  • 📝 Generate docstrings
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

@github-actions
Contributor

❌ Pre-commit checks failed

The pre-commit hooks found issues that need to be fixed. Please run the following commands locally to fix them:

# Install pre-commit if you haven't already
pip install pre-commit

# Run pre-commit on all files
pre-commit run --all-files

# Or run pre-commit on staged files only
pre-commit run

After running these commands, the pre-commit hooks will automatically fix most issues.
Please review the changes, commit them, and push to your branch.

💡 Tip: You can set up pre-commit to run automatically on every commit by running:

pre-commit install
Pre-commit output
[INFO] Initializing environment for https://github.com/pre-commit/pre-commit-hooks.
[WARNING] repo `https://github.com/pre-commit/pre-commit-hooks` uses deprecated stage names (commit, push) which will be removed in a future version.  Hint: often `pre-commit autoupdate --repo https://github.com/pre-commit/pre-commit-hooks` will fix this.  if it does not -- consider reporting an issue to that repo.
[INFO] Initializing environment for https://github.com/pycqa/isort.
[WARNING] repo `https://github.com/pycqa/isort` uses deprecated stage names (commit, merge-commit, push) which will be removed in a future version.  Hint: often `pre-commit autoupdate --repo https://github.com/pycqa/isort` will fix this.  if it does not -- consider reporting an issue to that repo.
[INFO] Initializing environment for https://github.com/astral-sh/ruff-pre-commit.
[INFO] Initializing environment for https://github.com/djlint/djLint.
[INFO] Initializing environment for local.
[INFO] Installing environment for https://github.com/pre-commit/pre-commit-hooks.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
[INFO] Installing environment for https://github.com/pycqa/isort.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
[INFO] Installing environment for https://github.com/astral-sh/ruff-pre-commit.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
[INFO] Installing environment for https://github.com/djlint/djLint.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
[INFO] Installing environment for local.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
check python ast.........................................................Passed
check builtin type constructor use.......................................Passed
check yaml...............................................................Passed
fix python encoding pragma...............................................Passed
mixed line ending........................................................Passed
isort....................................................................Passed
ruff.....................................................................Passed
ruff-format..............................................................Failed
- hook id: ruff-format
- files were modified by this hook

1 file reformatted, 465 files left unchanged


For more information, see the pre-commit documentation.

@github-actions github-actions bot added the pre-commit: failed (Pre-commit checks failed) label Dec 11, 2025
Comment on lines +2202 to +2204
api_key = os.getenv("OPENAI_API_KEY")
if not api_key or api_key.startswith("sk-proj-"):
    return {"error": "OpenAI API key is not configured"}

Bug: The validation logic incorrectly rejects all modern OpenAI API keys because it checks whether the key starts with "sk-proj-", which is the standard prefix for new keys.
Severity: CRITICAL | Confidence: High

🔍 Detailed Analysis

The validation logic for the OpenAI API key is incorrect. The condition if not api_key or api_key.startswith("sk-proj-") will reject all legitimate, modern OpenAI API keys, as they use the sk-proj- prefix. This will cause the AI issue generation feature to fail for any user with a valid key, returning an "OpenAI API key is not configured" error. The likely intent was to reject a specific placeholder default value, but the implementation incorrectly blocks all valid keys matching this common pattern.

💡 Suggested Fix

The validation logic should be changed to correctly identify valid keys. Instead of rejecting keys that start with sk-proj-, the check should ensure the key is present and starts with the broader sk- prefix, for example: if not api_key or not api_key.startswith("sk-"):.

🤖 Prompt for AI Agent
Review the code at the location below. A potential bug has been identified by an AI agent.
Verify if this is a real issue. If it is, propose a fix; if not, explain why it's not valid.

Location: website/views/issue.py#L2202-L2204

Potential issue: The validation logic for the OpenAI API key is incorrect. The condition `if not api_key or api_key.startswith("sk-proj-")` will reject all legitimate, modern OpenAI API keys, as they use the `sk-proj-` prefix. This will cause the AI issue generation feature to fail for any user with a valid key, returning an "OpenAI API key is not configured" error. The likely intent was to reject a specific placeholder default value, but the implementation incorrectly blocks all valid keys matching this common pattern.

Did we get this right? 👍 / 👎 to inform future reviews.
Reference ID: 7136798

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting

📥 Commits

Reviewing files that changed from the base of the PR and between 385df26 and dbe1975.

📒 Files selected for processing (3)
  • website/templates/github_issue_prompt.html (3 hunks)
  • website/tests/test_generate_github_issue.py (1 hunks)
  • website/views/issue.py (3 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
website/tests/test_generate_github_issue.py (1)
website/views/issue.py (1)
  • generate_github_issue (2200-2257)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: Run Tests
  • GitHub Check: docker-test
🔇 Additional comments (9)
website/templates/github_issue_prompt.html (3)

38-43: LGTM!

The error alert implementation is well-structured with proper conditional rendering, accessibility attributes, and consistent Tailwind styling.


56-56: LGTM!

The textarea pre-fill correctly preserves user input when the form is re-rendered after an error.


72-72: LGTM!

The input value pre-fill correctly preserves the repository URL when the form is re-rendered after an error.

website/tests/test_generate_github_issue.py (3)

10-27: LGTM!

The success test case properly mocks the OpenAI client chain and validates both the result structure and API invocation. The test coverage is comprehensive.
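
For readers unfamiliar with the pattern, a minimal sketch of mocking the OpenAI client chain in a test like this; the patch target, the generate_github_issue call signature, and the response fields are assumptions, not the PR's exact test code:

import json
from unittest.mock import MagicMock, patch

from django.test import SimpleTestCase

from website.views.issue import generate_github_issue


class GenerateGithubIssueSuccessTest(SimpleTestCase):
    @patch.dict("os.environ", {"OPENAI_API_KEY": "sk-test"})
    @patch("website.views.issue.OpenAI")  # assumes OpenAI is imported at module level in issue.py
    def test_success(self, mock_openai):
        payload = {"title": "Crash on login", "body": "Steps to reproduce ..."}
        completion = MagicMock()
        completion.choices[0].message.content = json.dumps(payload)
        mock_openai.return_value.chat.completions.create.return_value = completion

        result = generate_github_issue("App crashes when logging in")  # signature assumed

        self.assertEqual(result["title"], "Crash on login")
        mock_openai.return_value.chat.completions.create.assert_called_once()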


29-42: LGTM!

The invalid JSON test case correctly validates error handling when OpenAI returns non-JSON content.


44-48: LGTM!

The missing API key test case correctly validates error handling when the environment variable is not configured.

website/views/issue.py (3)

2174-2183: LGTM!

The error handling correctly preserves user input and renders the prompt template with appropriate context and HTTP status code.


2206-2207: LGTM!

Adding a timeout to the OpenAI client is a good defensive practice that prevents the view from hanging indefinitely.


2231-2239: LGTM!

The enhanced parsing correctly handles OpenAI responses that wrap JSON in fenced code blocks. The logic properly strips formatting markers and handles case variations.
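
As an illustration of this kind of unwrapping, a minimal sketch of stripping fenced code blocks from a model reply before json.loads; the function name is hypothetical and the exact markers this PR handles may differ:

import json


def parse_issue_json(raw):
    text = raw.strip()
    if text.startswith("```"):
        # Drop the opening fence along with any language tag such as ```json or ```JSON.
        text = text.split("\n", 1)[1] if "\n" in text else ""
        # Drop the closing fence if one is present.
        if text.rstrip().endswith("```"):
            text = text.rstrip()[:-3]
    return json.loads(text)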

Comment on lines +2202 to +2204
api_key = os.getenv("OPENAI_API_KEY")
if not api_key or api_key.startswith("sk-proj-"):
    return {"error": "OpenAI API key is not configured"}
Contributor


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

OpenAI API key prefixes valid formats sk-proj- project keys

💡 Result:

Short answer — current OpenAI API key prefixes/formats:

  • Legacy user secret keys: start with "sk-". [2]
  • Project secret keys (current default): start with "sk-proj-". Project keys are managed per project and appear redacted like "sk-abc...def" in responses. [1][4]

Notes: project-related IDs in the API use prefixes like "proj_" and "key_" for project and key identifiers (e.g., proj_abc, key_abc). [1][4]

Sources: [1] OpenAI Project API keys docs; [2] OpenAI Help Center — Incorrect API key; [4] OpenAI Admin API keys.


🏁 Script executed:

# Get context around the code snippet to understand the validation intent
cd /tmp/repo && sed -n '2190,2220p' website/views/issue.py 2>/dev/null || echo "File not found, listing structure..."

Repository: OWASP-BLT/BLT

Length of output: 170


🏁 Script executed:

# Search for the file and examine the context
find . -name "issue.py" -path "*/website/views/*" -type f

Repository: OWASP-BLT/BLT

Length of output: 81


🏁 Script executed:

sed -n '2195,2220p' ./website/views/issue.py

Repository: OWASP-BLT/BLT

Length of output: 1096


Remove the invalid API key prefix check.

The validation at line 2203 rejects keys starting with sk-proj-, which is the current standard OpenAI API key prefix for project-scoped keys. This check will incorrectly block users with valid project keys. The sk- prefix alone covers both legacy (sk-) and project-scoped (sk-proj-) keys, making the startswith("sk-proj-") check incorrect.

-        if not api_key or api_key.startswith("sk-proj-"):
+        if not api_key:
             return {"error": "OpenAI API key is not configured"}
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

Before:
    api_key = os.getenv("OPENAI_API_KEY")
    if not api_key or api_key.startswith("sk-proj-"):
        return {"error": "OpenAI API key is not configured"}

After:
    api_key = os.getenv("OPENAI_API_KEY")
    if not api_key:
        return {"error": "OpenAI API key is not configured"}
🤖 Prompt for AI Agents
In website/views/issue.py around lines 2202 to 2204, the code rejects API keys starting with "sk-proj-", which are valid project-scoped OpenAI keys; remove the invalid api_key.startswith("sk-proj-") check and instead either only verify presence or, if you want a prefix check, validate with api_key.startswith("sk-") so both legacy and project keys pass.

@github-project-automation github-project-automation bot moved this from Backlog to Ready in 📌 OWASP BLT Project Board Dec 11, 2025
@github-actions github-actions bot added the tests: passed (Django tests passed) and last-active: 0d (PR last updated 0 days ago) labels Dec 11, 2025
@github-actions
Contributor

💬 Reminder: Unresolved Conversations

Hi @anuragthippani1!

This pull request has 2 unresolved conversations that need to be addressed.

Please review and resolve the pending discussions so we can move forward with merging this PR.

Thank you! 🙏


Labels

  • files-changed: 3 (PR changes 3 files)
  • last-active: 0d (PR last updated 0 days ago)
  • needs-peer-review (PR needs peer review)
  • pre-commit: failed (Pre-commit checks failed)
  • tests: passed (Django tests passed)

Projects

Status: Ready
