Improve AI bug issue generation UX and robustness #5269
Conversation
👋 Hi @anuragthippani1! This pull request needs a peer review before it can be merged. Please request a review from a team member who is not:
Once a valid peer review is submitted, this check will pass automatically. Thank you!
Walkthrough

Three files were modified to improve error handling in GitHub issue generation: the template was updated to display errors and pre-fill form fields, the view functions were enhanced with API key validation and robust JSON response parsing, and unit tests were added to cover error scenarios.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✨ Finishing touches
🧪 Generate unit tests (beta)
❌ Pre-commit checks failed

The pre-commit hooks found issues that need to be fixed. Please run the following commands locally to fix them:

```bash
# Install pre-commit if you haven't already
pip install pre-commit

# Run pre-commit on all files
pre-commit run --all-files

# Or run pre-commit on staged files only
pre-commit run
```

After running these commands, the pre-commit hooks will automatically fix most issues.

💡 Tip: You can set up pre-commit to run automatically on every commit by running `pre-commit install`.

For more information, see the pre-commit documentation.
```python
api_key = os.getenv("OPENAI_API_KEY")
if not api_key or api_key.startswith("sk-proj-"):
    return {"error": "OpenAI API key is not configured"}
```
Bug: The validation logic incorrectly rejects all modern OpenAI API keys because it checks whether the key starts with `sk-proj-`, which is the standard prefix for new keys.
Severity: CRITICAL | Confidence: High
🔍 Detailed Analysis
The validation logic for the OpenAI API key is incorrect. The condition if not api_key or api_key.startswith("sk-proj-") will reject all legitimate, modern OpenAI API keys, as they use the sk-proj- prefix. This will cause the AI issue generation feature to fail for any user with a valid key, returning an "OpenAI API key is not configured" error. The likely intent was to reject a specific placeholder default value, but the implementation incorrectly blocks all valid keys matching this common pattern.
💡 Suggested Fix
The validation logic should be changed to correctly identify valid keys. Instead of rejecting keys that start with `sk-proj-`, the check should ensure the key is present and starts with the broader `sk-` prefix, for example: `if not api_key or not api_key.startswith("sk-"):`.
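As a rough illustration, a minimal sketch of that broader check; the helper name and docstring are ours, not from the PR:

```python
import os


def get_openai_api_key():
    """Return a usable OpenAI API key, or None when unset or malformed.

    Accepts both legacy keys ("sk-...") and project-scoped keys
    ("sk-proj-..."); only missing values and keys failing the broad
    "sk-" prefix check are rejected.
    """
    api_key = os.getenv("OPENAI_API_KEY")
    if not api_key or not api_key.startswith("sk-"):
        return None
    return api_key
```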
🤖 Prompt for AI Agent
Review the code at the location below. A potential bug has been identified by an AI agent. Verify if this is a real issue. If it is, propose a fix; if not, explain why it's not valid.
Location: website/views/issue.py#L2202-L2204
Potential issue: The validation logic for the OpenAI API key is incorrect. The condition `if not api_key or api_key.startswith("sk-proj-")` will reject all legitimate, modern OpenAI API keys, as they use the `sk-proj-` prefix. This will cause the AI issue generation feature to fail for any user with a valid key, returning an "OpenAI API key is not configured" error. The likely intent was to reject a specific placeholder default value, but the implementation incorrectly blocks all valid keys matching this common pattern.
Actionable comments posted: 1
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting
📒 Files selected for processing (3)
- website/templates/github_issue_prompt.html (3 hunks)
- website/tests/test_generate_github_issue.py (1 hunks)
- website/views/issue.py (3 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
website/tests/test_generate_github_issue.py (1)
website/views/issue.py (1)
generate_github_issue(2200-2257)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: Run Tests
- GitHub Check: docker-test
🔇 Additional comments (9)
website/templates/github_issue_prompt.html (3)
38-43: LGTM! The error alert implementation is well-structured with proper conditional rendering, accessibility attributes, and consistent Tailwind styling.
56-56: LGTM! The textarea pre-fill correctly preserves user input when the form is re-rendered after an error.
72-72: LGTM! The input value pre-fill correctly preserves the repository URL when the form is re-rendered after an error.
website/tests/test_generate_github_issue.py (3)
10-27: LGTM! The success test case properly mocks the OpenAI client chain and validates both the result structure and the API invocation. The test coverage is comprehensive.
29-42: LGTM! The invalid JSON test case correctly validates error handling when OpenAI returns non-JSON content.
44-48: LGTM! The missing API key test case correctly validates error handling when the environment variable is not configured.
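For illustration, a self-contained sketch of how such a mock of the OpenAI client chain can be built; the response shape follows the chat.completions API, but the exact wiring in the PR's tests is an assumption:

```python
import json
from unittest.mock import MagicMock

# Fake client whose chat.completions.create(...) returns a canned response,
# mirroring how a success test might stub the client chain.
fake_client = MagicMock()
fake_client.chat.completions.create.return_value = MagicMock(
    choices=[
        MagicMock(
            message=MagicMock(
                content=json.dumps({"title": "Crash on login", "description": "Steps..."})
            )
        )
    ]
)

# The code under test reads the content the same way it reads a real response.
response = fake_client.chat.completions.create(model="gpt-4o-mini", messages=[])
payload = json.loads(response.choices[0].message.content)
assert payload == {"title": "Crash on login", "description": "Steps..."}
```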
website/views/issue.py (3)
2174-2183: LGTM! The error handling correctly preserves user input and renders the prompt template with the appropriate context and HTTP status code.
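As a rough sketch of what such a render path might look like (the helper name, context keys, and template path are assumptions, not taken from the PR):

```python
from django.shortcuts import render

def _render_prompt_error(request, error, description, repo_url):
    # Hypothetical helper: re-render the prompt page with the error message
    # and the user's original inputs so the form stays filled.
    context = {
        "error": error,
        "description": description,
        "repository_url": repo_url,
    }
    return render(request, "github_issue_prompt.html", context, status=400)
```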
2206-2207: LGTM! Adding a timeout to the OpenAI client is a good defensive practice that prevents the view from hanging indefinitely.
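For reference, a minimal sketch of a client-level timeout; the 30-second value is illustrative, not taken from the PR:

```python
import os

from openai import OpenAI

# A client-level timeout keeps a stalled API call from hanging the view;
# 30 seconds here is an assumed value for illustration.
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"), timeout=30.0)
```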
2231-2239: LGTM! The enhanced parsing correctly handles OpenAI responses that wrap JSON in fenced code blocks. The logic properly strips formatting markers and handles case variations.
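One plausible way to strip such fences before parsing; the PR's actual implementation may differ:

```python
import json


def parse_model_json(content: str) -> dict:
    """Parse a JSON payload that may be wrapped in a fenced code block.

    Drops an opening fence line (three backticks plus an optional language
    tag such as json or JSON, any case) and a trailing fence marker before
    parsing; bare JSON passes through unchanged.
    """
    text = content.strip()
    if text.startswith("```"):
        first_newline = text.find("\n")
        text = text[first_newline + 1:] if first_newline != -1 else ""
        if text.rstrip().endswith("```"):
            text = text.rstrip()[:-3]
    return json.loads(text)


# A fenced response parses the same as bare JSON.
assert parse_model_json('```json\n{"title": "Bug"}\n```') == {"title": "Bug"}
```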
```python
api_key = os.getenv("OPENAI_API_KEY")
if not api_key or api_key.startswith("sk-proj-"):
    return {"error": "OpenAI API key is not configured"}
```
🧩 Analysis chain
🌐 Web query:
OpenAI API key prefixes valid formats sk-proj- project keys
💡 Result:
Short answer — current OpenAI API key prefixes/formats:
- Legacy user secret keys: start with "sk-". [2]
- Project secret keys (current default): start with "sk-proj-". Project keys are managed per project and appear redacted like "sk-abc...def" in responses. [1][4]
Notes: project-related IDs in the API use prefixes like "proj_" and "key_" for project and key identifiers (e.g., proj_abc, key_abc). [1][4]
Sources: [1] OpenAI Project API keys docs; [2] OpenAI Help Center — Incorrect API key; [4] OpenAI Admin API keys.
🏁 Script executed:
```bash
# Get context around the code snippet to understand the validation intent
cd /tmp/repo && sed -n '2190,2220p' website/views/issue.py 2>/dev/null || echo "File not found, listing structure..."
```

Repository: OWASP-BLT/BLT
Length of output: 170
🏁 Script executed:
```bash
# Search for the file and examine the context
find . -name "issue.py" -path "*/website/views/*" -type f
```

Repository: OWASP-BLT/BLT
Length of output: 81
🏁 Script executed:
```bash
sed -n '2195,2220p' ./website/views/issue.py
```

Repository: OWASP-BLT/BLT
Length of output: 1096
Remove the invalid API key prefix check.
The validation at line 2203 rejects keys starting with `sk-proj-`, which is the current standard OpenAI API key prefix for project-scoped keys. This check will incorrectly block users with valid project keys. The `sk-` prefix alone covers both legacy (`sk-`) and project-scoped (`sk-proj-`) keys, making the `startswith("sk-proj-")` check incorrect.
```diff
-    if not api_key or api_key.startswith("sk-proj-"):
+    if not api_key:
         return {"error": "OpenAI API key is not configured"}
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
api_key = os.getenv("OPENAI_API_KEY")
if not api_key:
    return {"error": "OpenAI API key is not configured"}
```
🤖 Prompt for AI Agents
In website/views/issue.py around lines 2202 to 2204, the code rejects API keys starting with "sk-proj-", which are valid project-scoped OpenAI keys; remove the invalid `api_key.startswith("sk-proj-")` check and instead either only verify presence or, if you want a prefix check, validate with `api_key.startswith("sk-")` so both legacy and project keys pass.
💬 Reminder: Unresolved Conversations

Hi @anuragthippani1! This pull request has 2 unresolved conversations that need to be addressed. Please review and resolve the pending discussions so we can move forward with merging this PR. Thank you! 🙏
Adds resilient OpenAI integration for `generate_github_issue`: key check, timeout, fenced-JSON cleanup, and field validation (see the sketch after this list).
Shows an error banner on the AI prompt page and preserves form inputs when AI fails.
Adds unit tests for success, invalid JSON, and missing API key cases.
Requires `OPENAI_API_KEY` to be set (non-placeholder).
Tests: `poetry run python manage.py test website.tests.test_generate_github_issue` (pass).
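A sketch of the kind of field validation the description mentions; the required field names and error convention are assumptions:

```python
REQUIRED_FIELDS = ("title", "description")  # assumed field names


def validate_issue_payload(payload: dict) -> dict:
    """Return the payload when all required fields are present and non-empty,
    otherwise an error dict matching the view's {"error": ...} convention."""
    missing = [field for field in REQUIRED_FIELDS if not payload.get(field)]
    if missing:
        return {"error": "AI response missing fields: " + ", ".join(missing)}
    return payload
```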
Summary by CodeRabbit
Bug Fixes
New Features
Tests