Fix openrouter #220

Conversation
Warning: Rate limit exceeded

@maxkahan has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 3 minutes and 51 seconds before requesting another review.

⌛ How to resolve this issue? After the wait time has elapsed, a review can be triggered with the review command. We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work? CodeRabbit enforces hourly rate limits for each developer per organization. Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout. Please see our FAQ for further information.

📒 Files selected for processing (2)
Walkthrough

Refactors OpenRouter LLM to accept structured Instructions, injects those instructions as a system message, rebuilds conversation history before responses, updates the example to use model-driven personality mapping and new turn-detection params, and removes a small post-turn STT cleanup and micro-delay; other edits are formatting/test adjustments.

Changes
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20–25 minutes
Possibly related PRs
Suggested reviewers
Poem
Pre-merge checks and finishing touches

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 2
🧹 Nitpick comments (1)
plugins/openrouter/example/openrouter_example.py (1)
50-54: Consider stripping leading whitespace from the instructions.

The multi-line f-string includes indentation whitespace that will be passed to the LLM. While most LLMs handle this gracefully, cleaner formatting is preferable.

```diff
-        instructions=f"""
-        Answer in 6 words or less.
-        {personality}
-        """,
+        instructions=f"Answer in 6 words or less. {personality}",
```

Or use `textwrap.dedent()` if you prefer the multi-line format for readability.
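For illustration, a minimal sketch of the dedent variant; the `personality` value here is a hypothetical stand-in for whatever the example's mapping produces:

```python
import textwrap

# Hypothetical value standing in for the model-driven personality mapping in the example.
personality = "You are cheerful and upbeat."

# textwrap.dedent strips the common leading whitespace, so the prompt sent to the
# LLM no longer carries the source file's indentation.
instructions = textwrap.dedent(
    f"""
    Answer in 6 words or less.
    {personality}
    """
).strip()

print(repr(instructions))
# 'Answer in 6 words or less.\nYou are cheerful and upbeat.'
```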
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (4)
- plugins/openrouter/example/openrouter_example.py (1 hunks)
- plugins/openrouter/tests/test_openrouter_llm.py (2 hunks)
- plugins/openrouter/vision_agents/plugins/openrouter/openrouter_llm.py (1 hunks)
- pyproject.toml (0 hunks)
💤 Files with no reviewable changes (1)
- pyproject.toml
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (.cursor/rules/python.mdc)
**/*.py: Never adjust sys.path in Python code
Never write `except Exception as e`; use specific exception handling
Avoid using getattr, hasattr, delattr and setattr; prefer normal attribute access in Python
Docstrings should follow the Google style guide for docstrings
Files:
- plugins/openrouter/vision_agents/plugins/openrouter/openrouter_llm.py
- plugins/openrouter/example/openrouter_example.py
- plugins/openrouter/tests/test_openrouter_llm.py
**/*test*.py
📄 CodeRabbit inference engine (.cursor/rules/python.mdc)
**/*test*.py: Never mock in tests; use pytest for testing
Mark integration tests with @pytest.mark.integration decorator
@pytest.mark.asyncio is not needed - it is automatic
Files:
plugins/openrouter/tests/test_openrouter_llm.py
🧬 Code graph analysis (2)
plugins/openrouter/vision_agents/plugins/openrouter/openrouter_llm.py (1)
plugins/openai/vision_agents/plugins/openai/openai_llm.py (2)
- create_response (112-205)
- add_conversation_history (108-110)
plugins/openrouter/tests/test_openrouter_llm.py (2)
agents-core/vision_agents/core/instructions.py (1)
- Instructions (15-115)

agents-core/vision_agents/core/llm/llm.py (1)
- set_instructions (179-193)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
- GitHub Check: unit / Test "not integration"
- GitHub Check: unit / Mypy
- GitHub Check: unit / Ruff
- GitHub Check: unit / Mypy
- GitHub Check: unit / Ruff
- GitHub Check: unit / Test "not integration"
🔇 Additional comments (4)
plugins/openrouter/vision_agents/plugins/openrouter/openrouter_llm.py (1)

48-77: LGTM! Clean workaround for OpenRouter's API constraints.

The override correctly handles the absence of an `instructions` parameter by converting it to a system message. The try/finally block ensures `_instructions` is always restored, even if the parent call raises an exception.
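A minimal sketch of the pattern being praised here; the method and attribute names mirror the snippets quoted later in this review, while the parent class and its behavior are stand-ins, not the plugin's actual code:

```python
import asyncio
from typing import Any, Optional


class _BaseLLM:
    """Stand-in for the real parent class (the actual base follows the OpenAI-style API)."""

    async def create_response(self, *args: Any, **kwargs: Any) -> dict:
        # The real parent would call the provider; this stub just echoes the input.
        return {"input": kwargs.get("input", [])}


class OpenRouterLLM(_BaseLLM):
    """Sketch of the override described above."""

    _instructions: Optional[str] = None

    async def create_response(self, *args: Any, **kwargs: Any) -> dict:
        # OpenRouter has no dedicated `instructions` parameter, so inject the
        # instructions as a leading system message instead.
        messages = list(kwargs.get("input", []))
        if self._instructions:
            messages.insert(0, {"role": "system", "content": self._instructions, "type": "message"})
        kwargs["input"] = messages

        # Temporarily clear instructions so the parent does not add them again,
        # and restore them even if the parent call raises.
        self._instructions, original = None, self._instructions
        try:
            return await super().create_response(*args, **kwargs)
        finally:
            self._instructions = original


async def _demo() -> None:
    llm = OpenRouterLLM()
    llm._instructions = "Answer in 6 words or less."
    result = await llm.create_response(input=[{"role": "user", "content": "Hi", "type": "message"}])
    print(result["input"][0])  # the injected system message


if __name__ == "__main__":
    asyncio.run(_demo())
```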
plugins/openrouter/tests/test_openrouter_llm.py (1)

9-9: LGTM! Test correctly adopts Instructions class.

The test now properly uses the `Instructions` wrapper for system instructions, aligning with the structured instruction handling introduced in the codebase. The extended prompt description provides clearer context for the expected "NL" response.

Also applies to: 146-146, 149-149
plugins/openrouter/example/openrouter_example.py (2)
57-59: Parameter names are correct and align with the `SmartTurnDetection` API.

The parameters `pre_speech_buffer_ms` and `speech_probability_threshold` match the actual `SmartTurnDetection.__init__` signature. The values (2000ms pre-speech buffer and 0.9 probability threshold) are reasonable and higher than the defaults (200ms and 0.5 respectively).
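For context, a minimal sketch of how these parameters might be passed; the review only confirms the parameter names and defaults, so the import path and surrounding wiring are assumptions:

```python
# Hypothetical import path; adjust to wherever SmartTurnDetection actually lives.
from vision_agents.plugins.smart_turn import SmartTurnDetection

turn_detection = SmartTurnDetection(
    pre_speech_buffer_ms=2000,         # default is 200ms per the review
    speech_probability_threshold=0.9,  # default is 0.5 per the review
)
```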
33-45: Code is correct; model identifier is valid.

The "google/gemini-2.5-flash" model is confirmed to exist in the OpenRouter API. The personality mapping logic on lines 33–45 is clear, readable, and follows Python conventions with no coding guideline violations. No changes required.
plugins/openrouter/vision_agents/plugins/openrouter/openrouter_llm.py (outdated review comment, resolved)

plugins/openrouter/vision_agents/plugins/openrouter/openrouter_llm.py (outdated review comment, resolved)
Actionable comments posted: 0
♻️ Duplicate comments (1)
plugins/openrouter/vision_agents/plugins/openrouter/openrouter_llm.py (1)
90-113: Fix conversation-history role corruption and remove `hasattr` per guidelines

When rebuilding `messages` from `self._conversation.messages`, the fallback branch still hardcodes `"role": "user"` for non-dict originals. This mislabels assistant/system turns as user messages and corrupts context. The checks also continue to use `hasattr`, which conflicts with the project's Python guidelines.

You can both preserve roles and avoid `hasattr` by caching `_conversation` via normal attribute access in a `try/except` and extracting `m.role` when available:

```diff
-        # Build the message list
-        messages = []
-
-        if hasattr(self, "_conversation") and self._conversation:
-            # Extract serializable message content from conversation history
-            for m in self._conversation.messages:
-                if isinstance(m.original, dict):
-                    messages.append(m.original)
-                else:
-                    # For non-dict originals, use the normalized content
-                    messages.append(
-                        {"content": m.content, "role": "user", "type": "message"}
-                    )
+        # Build the message list
+        messages = []
+
+        conversation = None
+        try:
+            conversation = self._conversation  # type: ignore[attr-defined]
+        except AttributeError:
+            pass
+
+        if conversation:
+            # Extract serializable message content from conversation history
+            for m in conversation.messages:
+                if isinstance(m.original, dict):
+                    messages.append(m.original)
+                else:
+                    # For non-dict originals, preserve the role when available
+                    role = "user"
+                    try:
+                        role = m.role  # type: ignore[attr-defined]
+                    except AttributeError:
+                        pass
+                    messages.append(
+                        {"content": m.content, "role": role, "type": "message"}
+                    )
@@
-        # Add new messages
-        messages.extend(new_messages)
-        kwargs["input"] = messages
-
-        # Add messages to conversation
-        if hasattr(self, "_conversation") and self._conversation:
-            normalized_messages = self._normalize_message(new_messages)
-            for msg in normalized_messages:
-                self._conversation.messages.append(msg)
+        # Add new messages
+        messages.extend(new_messages)
+        kwargs["input"] = messages
+
+        # Add messages to conversation
+        if conversation:
+            normalized_messages = self._normalize_message(new_messages)
+            for msg in normalized_messages:
+                conversation.messages.append(msg)
```

This keeps history roles accurate and removes `hasattr` usage in favor of normal attribute access with `AttributeError` handling, in line with the coding guidelines.
🧹 Nitpick comments (1)
plugins/openrouter/vision_agents/plugins/openrouter/openrouter_llm.py (1)
19-19: `_instructions` should allow None to match usage and avoid type ignores

`_instructions` is annotated as `str` but later set to `None` in `create_response`, requiring `# type: ignore[assignment]`. Make the type explicitly nullable and drop the ignore to better reflect actual usage.

```diff
-    _instructions: str
+    _instructions: str | None
 ...
-        # Temporarily clear instructions so parent doesn't add it
-        self._instructions, original = None, self._instructions  # type: ignore[assignment]
+        # Temporarily clear instructions so parent doesn't add it
+        self._instructions, original = None, self._instructions
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
plugins/openrouter/vision_agents/plugins/openrouter/openrouter_llm.py (2 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (.cursor/rules/python.mdc)
**/*.py: Never adjust sys.path in Python code
Never write `except Exception as e`; use specific exception handling
Avoid using getattr, hasattr, delattr and setattr; prefer normal attribute access in Python
Docstrings should follow the Google style guide for docstrings
Files:
plugins/openrouter/vision_agents/plugins/openrouter/openrouter_llm.py
🧬 Code graph analysis (1)
plugins/openrouter/vision_agents/plugins/openrouter/openrouter_llm.py (1)
plugins/openai/vision_agents/plugins/openai/openai_llm.py (2)
- create_response (112-205)
- add_conversation_history (108-110)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: unit / Test "not integration"
- GitHub Check: unit / Mypy
- GitHub Check: unit / Test "not integration"
- GitHub Check: unit / Mypy
🔇 Additional comments (1)
plugins/openrouter/vision_agents/plugins/openrouter/openrouter_llm.py (1)
50-82: Confirm `_instructions` is a rendered string before injecting as system content

This override assumes `self._instructions` can be placed directly in `{"content": self._instructions, ...}`. With the move to structured `Instructions`, if `_instructions` is an object (not a plain string), this may break JSON serialization or inject an unintended representation.

Please verify that by the time `create_response` runs, `_instructions` is already rendered to a string suitable as system content (e.g., via a render/format method in `LLM.set_instructions`). If not, add the appropriate conversion here before building the system message.
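If a conversion does turn out to be needed, one possible shape is sketched below. The `Instructions` class here is a hypothetical stand-in for the one in agents-core/vision_agents/core/instructions.py, and the `.text` accessor is an assumption; use whatever render/format method the real class exposes:

```python
from typing import Union


class Instructions:
    """Hypothetical stand-in for the real structured Instructions class."""

    def __init__(self, text: str) -> None:
        self.text = text


def render_instructions(instructions: Union[str, Instructions, None]) -> str:
    """Normalize instructions to a plain string suitable for a system message."""
    if instructions is None:
        return ""
    if isinstance(instructions, str):
        return instructions
    return instructions.text  # assumption: structured Instructions carry their text here


# Usage sketch: build the system message from the rendered string, not the object itself.
system_content = render_instructions(Instructions("Answer in 6 words or less."))
message = {"role": "system", "content": system_content, "type": "message"}
print(message)
```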
Fixes the instruction-following issue
Summary by CodeRabbit
New Features
Configuration Changes
Improvements
Bug Fixes
✏️ Tip: You can customize this high-level summary in your review settings.