Walkthrough

Renamed `LLMResponse.final_response` to `final_text_response` and added an optional `reasoning` field. Base provider processing now extracts the text and optional reasoning from the full message object and parses JSON from the text when a response schema is supplied. The OpenAI provider was adjusted to pass the full message rather than only its content. LLMAgent logging was updated accordingly. The example config's default model was switched to google/gemini-2.5-pro.
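The renamed field and new optional field described above might look like the following minimal sketch. Only the field names `final_text_response`, `parsed_response`, and `reasoning` come from the walkthrough; the types and defaults are assumptions.

```python
from dataclasses import dataclass
from typing import Any, Optional


@dataclass
class LLMResponse:
    # Renamed from final_response: the model's final text output.
    final_text_response: str
    # Parsed JSON when a response_schema was supplied, else None.
    parsed_response: Optional[Any] = None
    # New optional field: a reasoning/thinking trace, if the provider returns one.
    reasoning: Optional[str] = None
```

Keeping `reasoning` optional with a `None` default lets providers that do not emit reasoning construct the response unchanged.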
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant C as Caller
    participant A as LLMAgent
    participant P as LLM Provider
    participant API as LLM API
    C->>A: prompt(input, response_schema?)
    A->>P: get_model_response(...)
    P->>API: send messages
    API-->>P: message (content + optional reasoning)
    rect rgba(200, 240, 255, 0.3)
        note right of P: Updated processing
        P->>P: extract final_text_response = message.content
        P->>P: reasoning = message.reasoning (optional)
        alt response_schema provided
            P->>P: parse JSON from final_text_response
        else
            P->>P: parsed_response = None
        end
    end
    P-->>A: LLMResponse{final_text_response, parsed_response, reasoning}
    A->>A: log final_text_response and reasoning
    A-->>C: LLMResponse
```
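The "Updated processing" step in the diagram can be sketched as a small helper. This is an illustration, not the project's actual implementation: the function name `process_message` and the use of `getattr` for the optional reasoning are assumptions; only the extraction/parse logic follows the diagram.

```python
import json
from typing import Any, Optional


def process_message(message: Any, response_schema: Optional[dict] = None) -> dict:
    # The provider now receives the full message object, not just its text.
    final_text_response = message.content
    # reasoning is optional; not every model or provider supplies it.
    reasoning = getattr(message, "reasoning", None)
    # JSON is parsed from the text only when a response_schema was provided.
    parsed_response = json.loads(final_text_response) if response_schema else None
    return {
        "final_text_response": final_text_response,
        "parsed_response": parsed_response,
        "reasoning": reasoning,
    }
```

A caller without a schema gets `parsed_response = None`, matching the `else` branch in the diagram.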
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes
Summary by CodeRabbit

New Features
- LLM responses now expose an optional reasoning field alongside the final text response.

Chores
- Example config's default model switched to google/gemini-2.5-pro.