This repository was archived by the owner on Oct 23, 2025. It is now read-only.

Added reasoning to final response #22

Merged
javidsegura merged 1 commit into main from chore/sync-remote-and-open-pr2 on Sep 2, 2025

Conversation

javidsegura (Contributor) commented on Sep 2, 2025

Summary by CodeRabbit

  • New Features

    • Responses now include optional reasoning content when available, with improved visibility in logs.
    • Enhanced handling of structured outputs for more reliable JSON parsing.
  • Chores

    • Updated example configuration to use the google/gemini-2.5-pro model by default (backend unchanged).
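The config change above might look like the sketch below. This is illustrative only: the key names in examples/config.yaml are assumptions, and the DEFAULT_BACKEND value shown is a placeholder, since the PR only states it is unchanged.

```yaml
# examples/config.yaml (illustrative sketch; real key names may differ)
model: google/gemini-2.5-pro    # was: google/gemini-2.5-flash
DEFAULT_BACKEND: openai         # unchanged by this PR; value here is a guess
```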

coderabbitai bot commented on Sep 2, 2025

Walkthrough

Renamed LLMResponse.final_response to final_text_response and added an optional reasoning field. Updated base provider processing to extract text and reasoning from the message object and parse JSON from the text. Adjusted OpenAI provider to pass the full message. Updated LLMAgent logging. Example config default model switched to google/gemini-2.5-pro.

Changes

  • Examples Config (examples/config.yaml)
    Changed the default model from google/gemini-2.5-flash to google/gemini-2.5-pro; DEFAULT_BACKEND unchanged.
  • Core Schema: LLMResponse (src/agnostic_agent/utils/core/schemas.py)
    Renamed field final_response → final_text_response; added reasoning: Optional[Any] = None; parsed_response unchanged.
  • Provider Processing (src/agnostic_agent/llm_backends/providers/base_llm_provider.py, src/agnostic_agent/llm_backends/providers/openai_provider.py)
    Base: _process_response now reads message.content for final_text_response, extracts optional reasoning, and parses JSON from the text. OpenAI: passes the full message to _process_response instead of message.content.
  • LLM Strategy Logging (src/agnostic_agent/llm_strategy.py)
    Logs result.final_text_response instead of final_response; adds a debug log for result.reasoning; parsed-response logging is unaffected.
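The schema rename can be sketched as follows. This is a minimal dataclass sketch, not the project's actual code: the real LLMResponse in src/agnostic_agent/utils/core/schemas.py is likely a Pydantic model, and only the field names below come from the PR summary.

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class LLMResponse:
    final_text_response: str                # renamed from final_response
    parsed_response: Optional[Any] = None   # unchanged
    reasoning: Optional[Any] = None         # new optional field

# Callers that previously read .final_response must switch to the new name.
resp = LLMResponse(final_text_response="42", reasoning="counted on fingers")
```

Making reasoning optional with a None default keeps the change backward-compatible for backends that return no reasoning content.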

Sequence Diagram(s)

sequenceDiagram
    autonumber
    participant C as Caller
    participant A as LLMAgent
    participant P as LLM Provider
    participant API as LLM API

    C->>A: prompt(input, response_schema?)
    A->>P: get_model_response(...)
    P->>API: send messages
    API-->>P: message (content + optional reasoning)
    rect rgba(200, 240, 255, 0.3)
      note right of P: Updated processing
      P->>P: extract final_text_response = message.content
      P->>P: reasoning = message.reasoning (optional)
      alt response_schema provided
        P->>P: parse JSON from final_text_response
      else
        P->>P: parsed_response = None
      end
    end
    P-->>A: LLMResponse{final_text_response, parsed_response, reasoning}
    A->>A: log final_text_response and reasoning
    A-->>C: LLMResponse
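The updated processing step in the diagram can be sketched like this. All names here (process_response, the message attributes) are illustrative assumptions about the provider code, not its real API; the point is that the caller now passes the full message object so reasoning can be read alongside the text content.

```python
import json
from types import SimpleNamespace
from typing import Any, Optional

def process_response(message: Any, response_schema: Optional[dict] = None) -> dict:
    # Extract the text body and, when the backend supplies it, the reasoning.
    final_text_response = getattr(message, "content", None)
    reasoning = getattr(message, "reasoning", None)  # absent on many backends
    parsed_response = None
    if response_schema is not None and final_text_response:
        # Structured-output path: parse JSON out of the text response.
        parsed_response = json.loads(final_text_response)
    return {
        "final_text_response": final_text_response,
        "reasoning": reasoning,
        "parsed_response": parsed_response,
    }

msg = SimpleNamespace(content='{"answer": 42}', reasoning="thought it over")
result = process_response(msg, response_schema={"type": "object"})
```

Passing the whole message (rather than message.content, as the OpenAI provider did before) is what makes the optional reasoning field reachable without changing the method signature per backend.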

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Poem

I thump my paws at fields of prose,
New “final_text” is how it goes.
A whisper of “reasoning” hops in too,
Providers pass the message through.
Configs tilt toward “pro” delight—
I nibble logs by moonlit night.
Parsley parsed, the JSON’s right. 🥕✨


📜 Recent review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Free

💡 Knowledge Base configuration:

  • Jira integration is disabled
  • Linear integration is disabled

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between e6b778c and 433355f.

📒 Files selected for processing (5)
  • examples/config.yaml (1 hunks)
  • src/agnostic_agent/llm_backends/providers/base_llm_provider.py (1 hunks)
  • src/agnostic_agent/llm_backends/providers/openai_provider.py (1 hunks)
  • src/agnostic_agent/llm_strategy.py (1 hunks)
  • src/agnostic_agent/utils/core/schemas.py (1 hunks)

Note

🎁 Summarized by CodeRabbit Free



@javidsegura javidsegura merged commit 433355f into main Sep 2, 2025
1 of 4 checks passed
