Added native support for Gemini models' generation completion signals in the `LangchainLLMWrapper` class / `is_finished` method (#1727)
# Add Gemini model completion detection support
## Description
This PR adds proper completion detection support for Google's Vertex AI
Gemini models in the `LangchainLLMWrapper` class. Currently, Ragas
consistently raises `LLMDidNotFinishException` with Gemini models
because it doesn't correctly interpret Gemini's completion signals.
## Problem
The current implementation in `LangchainLLMWrapper` doesn't properly
handle Gemini's completion signals:
- Gemini uses "STOP" and "MAX_TOKENS" as valid completion reasons
- The completion status can be found in either `generation_info` or
`response_metadata`, depending on the response (see the sketch after this list)
- The current logic doesn't account for these Gemini-specific patterns
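
For reference, this is roughly where the finish reason surfaces on a LangChain generation object. The exact keys can vary with the `langchain-google-vertexai` version, so treat this as an illustrative sketch; the helper name `extract_finish_reason` is ours, not part of Ragas:

```python
from typing import Optional

from langchain_core.outputs import ChatGeneration, Generation


def extract_finish_reason(gen: Generation) -> Optional[str]:
    """Illustrative lookup: Gemini's finish reason may live in either place."""
    # Some integrations attach it to generation_info ...
    reason = (gen.generation_info or {}).get("finish_reason")
    if reason is not None:
        return reason
    # ... others attach it to the chat message's response_metadata.
    if isinstance(gen, ChatGeneration):
        return gen.message.response_metadata.get("finish_reason")
    return None
```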
## Solution
Modified the `is_finished` method of `LangchainLLMWrapper` to support
completion detection for Gemini models, adding proper handling of Gemini's
completion signals from both `generation_info` and `response_metadata`
(a sketch of the resulting check is shown below).
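
As a rough illustration of the detection logic (not the exact diff in this PR), an `is_finished`-style check for Gemini could look like the following; the function name `is_finished_for_gemini` is an assumption for the sketch, and the accepted reasons are the two listed above:

```python
from langchain_core.outputs import ChatGeneration, LLMResult

# Per the problem description, Gemini reports both of these for a
# completed (usable) generation.
GEMINI_FINISH_REASONS = {"STOP", "MAX_TOKENS"}


def is_finished_for_gemini(response: LLMResult) -> bool:
    """Sketch: every generation must carry a valid Gemini completion signal."""
    for generations in response.generations:
        for gen in generations:
            # Check generation_info first, then fall back to response_metadata.
            reason = (gen.generation_info or {}).get("finish_reason")
            if reason is None and isinstance(gen, ChatGeneration):
                reason = gen.message.response_metadata.get("finish_reason")
            if reason not in GEMINI_FINISH_REASONS:
                return False
    return True
```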