feat: Add anthropic and google-genai adapters #9993
Conversation
        client = genai.Client(**kwargs)
        return client.aio if is_async else client
    except ImportError:
        raise ImportError("Google GenAI package not installed. Run: pip install google-genai")
Bug: Model Parameter Ignored in Client Factory
The create_google_genai_client factory accepts a model parameter but doesn't use or store it. This causes the GoogleGenAIAdapter to default to 'gemini-2.0-flash-exp' instead of the specified model, inconsistent with other adapters that preserve model information.
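One possible shape for a fix, mirroring the `AnthropicClientWrapper` approach mentioned in the PR summary; the wrapper name and fields below are illustrative, not the PR's actual API:

```python
# Sketch only: carry the requested model alongside the client so the adapter
# doesn't fall back to its hard-coded default. Names here are hypothetical.
from dataclasses import dataclass
from typing import Any


@dataclass
class GoogleGenAIClientWrapper:
    client: Any  # genai.Client, or its .aio counterpart when is_async=True
    model: str   # the model the caller asked for


def create_google_genai_client(
    model: str, is_async: bool = False, **kwargs: Any
) -> GoogleGenAIClientWrapper:
    try:
        from google import genai
    except ImportError:
        raise ImportError("Google GenAI package not installed. Run: pip install google-genai")
    client = genai.Client(**kwargs)
    return GoogleGenAIClientWrapper(client=client.aio if is_async else client, model=model)
```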
    def _check_if_async_client(self) -> bool:
        if hasattr(self.client, "aio"):
            return False
        return True
Bug: Client Type Detection Fails
The _check_if_async_client method incorrectly determines if the Google GenAI client is synchronous or asynchronous. It relies on the presence of the .aio attribute, which may not reliably distinguish between client types. This misidentification can lead to ValueError exceptions when attempting to use the wrong client method (sync vs. async).
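A more explicit check may be possible if the SDK's client classes are stable. This is only a sketch, assuming the object returned by `client.aio` is a `google.genai.client.AsyncClient`, which is worth verifying against the installed google-genai version:

```python
# Sketch: identify the async client by its class rather than by the presence
# of an attribute. Assumes google-genai exposes genai.client.AsyncClient.
from google import genai


def _check_if_async_client(self) -> bool:
    return isinstance(self.client, genai.client.AsyncClient)
```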
        if hasattr(response.content[0], "text"):
            return cast(str, response.content[0].text)
        else:
            raise ValueError("Anthropic returned unexpected content format")
Could it be useful to add the response body to this error message so users know what is being returned?
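For example, just as a sketch of that suggestion:

```python
# Include the raw content in the error so users can see what came back.
if hasattr(response.content[0], "text"):
    return cast(str, response.content[0].text)
else:
    raise ValueError(
        f"Anthropic returned unexpected content format: {response.content!r}"
    )
```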
            messages=messages,
            tools=[tool_definition],
            tool_choice={"type": "tool", "name": "extract_structured_data"},
            max_tokens=4096,
Shouldn't max_tokens be one of the configurable kwargs?
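Roughly, it could be threaded through the call kwargs with 4096 as the fallback; how the adapter actually surfaces kwargs is an assumption in this sketch:

```python
# Sketch: let callers override max_tokens instead of pinning it to 4096.
max_tokens = kwargs.pop("max_tokens", 4096)
response = self.client.messages.create(
    model=self.model_name,  # assumed adapter attribute
    messages=messages,
    tools=[tool_definition],
    tool_choice={"type": "tool", "name": "extract_structured_data"},
    max_tokens=max_tokens,
    **kwargs,
)
```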
    def _schema_to_tool(self, schema: Dict[str, Any]) -> Dict[str, Any]:
        description = schema.get(
            "description", "Extract structured data according to the provided schema"
Small nit: the task is more like "Respond in a format matching the provided schema" than it is about extracting structured data. This may not impact things much, but LLMs are sensitive to wording, so 🤷🏼‍♀️
logger = logging.getLogger(__name__)


class GoogleGenAIRateLimitError(Exception):
There's no google version of a rate limit error? What about an error code?
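As far as I can tell the SDK doesn't ship a dedicated rate-limit exception, but its errors carry an HTTP status code, so 429 could be translated into the custom error. The sketch below assumes `google.genai.errors.APIError` with a `.code` attribute:

```python
# Sketch: map the SDK's 429 responses onto the adapter's custom exception.
from google.genai import errors as genai_errors

try:
    response = client.models.generate_content(model=model, contents=prompt)
except genai_errors.APIError as exc:
    if getattr(exc, "code", None) == 429:  # RESOURCE_EXHAUSTED / rate limited
        raise GoogleGenAIRateLimitError(str(exc)) from exc
    raise
```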
    def _supports_tool_calls(self) -> bool:
        model_name = self.model_name.lower()
        if "gemini-1.5" in model_name or "gemini-2" in model_name:
So we are gonna hard code this for now?
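If it does stay hard-coded for now, pulling the known families into one constant would at least make the list easy to update; purely a sketch:

```python
# Sketch: centralize the hard-coded model families that support tool calls.
_TOOL_CALLING_MODEL_PREFIXES: tuple[str, ...] = ("gemini-1.5", "gemini-2")


def _supports_tool_calls(self) -> bool:
    model_name = self.model_name.lower()
    return any(prefix in model_name for prefix in _TOOL_CALLING_MODEL_PREFIXES)
```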
        return prompt

        text_parts: list[str] = []
        for part in prompt.parts:
So for now the solution is just to collapse all the message content into one prompt string without roles, yeah? We need to figure out how to handle this when we move to a new prompt abstraction.
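For reference, a role-preserving version might build `Content` objects instead of one flat string. This sketch assumes the SDK's `google.genai.types` API and that each prompt part carries a role and text, neither of which is guaranteed by the current abstraction:

```python
# Sketch: keep roles by mapping prompt parts onto google-genai Content objects.
from google.genai import types


def _to_contents(prompt) -> list[types.Content]:
    return [
        types.Content(role=part.role, parts=[types.Part.from_text(text=part.text)])
        for part in prompt.parts
    ]
```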
Adds LLM adapters for calling LLMs via the Anthropic SDK and the google-genai SDK.
Note
Adds Anthropic and Google GenAI LLM adapters (sync/async text and object generation) with client factories and updates registries to use package metadata for dependency checks; fixes LangChain dependency names.
- anthropic: `AnthropicAdapter` with client identification, sync/async `generate_text`, and object generation via tool-calling; schema validation and default model; rate limit errors via SDK. `create_anthropic_client` with `AnthropicClientWrapper`.
- google-genai: `GoogleGenAIAdapter` with client identification, sync/async `generate_text`, object generation via structured output or tool-calling; schema validation and custom rate-limit error handling. `create_google_genai_client`.
- Registries (`phoenix/evals/llm/registries.py`): dependency checks via `importlib.metadata.version`; handle `PackageNotFoundError`; dependency coloring updated accordingly; docstring clarifies pip package names.
- Exports (`adapters/__init__.py`): `AnthropicAdapter` and `GoogleGenAIAdapter`.
- Dependency name fix: `langchain-openai` and `langchain-anthropic`.

Written by Cursor Bugbot for commit 4e563c2. This will update automatically on new commits. Configure here.