feature/ introduce new model - gemini #31
base: main
Conversation
Added integration tests and unit tests for the Gemini model. (commits 7532e4a to e8dc892)
Thank you so much for the contribution. We are going to start some testing on our end.
Hey team, thanks! Please let me know if I can help with anything to push that forward. @pgrayy
waiting for this to be merged 🤓🥳
Sorry for the delayed response here; I've now had a chance to take a look at this pull request. While testing locally, it looks like the original package https://pypi.org/project/google-generativeai/ is now deprecated in favor of https://pypi.org/project/google-genai/. Can you update the dependency introduced in this PR and rebase it onto the latest SDK changes?
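For anyone picking this up, the migration is roughly as follows. This is a minimal sketch assuming the new google-genai client; the model name is illustrative:

```python
# Deprecated package (what this PR currently uses):
#   import google.generativeai as genai
#   genai.configure(api_key="...")
#   model = genai.GenerativeModel("gemini-1.5-pro")
#   response = model.generate_content("Hello")

# Replacement package (google-genai):
from google import genai

# The client also picks up GOOGLE_API_KEY from the environment if api_key is omitted.
client = genai.Client(api_key="...")
response = client.models.generate_content(
    model="gemini-1.5-pro",
    contents="Hello",
)
print(response.text)
```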
class GeminiModel(Model):
    """Google Gemini model provider implementation."""

    EVENT_TYPES = {
These are unused, please remove.
"message_stop", | ||
} | ||
|
||
OVERFLOW_MESSAGES = { |
Can you share where these are documented?
        client_args = client_args or {}
        genai.client.configure(**client_args)
        self.model = genai.GenerativeModel(self.config["model_id"])
Should this genai.GenerativeModel be passed in instead of the model_id? Does it make sense for this to be configurable?
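One possible shape for that, sketched against the deprecated google-generativeai API the PR currently targets; the model parameter and the simplified class body are illustrative:

```python
import google.generativeai as genai  # deprecated package the PR currently uses


class GeminiModel:
    """Sketch: accept an optional pre-built GenerativeModel, not just a model_id."""

    def __init__(self, client_args=None, model=None, **model_config):
        self.config = dict(model_config)
        genai.configure(**(client_args or {}))
        # Injection keeps the model mockable in tests and lets callers reuse a
        # configured instance; the fallback preserves the current behavior.
        self.model = model or genai.GenerativeModel(self.config["model_id"])
```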
""" | ||
return self.config | ||
|
||
def _format_request_message_content(self, content: ContentBlock) -> dict[str, Any]: |
ContentBlock has more types, like "ToolUse", "ToolResult", "ReasoningContent", and a few other variants. Should this be updated to handle them?
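A sketch of what more exhaustive handling could look like. The strands ContentBlock keys are taken from this repo; the Gemini-side part shapes (function_call, function_response) are assumptions to verify against the google-genai docs:

```python
from typing import Any


def format_request_message_content(content: dict[str, Any]) -> dict[str, Any]:
    """Map a strands ContentBlock to a Gemini request part (sketch)."""
    if "text" in content:
        return {"text": content["text"]}
    if "toolUse" in content:
        tool_use = content["toolUse"]
        return {"function_call": {"name": tool_use["name"], "args": tool_use["input"]}}
    if "toolResult" in content:
        tool_result = content["toolResult"]
        # Assumed shape: Gemini function responses carry a name plus a response dict.
        return {
            "function_response": {
                "name": tool_result["toolUseId"],
                "response": {"content": tool_result["content"]},
            }
        }
    # reasoningContent, image, document, etc. would need their own branches.
    raise TypeError(f"unsupported content block: {list(content)}")
```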
        }

    @override
    def format_chunk(self, event: dict[str, Any]) -> StreamEvent:
In case Gemini returns a streamed response following OpenAI's response-stream format, you can extend our OpenAI interface to simplify the logic in this implementation:

class OpenAIModel(Model, abc.ABC):
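Gemini does publish an OpenAI-compatible endpoint, so a subclass along these lines might be all that is needed. The base URL is Google's documented OpenAI-compatibility endpoint; the import path and constructor shape mirror this repo's OpenAIModel but are assumptions:

```python
from strands.models.openai import OpenAIModel  # assumed import path


class GeminiModel(OpenAIModel):
    """Sketch: drive Gemini through its OpenAI-compatible endpoint."""

    def __init__(self, api_key: str, **model_config):
        super().__init__(
            client_args={
                "api_key": api_key,
                # Google's documented OpenAI-compatibility base URL.
                "base_url": "https://generativelanguage.googleapis.com/v1beta/openai/",
            },
            **model_config,
        )
```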
Description
Introduce support for a new model provider: Gemini.
Related Issues
[Link to related issues using #issue-number format]
Documentation PR
[Link to related associated PR in the agent-docs repo]
Type of Change
Testing
hatch fmt --linter
hatch fmt --formatter
hatch test --all
Checklist
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.