[Firebase AI] Add workaround for invalid SafetyRatings in response #14817
Conversation
Code Review
This pull request addresses a bug in the backend that causes invalid `SafetyRating` values to be returned. The changes include adding an `unspecified` case to the `HarmCategory`, `HarmProbability`, and `HarmSeverity` enums, updating the `SafetyRating` initializer to handle missing category and probability values, and filtering out invalid `SafetyRating` values in the `Candidate` struct. Additionally, integration tests have been re-enabled for Vertex AI configurations, and new unit tests have been added to verify that invalid safety ratings are ignored. Overall, this is a well-structured solution to the problem.
Summary of Findings
- **Handling of unspecified values:** The introduction of `unspecified` cases for `HarmCategory`, `HarmProbability`, and `HarmSeverity`, and the subsequent handling in the `SafetyRating` initializer, is a good approach to gracefully handle potentially missing or invalid data from the backend.
- **Filtering of safety ratings:** The filtering of safety ratings within the `Candidate` struct ensures that invalid ratings are not propagated, preventing potential issues in downstream processing.
- **Test coverage:** The re-enabled integration tests and the new unit tests provide good coverage for the changes, ensuring that the workaround functions as expected and does not introduce regressions.
- **Documentation:** The comments added to explain the workaround and the purpose of the `unspecified` cases enhance the code's readability and maintainability.
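As a rough illustration of the approach described in these findings, here is a minimal Swift sketch. The type names follow the review, but the raw values, enum cases, and initializer shapes are assumptions for illustration, not the SDK's actual source (`HarmSeverity` is analogous and omitted for brevity):

```swift
import Foundation

// Hypothetical sketch: an `unspecified` case lets malformed backend data
// decode without throwing, instead of failing the whole response.
enum HarmCategory: String, Decodable {
  case unspecified = "HARM_CATEGORY_UNSPECIFIED"
  case harassment = "HARM_CATEGORY_HARASSMENT"
  case hateSpeech = "HARM_CATEGORY_HATE_SPEECH"
}

enum HarmProbability: String, Decodable {
  case unspecified = "HARM_PROBABILITY_UNSPECIFIED"
  case negligible = "NEGLIGIBLE"
  case low = "LOW"
}

struct SafetyRating: Decodable {
  let category: HarmCategory
  let probability: HarmProbability

  enum CodingKeys: String, CodingKey {
    case category, probability
  }

  init(from decoder: Decoder) throws {
    let container = try decoder.container(keyedBy: CodingKeys.self)
    // Missing fields (as in an empty `{}` object) fall back to
    // `.unspecified` instead of throwing a decoding error.
    category = try container.decodeIfPresent(HarmCategory.self, forKey: .category) ?? .unspecified
    probability = try container.decodeIfPresent(HarmProbability.self, forKey: .probability) ?? .unspecified
  }
}

struct Candidate: Decodable {
  let safetyRatings: [SafetyRating]

  enum CodingKeys: String, CodingKey {
    case safetyRatings
  }

  init(from decoder: Decoder) throws {
    let container = try decoder.container(keyedBy: CodingKeys.self)
    let ratings = try container.decodeIfPresent([SafetyRating].self, forKey: .safetyRatings) ?? []
    // Drop ratings with an unspecified category so the invalid empty
    // objects from the backend are not propagated downstream.
    safetyRatings = ratings.filter { $0.category != .unspecified }
  }
}
```

With this shape, a response containing empty rating objects decodes cleanly and the invalid entries are silently discarded rather than surfacing a decoding failure to callers.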
Merge Readiness
The pull request appears to be in good shape for merging. The changes address the identified bug effectively, include adequate testing, and are well-documented. I am unable to approve pull requests, so I recommend that others review and approve this code before merging.
Thanks!
Added handling for empty `SafetyRating` values in responses. This may occur during image generation using `gemini-2.0-flash-exp`, where the backend (Vertex AI only) returns additional empty `SafetyRating` objects in the `safetyRatings` array (i.e., `{}, {}, {}, {}`). See FirebaseExtended/vertexai-sdk-test-data#37 for more details.
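To make the malformed payload concrete, the following self-contained Swift snippet reproduces the response shape described above (extra empty `{}` objects in `safetyRatings`) and filters out entries that lack a `category` key; the JSON values are illustrative, not a captured backend response:

```swift
import Foundation

// Illustrative payload mimicking the Vertex AI bug: one valid rating
// followed by empty objects.
let json = """
{"safetyRatings": [
  {"category": "HARM_CATEGORY_HARASSMENT", "probability": "NEGLIGIBLE"},
  {}, {}, {}, {}
]}
"""

let data = json.data(using: .utf8)!
let object = try! JSONSerialization.jsonObject(with: data) as! [String: Any]
let ratings = object["safetyRatings"] as! [[String: Any]]

// Keep only ratings that actually carry a category, mirroring the
// workaround's filtering step.
let valid = ratings.filter { $0["category"] != nil }
print(valid.count) // 1
```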