
[Firebase AI] Add workaround for invalid SafetyRatings in response #14817


Merged: 5 commits into main, May 5, 2025

Conversation


@andrewheard andrewheard commented May 5, 2025

Added handling for empty SafetyRating values in responses. This can occur during image generation with gemini-2.0-flash-exp, where the backend (Vertex AI only) returns additional empty SafetyRating objects in the safetyRatings array (i.e., {}, {}, {}, {}). See FirebaseExtended/vertexai-sdk-test-data#37 for more details.
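A simplified, hypothetical sketch of the decoding side of this workaround (the real SDK types and field names differ): each field of SafetyRating decodes as optional, falling back to an assumed unspecified case when the backend sends an empty {} object instead of throwing a decoding error.

```swift
import Foundation

// Hypothetical, simplified model of the workaround: decode each field as
// optional so an empty `{}` rating decodes as "unspecified" rather than failing.
struct SafetyRating: Decodable {
  enum HarmCategory: String, Decodable {
    case unspecified = "HARM_CATEGORY_UNSPECIFIED"
    case harassment = "HARM_CATEGORY_HARASSMENT"
  }

  let category: HarmCategory
  let probability: String

  private enum CodingKeys: String, CodingKey { case category, probability }

  init(from decoder: Decoder) throws {
    let container = try decoder.container(keyedBy: CodingKeys.self)
    // Missing keys fall back to "unspecified" instead of throwing.
    category = try container.decodeIfPresent(HarmCategory.self, forKey: .category)
      ?? .unspecified
    probability = try container.decodeIfPresent(String.self, forKey: .probability)
      ?? "UNSPECIFIED"
  }
}

// A response fragment like the one described in the PR: one valid rating
// followed by empty `{}` objects from the backend.
let json = #"[{"category": "HARM_CATEGORY_HARASSMENT", "probability": "NEGLIGIBLE"}, {}, {}]"#
let ratings = try! JSONDecoder().decode([SafetyRating].self, from: Data(json.utf8))

// The empty objects decode as fully unspecified ratings and can be dropped.
let valid = ratings.filter { $0.category != .unspecified }
print(valid.count) // prints 1
```

Decoding leniently and filtering afterwards keeps the public API unchanged while tolerating the malformed backend payload.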

@andrewheard andrewheard added this to the 11.13.0 - M164 milestone May 5, 2025

Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e., a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

  • Code Review (/gemini review): performs a code review of the pull request in its current state.
  • Pull Request Summary (/gemini summary): provides a summary of the pull request in its current state.
  • Comment (@gemini-code-assist): responds when explicitly tagged, both in issue comments and review comments.
  • Help (/gemini help): displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist is currently in preview and may make mistakes. Please report any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.


@gemini-code-assist bot left a comment


Code Review

This pull request addresses a bug in the backend that causes invalid SafetyRating values to be returned. The changes include adding an unspecified case to the HarmCategory, HarmProbability, and HarmSeverity enums, updating the SafetyRating initializer to handle missing category and probability values, and filtering out invalid SafetyRating values in the Candidate struct. Additionally, integration tests have been re-enabled for Vertex AI configurations, and new unit tests have been added to verify that invalid safety ratings are ignored. Overall, this is a well-structured solution to the problem.

Summary of Findings

  • Handling of unspecified values: The introduction of unspecified cases for HarmCategory, HarmProbability, and HarmSeverity and the subsequent handling in the SafetyRating initializer is a good approach to gracefully handle potentially missing or invalid data from the backend.
  • Filtering of safety ratings: The filtering of safety ratings within the Candidate struct ensures that invalid ratings are not propagated, preventing potential issues in downstream processing.
  • Test coverage: The re-enabled integration tests and the new unit tests provide good coverage for the changes, ensuring that the workaround functions as expected and does not introduce regressions.
  • Documentation: The comments added to explain the workaround and the purpose of the unspecified cases enhance the code's readability and maintainability.
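The filtering step described above can be sketched as follows. This is a hypothetical, self-contained illustration (the real Candidate and SafetyRating types in the SDK have more fields): a candidate's initializer discards any rating whose fields all decoded as unspecified, i.e., the ratings that originated from empty {} objects.

```swift
// Simplified stand-ins for the SDK's enums, each with the added
// "unspecified" case the PR introduces.
enum HarmCategory { case unspecified, harassment, hateSpeech }
enum HarmProbability { case unspecified, negligible, low, medium, high }

struct SafetyRating {
  let category: HarmCategory
  let probability: HarmProbability
}

struct Candidate {
  let safetyRatings: [SafetyRating]

  init(safetyRatings: [SafetyRating]) {
    // Drop ratings where every field is unspecified; these correspond to
    // the empty `{}` objects returned by the backend.
    self.safetyRatings = safetyRatings.filter {
      $0.category != .unspecified || $0.probability != .unspecified
    }
  }
}

let decoded = [
  SafetyRating(category: .harassment, probability: .negligible),
  SafetyRating(category: .unspecified, probability: .unspecified), // from `{}`
  SafetyRating(category: .unspecified, probability: .unspecified), // from `{}`
]
let candidate = Candidate(safetyRatings: decoded)
print(candidate.safetyRatings.count) // prints 1
```

Filtering in the Candidate initializer means callers never observe the placeholder ratings, so downstream code needs no changes.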

Merge Readiness

The pull request appears to be in good shape for merging. The changes address the identified bug effectively, include adequate testing, and are well-documented. I am unable to approve the pull request, and recommend that others review and approve this code before merging.

@andrewheard andrewheard marked this pull request as ready for review May 5, 2025 23:03
@andrewheard andrewheard requested a review from paulb777 May 5, 2025 23:03

@paulb777 paulb777 left a comment


Thanks!

@andrewheard andrewheard merged commit 08d0f44 into main May 5, 2025
48 checks passed
@andrewheard andrewheard deleted the ah/firebaseai-safetyrating-bug branch May 5, 2025 23:38