Conversation

@akaitsuki-ii (Contributor)

No description provided.

@gemini-code-assist (bot) left a comment

Summary of Changes

Hello @akaitsuki-ii, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request refines the attention mechanism's selection logic to correctly handle FlashAttention v3's limitations regarding attention masks. It ensures that FlashAttention v3 is only engaged when compatible, providing robust fallback to other implementations and clearer error/warning messages when an attention mask is present.

Highlights

  • FlashAttention v3 Compatibility: Modified the attention function to ensure FlashAttention v3 (FA3) is only utilized when no attention mask is present, as FA3 does not support attention masks.
  • Improved Fallback Messaging: Enhanced the warning messages to clearly indicate when FlashAttention v3 is not used, either due to an incompatible head dimension or the presence of an attention mask.
  • Strict FlashAttention v3 Usage: Introduced a RuntimeError that is raised if flash_attn_3 is explicitly specified as the attention implementation but an attention mask is provided, preventing unsupported configurations (see the sketch after this list).
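
A minimal sketch of this selection logic, for orientation only: the function name select_attn_impl, its signature, and the concrete FA3_MAX_HEADDIM value below are assumptions for illustration, not the PR's actual code; only the guard conditions, the warning texts, and the RuntimeError mirror what the diff describes.

import logging

import torch

logger = logging.getLogger(__name__)

FA3_MAX_HEADDIM = 256  # assumed cap; the real constant is defined in the module under review


def select_attn_impl(q: torch.Tensor, attn_mask=None, attn_impl=None) -> str:
    # FA3 is usable only when the head dimension is small enough AND no mask is given.
    flash_attn3_compatible = q.shape[-1] <= FA3_MAX_HEADDIM
    if attn_impl == "flash_attn_3" and attn_mask is not None:
        # Explicitly requesting FA3 together with a mask is an unsupported configuration.
        raise RuntimeError("flash_attn_3 does not support attention mask")
    if flash_attn3_compatible and attn_mask is None:
        return "flash_attn_3"
    if not flash_attn3_compatible:
        logger.warning(
            f"head_dim={q.shape[-1]}, but flash_attn_3 only supports head dimension "
            f"at most {FA3_MAX_HEADDIM}, will use fallback attention implementation"
        )
    else:
        logger.warning(
            "flash_attn_3 does not support attention mask, "
            "will use fallback attention implementation"
        )
    return "fallback"


# Example: head_dim=64 with no mask selects FA3; adding a mask falls back with a warning.
q = torch.randn(1, 8, 128, 64)  # (batch, heads, seq_len, head_dim)
assert select_attn_impl(q) == "flash_attn_3"
assert select_attn_impl(q, attn_mask=torch.ones(128, 128, dtype=torch.bool)) == "fallback"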

@gemini-code-assist (bot) left a comment

Code Review

This pull request correctly adds checks to prevent using flash_attn_3 with an attention mask in the attention function. This is a good fix. However, I've noticed that the long_context_attention function in the same file also has logic for flash_attn_3 but doesn't include these new checks. It accepts an attn_mask parameter which seems to be ignored, and with your changes, this inconsistency becomes more prominent. When flash_attn_3 is used in long_context_attention with a mask, the mask will be silently ignored, which can lead to incorrect results. I strongly recommend applying similar checks to the long_context_attention function to ensure correctness and consistency.
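
For concreteness, here is a minimal sketch of the guard being requested, assuming long_context_attention accepts attn_mask and attn_impl parameters analogous to attention() (the signature and the surrounding dispatch are assumptions, not the file's actual code):

import logging

logger = logging.getLogger(__name__)


def long_context_attention(q, k, v, attn_mask=None, attn_impl=None):
    # Mirror the checks added to attention(): fail loudly when FA3 is explicitly
    # requested with a mask, and warn before falling back otherwise, instead of
    # silently ignoring attn_mask.
    if attn_impl == "flash_attn_3" and attn_mask is not None:
        raise RuntimeError("flash_attn_3 does not support attention mask")
    use_fa3 = attn_mask is None
    if not use_fa3:
        logger.warning(
            "flash_attn_3 does not support attention mask, "
            "will use fallback attention implementation"
        )
    # ... existing dispatch continues here, selecting FA3 only when use_fa3 is True ...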

Comment on lines 141 to 146
msg = (
    f"head_dim={q.shape[-1]}, but flash_attn_3 only supports head dimension at most {FA3_MAX_HEADDIM}, will use fallback attention implementation"
    if not flash_attn3_compatible
    else "flash_attn_3 does not support attention mask, will use fallback attention implementation"
)
logger.warning(msg)
@gemini-code-assist (bot) left a comment

Severity: medium

While the logic here is correct, using a multi-line ternary operator for constructing the warning message can be a bit dense and harm readability. A standard if-else block would make the logic more explicit and easier to follow.

Suggested change:

if not flash_attn3_compatible:
    logger.warning(
        f"head_dim={q.shape[-1]}, but flash_attn_3 only supports head dimension at most {FA3_MAX_HEADDIM}, will use fallback attention implementation"
    )
else:
    logger.warning("flash_attn_3 does not support attention mask, will use fallback attention implementation")

@weiyilwy (Contributor) left a comment

accept


@weiyilwy weiyilwy merged commit b74ba8d into main Sep 4, 2025
@weiyilwy weiyilwy deleted the fix/fa3 branch September 4, 2025 10:27
