Commit

Update llama_attn_replace_sft.py
gianlucamacri authored Nov 2, 2023
1 parent 363c195 commit 0eb1a2a
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion llama_attn_replace_sft.py
@@ -36,7 +36,7 @@ def forward_flashattn(
         attention_mask: [bsz, q_len]
     """
     if not self.training:
-        warnings.warn("This function should be used just for training as it may exhibit reduced inference performances. For inference, please use forward_flashattn_inference.")
+        warnings.warn("This function should be used just for training as it may exhibit reduced inference performance. For inference, please use forward_flashattn_inference.")
 
     if output_attentions:
         warnings.warn(
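
The hunk only changes the warning string ("performances" to "performance"); the surrounding guard follows the common PyTorch pattern of warning when a training-only forward path runs on a module in eval mode. A minimal runnable sketch of that pattern, assuming PyTorch is installed; the TrainingOnlyAttention class below is illustrative and not part of the repository:

import warnings

import torch
import torch.nn as nn


class TrainingOnlyAttention(nn.Module):
    """Illustrative stand-in for an attention module whose forward pass
    targets training (e.g. a FlashAttention training kernel)."""

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Mirror the patched check: warn whenever this training-only
        # path is invoked while the module is in eval mode.
        if not self.training:
            warnings.warn(
                "This function should be used just for training as it may "
                "exhibit reduced inference performance. For inference, "
                "please use forward_flashattn_inference."
            )
        return hidden_states  # real code would compute attention here


module = TrainingOnlyAttention()
module.eval()                      # inference mode: warning fires
_ = module(torch.zeros(1, 4, 8))
module.train()                     # training mode: no warning
_ = module(torch.zeros(1, 4, 8))

Because nn.Module.train() and .eval() simply toggle self.training, the check adds no overhead to the computation itself and only surfaces likely misuse.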
