Merge pull request #604 from OptimalScale/yaoguany-patch-1
Update flash_attn2.md
shizhediao authored Aug 8, 2023
2 parents 4e385b5 + 376c062 commit 2482980
Showing 1 changed file with 2 additions and 2 deletions: readme/flash_attn2.md
@@ -1,4 +1,4 @@
-# Flash Attention 2.0
+# FlashAttention-2
We're thrilled to announce that LMFlow now supports training and inference using **FlashAttention-2**! This cutting-edge feature will take your language modeling to the next level. To use it, simply add ``` --use_flash_attention True ``` to the corresponding bash script.
Here is an example of how to use it:
```
@@ -15,4 +15,4 @@ deepspeed --master_port=11000 \
--use_flash_attention True
```

-Upgrade to LMFlow now and experience the future of language modeling!
+Upgrade to LMFlow now and experience the future of language modeling!
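
For context, the flag documented above slots into an ordinary LMFlow `deepspeed` launch script. A minimal sketch follows; the `--master_port` value and `--use_flash_attention True` come from the diff, while the script name, model, dataset, and config paths are placeholders, not taken from this commit:

```shell
#!/bin/bash
# Hypothetical LMFlow fine-tuning launch script.
# Only --master_port and --use_flash_attention are from the commit;
# every path and model name below is a placeholder -- substitute your own.
deepspeed --master_port=11000 \
  examples/finetune.py \
  --model_name_or_path facebook/galactica-1.3b \
  --dataset_path data/alpaca/train \
  --output_dir output_models/finetune \
  --deepspeed configs/ds_config_zero3.json \
  --use_flash_attention True
```

Appending `--use_flash_attention True` to an existing script is the only change the announcement asks for; the rest of the invocation is whatever your training setup already uses.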
