Replace FlashAttention with xformers #70

Merged
WoosukKwon merged 12 commits into main on May 5, 2023

Conversation

WoosukKwon (Collaborator)

This PR replaces FlashAttention with xformers.

Pros:

  • Richer features and broader compatibility: xformers supports attention bias, FP32, head size 256, and older GPUs (such as V100), while FlashAttention does not.
  • xformers provides pre-compiled Python wheels, whereas FlashAttention compiles its CUDA code from source during installation.
  • Future-proof, as the repository is maintained by many developers from Meta.

Cons:

  • xformers can be slower than FlashAttention for small inputs, because it incurs higher CPU overhead.
  • xformers internally allocates a new tensor for the attention output. In our case, this adds an extra copy, because we concatenate the outputs of the two attention ops (see the sketch below).
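
For reference, a minimal sketch of the kind of xformers call involved, not the exact vLLM integration; the shapes here are arbitrary and a recent xformers release is assumed:

```python
import torch
import xformers.ops as xops

# Hypothetical shapes: batch, sequence length, heads, head size.
B, S, H, D = 2, 128, 8, 64
q = torch.randn(B, S, H, D, device="cuda", dtype=torch.float16)
k = torch.randn(B, S, H, D, device="cuda", dtype=torch.float16)
v = torch.randn(B, S, H, D, device="cuda", dtype=torch.float16)

# Causal attention expressed via an attention-bias object, one of the
# features noted above that FlashAttention lacked at the time. Note that
# xformers allocates the output tensor internally, which is the source of
# the extra-copy overhead mentioned in the cons.
out = xops.memory_efficient_attention(
    q, k, v, attn_bias=xops.LowerTriangularMask())
print(out.shape)  # torch.Size([2, 128, 8, 64])
```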

WoosukKwon requested a review from zhuohan123 on May 4, 2023 at 10:31
-pip install sentencepiece  # Required for LlamaTokenizer.
-pip install ninja  # To parallelize the compilation of flash-attn.
-pip install flash-attn  # This may take up to 10 mins.
+pip install ninja psutil numpy sentencepiece ray torch transformers xformers
WoosukKwon (Collaborator, Author)

TODO (in the next PR): specify the exact dependencies in setup.py.
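
A hypothetical sketch of what that follow-up could look like, pinning the packages listed in the new README line via install_requires; the project name and layout are assumptions:

```python
# setup.py (sketch, not the actual follow-up PR)
from setuptools import setup, find_packages

setup(
    name="cacheflow",  # assumed project name at the time of this PR
    packages=find_packages(),
    install_requires=[
        "ninja", "psutil", "numpy", "sentencepiece",
        "ray", "torch", "transformers", "xformers",
    ],
)
```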

zhisbug (Collaborator) commented May 4, 2023

Is the memory footprint the same as with FlashAttention?

zhisbug (Collaborator) commented May 5, 2023

I did a test myself and found the memory saving is almost the same.

WoosukKwon (Collaborator, Author)

It seems the memory usage is comparable to FlashAttention's. @zhuohan123 Please review the PR.

zhuohan123 (Member) left a comment

LGTM! Thanks!

@@ -213,7 +213,7 @@ def add_server_arguments(parser: argparse.ArgumentParser):
     parser.add_argument('--use-np-cache', action='store_true',
                         help='save a numpy copy of model weights for faster loading')
     parser.add_argument('--use-dummy-weights', action='store_true', help='use dummy values for model weights')
-    # NOTE(woosuk): FlashAttention does not support float32.
+    # TODO(woosuk): Support FP32 for debugging.
zhuohan123 (Member)

Does xformers support FP32?

WoosukKwon (Collaborator, Author)

Yes, it does. It is our own attention kernel that does not support FP32; more precisely, it currently does not support some block sizes when FP32 is used. I will fix this in the future.
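
A quick hedged check of that claim, running FP32 inputs through xformers' kernel; the shapes are arbitrary:

```python
import torch
import xformers.ops as xops

q = torch.randn(1, 16, 4, 64, device="cuda", dtype=torch.float32)
k, v = torch.randn_like(q), torch.randn_like(q)

# xformers dispatches FP32 inputs to a kernel that supports them,
# whereas FlashAttention only handles FP16/BF16.
out = xops.memory_efficient_attention(q, k, v)
print(out.dtype)  # torch.float32
```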

WoosukKwon mentioned this pull request on May 5, 2023
WoosukKwon merged commit c9d5b6d into main on May 5, 2023
WoosukKwon deleted the xformers branch on May 5, 2023 at 09:01
hongxiayang pushed a commit to hongxiayang/vllm that referenced this pull request Feb 13, 2024
yukavio pushed a commit to yukavio/vllm that referenced this pull request Jul 3, 2024
dllehr-amd pushed a commit to dllehr-amd/vllm that referenced this pull request Jul 22, 2024
wuhuikx pushed a commit to wuhuikx/vllm that referenced this pull request Mar 27, 2025
robertgshaw2-redhat added a commit to robertgshaw2-redhat/vllm that referenced this pull request May 6, 2025