Does vLLM support flash attention? #425

Answered by zhyncs
zhaoyang-star asked this question in Q&A

Flash attention is an important optimization, but I found no flash attention implementation in the vLLM codebase. Does vLLM support flash attention?

vLLM uses xformers' memory_efficient_attention_forward, so it makes indirect use of flash attention.
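
For context, here is a minimal sketch of how xformers' memory_efficient_attention_forward is typically called. This is an illustration, not vLLM's actual code; the tensor shapes and values are made up. xformers dispatches internally to the fastest available kernel for the given inputs and hardware, which on supported GPUs can be a FlashAttention kernel.

```python
import torch
import xformers.ops as xops

# Hypothetical shapes: (batch, sequence length, num heads, head dim).
q = torch.randn(1, 128, 8, 64, device="cuda", dtype=torch.float16)
k = torch.randn(1, 128, 8, 64, device="cuda", dtype=torch.float16)
v = torch.randn(1, 128, 8, 64, device="cuda", dtype=torch.float16)

# xformers selects an efficient attention backend (e.g. a FlashAttention
# kernel) based on the inputs and hardware; no explicit flash attention
# call is needed, which is why none appears in the vLLM codebase.
out = xops.memory_efficient_attention_forward(q, k, v)
print(out.shape)  # torch.Size([1, 128, 8, 64])
```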

Answer selected by zhaoyang-star