triton-prefix-lm-flash-attn

FlashAttention for prefix language modeling written in Triton. This kernel is implemented identically to causal FlashAttention but applies bidirectional attention in the prefix region of the sequence as illustrated below.

Figure: prefix-LM attention pattern. Image source: PrefixLM4MT (ICML 2022), Biao Zhang et al.
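
The Triton kernel itself is not reproduced here, but as a point of reference, the masking rule described above can be sketched in plain (non-fused) PyTorch: a position j is visible from position i if j <= i (causal) or j < prefix_len (bidirectional prefix). The function names below are illustrative only, not the repo's API.

```python
import torch

def prefix_lm_mask(seq_len: int, prefix_len: int) -> torch.Tensor:
    # True = attention allowed. Causal lower triangle, plus a fully
    # visible block covering the first `prefix_len` key positions.
    idx = torch.arange(seq_len)
    causal = idx[None, :] <= idx[:, None]   # j <= i
    prefix = idx[None, :] < prefix_len      # j in the bidirectional prefix
    return causal | prefix

def reference_prefix_lm_attention(q, k, v, prefix_len):
    # q, k, v: (batch, heads, seq_len, head_dim). Non-fused reference,
    # useful only for checking a fused kernel's output on small inputs.
    seq_len, head_dim = q.shape[-2], q.shape[-1]
    scores = q @ k.transpose(-2, -1) / head_dim ** 0.5
    mask = prefix_lm_mask(seq_len, prefix_len).to(q.device)
    scores = scores.masked_fill(~mask, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v
```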

This has only been tested on sequence lengths and prefix lengths that are divisible by 64. The prefix length and full sequence length are assumed to be constant across the batch.
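
For example, inputs are expected to satisfy something like the following (a sketch of the constraints above; the variable names are illustrative):

```python
# One sequence length and one prefix length shared by the whole batch;
# per-example prefix lengths are not supported.
seq_len, prefix_len = 1024, 256

assert seq_len % 64 == 0, "sequence length must be divisible by 64"
assert prefix_len % 64 == 0, "prefix length must be divisible by 64"
```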

Requires:

  • triton-nightly

Hardware tested:

  • H100
  • A100
  • A10
