
FlashMLA on Iluvatar CoreX

This is an implementation of FlashMLA based on the Iluvatar CoreX Toolkit and Iluvatar CoreX chips.

Quick start

Install

bash clean_flashmla.sh
bash build_flashmla.sh
bash install_flashmla.sh
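
After installation, a quick import check can confirm the extension built and loads correctly. This is a hypothetical smoke test, not a script shipped with the repo; it only assumes the flash_mla package name used in the Usage section below:

python -c "from flash_mla import get_mla_metadata, flash_mla_with_kvcache; print('FlashMLA import OK')"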

Benchmark

python tests/test_flash_mla.py

Usage

from flash_mla import get_mla_metadata, flash_mla_with_kvcache

tile_scheduler_metadata, num_splits = get_mla_metadata(cache_seqlens, s_q * h_q // h_kv, h_kv)

for i in range(num_layers):
    ...
    o_i, lse_i = flash_mla_with_kvcache(
        q_i, kvcache_i, block_table, cache_seqlens, dv,
        tile_scheduler_metadata, num_splits, causal=True,
    )
    ...
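
For context, below is a minimal self-contained sketch of the snippet above with the elided setup filled in by placeholder tensors. The shapes, dtypes, and sizes (576/512 head dims, 64-token KV pages, bfloat16) follow upstream deepseek-ai/FlashMLA conventions; whether the Iluvatar CoreX port supports exactly these sizes is an assumption, so treat the numbers as illustrative rather than required.

import torch
from flash_mla import get_mla_metadata, flash_mla_with_kvcache

b, s_q = 4, 1                    # batch size, query tokens per request (decoding)
h_q, h_kv = 128, 1               # query heads, shared KV heads (MLA)
d, dv = 576, 512                 # QK head dim and V head dim, assumed from upstream FlashMLA
block_size, max_blocks = 64, 32  # paged KV cache: 64-token pages, 32 pages per sequence
num_layers = 2

cache_seqlens = torch.full((b,), 1024, dtype=torch.int32, device="cuda")
tile_scheduler_metadata, num_splits = get_mla_metadata(
    cache_seqlens, s_q * h_q // h_kv, h_kv
)

# One page table per request; pages are assigned contiguously here for simplicity.
block_table = torch.arange(
    b * max_blocks, dtype=torch.int32, device="cuda"
).view(b, max_blocks)

for i in range(num_layers):
    # Random placeholders standing in for real per-layer queries and KV cache.
    q_i = torch.randn(b, s_q, h_q, d, dtype=torch.bfloat16, device="cuda")
    kvcache_i = torch.randn(
        b * max_blocks, block_size, h_kv, d, dtype=torch.bfloat16, device="cuda"
    )
    o_i, lse_i = flash_mla_with_kvcache(
        q_i, kvcache_i, block_table, cache_seqlens, dv,
        tile_scheduler_metadata, num_splits, causal=True,
    )
    print(o_i.shape)  # expected: (b, s_q, h_q, dv)

Note that get_mla_metadata is called once per forward pass and its outputs are reused across all layers, since the scheduling depends only on the sequence lengths and head counts, not on layer weights.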

Requirements

  • Iluvatar CoreX GPUs
  • Iluvatar CoreX Toolkit
  • PyTorch 2.0 and above

Acknowledgement

FlashMLA is inspired by the FlashAttention 2 & 3 and CUTLASS projects.

Citation

@misc{flashmla2025,
      title = {FlashMLA: Efficient MLA decoding kernels},
      author = {Jiashi Li},
      year = {2025},
      publisher = {GitHub},
      howpublished = {\url{https://github.com/deepseek-ai/FlashMLA}},
}
