Pinned
-
TransformerEngine (Public, forked from NVIDIA/TransformerEngine)
A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper and Ada GPUs, to provide better performance with lower memory utilization.
Python
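
A minimal sketch of how the upstream library is typically driven from PyTorch; the layer sizes, dtypes, and recipe settings below are illustrative assumptions, not taken from this fork.

```python
import torch
import transformer_engine.pytorch as te
from transformer_engine.common import recipe

# Delayed-scaling FP8 recipe; HYBRID uses E4M3 for forward, E5M2 for backward.
fp8_recipe = recipe.DelayedScaling(margin=0, fp8_format=recipe.Format.HYBRID)

# Assumed layer size for illustration only.
layer = te.Linear(1024, 1024, bias=True, params_dtype=torch.bfloat16).cuda()
x = torch.randn(16, 1024, device="cuda", dtype=torch.bfloat16)

# GEMMs inside this context run in FP8 on supported (Hopper/Ada) GPUs.
with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    y = layer(x)
```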
-
flash-attention (Public, forked from Dao-AILab/flash-attention)
Fast and memory-efficient exact attention
Python
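
A minimal usage sketch of the upstream flash_attn Python API; the tensor shapes and dtypes are assumptions for illustration.

```python
import torch
from flash_attn import flash_attn_func

# (batch, seqlen, num_heads, head_dim); fp16/bf16 tensors on a CUDA device.
q = torch.randn(2, 1024, 16, 64, device="cuda", dtype=torch.float16)
k = torch.randn(2, 1024, 16, 64, device="cuda", dtype=torch.float16)
v = torch.randn(2, 1024, 16, 64, device="cuda", dtype=torch.float16)

# Exact attention computed tile-by-tile without materializing the full
# (seqlen x seqlen) score matrix; causal=True applies a lower-triangular mask.
out = flash_attn_func(q, k, v, causal=True)
```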
-
flashinfer (Public, forked from flashinfer-ai/flashinfer)
FlashInfer: Kernel Library for LLM Serving
Cuda
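
A hedged sketch of single-request decode attention using the upstream flashinfer Python bindings; the head counts, head dimension, and KV length are assumptions.

```python
import torch
import flashinfer

num_qo_heads, num_kv_heads, head_dim, kv_len = 32, 8, 128, 4096

# One decode step: a single query token attends to the cached KV entries.
q = torch.randn(num_qo_heads, head_dim, device="cuda", dtype=torch.float16)
k_cache = torch.randn(kv_len, num_kv_heads, head_dim, device="cuda", dtype=torch.float16)
v_cache = torch.randn(kv_len, num_kv_heads, head_dim, device="cuda", dtype=torch.float16)

# Grouped-query attention is handled when num_qo_heads is a multiple of num_kv_heads.
out = flashinfer.single_decode_with_kv_cache(q, k_cache, v_cache)
```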