⚽
Attention is all we need
Highlights
- Pro
Pinned
- mit-han-lab/streaming-llm: [ICLR 2024] Efficient Streaming Language Models with Attention Sinks
- mit-han-lab/smoothquant: [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models
- mit-han-lab/duo-attention: DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads
- mit-han-lab/fastcomposer: [IJCV] FastComposer: Tuning-Free Multi-Subject Image Generation with Localized Attention
- mit-han-lab/offsite-tuning: Offsite-Tuning: Transfer Learning without Full Model