Attention is all we need


Hi there 👋

Pinned

  1. mit-han-lab/streaming-llm

    [ICLR 2024] Efficient Streaming Language Models with Attention Sinks

    Python · 6.6k stars · 363 forks

  2. mit-han-lab/smoothquant

    [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models

    Python · 1.2k stars · 144 forks

  3. mit-han-lab/duo-attention

    DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads

    Python · 330 stars · 14 forks

  4. mit-han-lab/fastcomposer

    [IJCV] FastComposer: Tuning-Free Multi-Subject Image Generation with Localized Attention

    Python · 658 stars · 37 forks

  5. mit-han-lab/offsite-tuning

    Offsite-Tuning: Transfer Learning without Full Model

    Python · 367 stars · 38 forks

  6. torch-int

    Integer operators on GPUs for PyTorch.

    Python · 180 stars · 50 forks