Stars
[ICLR'25] Official code for the paper 'MLLMs Know Where to Look: Training-free Perception of Small Visual Details with Multimodal LLMs'
Code for "In-Context Former: Lightning-fast Compressing Context for Large Language Model" (Findings of EMNLP 2024)
Code for "Precise Localization of Memories: A Fine-grained Neuron-level Knowledge Editing Technique for LLMs" (ICLR 2025)
Efficient Dictionary Learning with Switch Sparse Autoencoders (SAEs)
Code for "Finding and Editing Multi-Modal Neurons in Pre-Trained Transformers" (Findings of ACL 2024)
mPLUG-Owl: The Powerful Multi-modal Large Language Model Family
[ACL 2024] An Easy-to-use Knowledge Editing Framework for LLMs.
USTC iCourse - a popular course rating platform for USTC students
Source code for the EMNLP 2022 paper "Finding Skill Neurons in Pre-trained Transformers via Prompt Tuning".