Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
Agent S: an open agentic framework that uses computers like a human
Mobile-Agent: The Powerful Mobile Device Operation Assistant Family
Code and models for ICML 2024 paper, NExT-GPT: Any-to-Any Multimodal Large Language Model
[CVPR'25] Official Implementations for Paper - MagicQuill: An Intelligent Interactive Image Editing System
SpatialLM: Training Large Language Models for Structured Indoor Modeling
InternLM-XComposer2.5-OmniLive: A Comprehensive Multimodal System for Long-term Streaming Video and Audio Interactions
mPLUG-DocOwl: Modularized Multimodal Large Language Model for Document Understanding
Cambrian-1 is a family of multimodal LLMs with a vision-centric design.
🔥 Sa2VA: Marrying SAM2 with LLaVA for Dense Grounded Understanding of Images and Videos
[CVPR2024] The code for "Osprey: Pixel Understanding with Visual Instruction Tuning"
OpenEMMA: a permissively licensed open-source "reproduction" of Waymo's EMMA model
✨✨Woodpecker: Hallucination Correction for Multimodal Large Language Models
[ECCV2024] Grounded Multimodal Large Language Model with Localized Visual Tokenization
NeurIPS 2024 Paper: A Unified Pixel-level Vision LLM for Understanding, Generating, Segmenting, Editing
R1-VL: Learning to Reason with Multimodal Large Language Models via Step-wise Group Relative Policy Optimization
This project is the official implementation of 'LLMGA: Multimodal Large Language Model based Generation Assistant', ECCV2024 Oral