The framework supports a series of dense and MoE Large Language Models (LLMs) from 2B to 34B, with image understanding, reasoning, and generation supported simultaneously. We build this repo based on LLaVA.
- [05/03] 🔥 We support LLaMA3-based models! Feel free to try them here.
- [04/15] 🔥 The Hugging Face demo is available. It is the 13B-HD version; feel free to try it.
- [03/28] 🔥 Mini-Gemini is coming! We release the paper, demo, code, models, and data!
We provide some selected examples in this section. More examples can be found on our project page. Feel free to try our online demo!
Please follow the instructions below to install the required packages.
NOTE: If you want to use the 2B version, please make sure to install the latest version of Transformers (>= 4.38.0).
- Clone this repository
git clone https://github.com/dvlab-research/MGM.git
- Install Package
conda create -n mgm python=3.10 -y
conda activate mgm
cd MGM
pip install --upgrade pip # enable PEP 660 support
pip install -e .
- Install additional packages for training cases
pip install ninja
pip install flash-attn --no-build-isolation
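To confirm the environment is set up as described (including the Transformers requirement from the NOTE above), a quick sanity check:
# Print the installed versions; transformers should be >= 4.38.0 for the 2B model
python -c "import transformers; print(transformers.__version__)"
python -c "import flash_attn; print(flash_attn.__version__)"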
The framework is conceptually simple: dual vision encoders provide low-resolution visual embeddings and high-resolution candidates; patch info mining conducts patch-level mining between high-resolution regions and low-resolution visual queries; and the LLM marries text and images for both comprehension and generation at the same time.
We provide all our fully finetuned models on Stage 1 and 2 data:
Model | LR | HR | Base LLM | Vision Encoder | Finetuning Data | Finetuning schedule | Download |
---|---|---|---|---|---|---|---|
MGM-2B | 336 | 768 | Gemma-2B | CLIP-L | MGM-Instruct | full_ft-1e | ckpt |
MGM-7B | 336 | 768 | Vicuna-7B-v1.5 | CLIP-L | MGM-Instruct | full_ft-1e | ckpt |
MGM-13B | 336 | 768 | Vicuna-13B-v1.5 | CLIP-L | MGM-Instruct | full_ft-1e | ckpt |
MGM-8B | 336 | 768 | LLaMA-3-8B-Instruct | CLIP-L | MGM-Instruct | full_ft-1e | ckpt |
MGM-8x7B | 336 | 768 | Mixtral-8x7B-Instruct-v0.1 | CLIP-L | MGM-Instruct | full_ft-1e | ckpt |
MGM-34B | 336 | 768 | Nous-Hermes-2-Yi-34B | CLIP-L | MGM-Instruct | full_ft-1e | ckpt |
MGM-7B-HD | 672 | 1536 | Vicuna-7B-v1.5 | CLIP-L | MGM-Instruct | full_ft-1e | ckpt |
MGM-13B-HD | 672 | 1536 | Vicuna-13B-v1.5 | CLIP-L | MGM-Instruct | full_ft-1e | ckpt |
MGM-8B-HD | 672 | 1536 | LLaMA-3-8B-Instruct | CLIP-L | MGM-Instruct | full_ft-1e | ckpt |
MGM-8x7B-HD | 672 | 1536 | Mixtral-8x7B-Instruct-v0.1 | CLIP-L | MGM-Instruct | full_ft-1e | ckpt |
MGM-34B-HD | 672 | 1536 | Nous-Hermes-2-Yi-34B | CLIP-L | MGM-Instruct | full_ft-1e | ckpt |
Here are the pretrained weights on Stage 1 data only:
Model | LR | HR | Base LLM | Vision Encoder | Pretrain Data | Pretraining schedule | Download |
---|---|---|---|---|---|---|---|
MGM-2B | 336 | 768 | Gemma-2B | CLIP-L | MGM-Pretrain | 1e | ckpt |
MGM-7B | 336 | 768 | Vicuna-7B-v1.5 | CLIP-L | MGM-Pretrain | 1e | ckpt |
MGM-13B | 336 | 768 | Vicuna-13B-v1.5 | CLIP-L | MGM-Pretrain | 1e | ckpt |
MGM-8x7B | 336 | 768 | Mixtral-8x7B-Instruct-v0.1 | CLIP-L | MGM-Pretrain | 1e | ckpt |
MGM-34B | 336 | 768 | Nous-Hermes-2-Yi-34B | CLIP-L | MGM-Pretrain | 1e | ckpt |
We provide the processed data for model training. For model pretraining, please download the following image-based training data and organize it as listed (`->` means put the data in the given local folder); a sketch for creating this layout follows the list.
- LLaVA Images -> `data/MGM-Pretrain/images`, `data/MGM-Finetune/llava/LLaVA-Pretrain/images`
- ALLaVA Caption -> `data/MGM-Pretrain/ALLaVA-4V`
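If you are building this layout by hand, here is a minimal sketch using only the paths listed above:
# Create the expected pretraining data folders
mkdir -p data/MGM-Pretrain/images data/MGM-Pretrain/ALLaVA-4V
mkdir -p data/MGM-Finetune/llava/LLaVA-Pretrain/images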
For model finetuning, please download the following instruction data and organize it as listed (`->` means put the data in the given local folder):
- COCO train2017 -> `data/MGM-Finetune/coco`
- GQA -> `data/MGM-Finetune/gqa`
- OCR-VQA (we save all files as `.jpg`; see the conversion sketch after this list) -> `data/MGM-Finetune/ocr_vqa`
- TextVQA (not included for training) -> `data/MGM-Finetune/textvqa`
- VisualGenome part1, VisualGenome part2 -> `data/MGM-Finetune/vg`
- ShareGPT4V-100K -> `data/MGM-Finetune/sam`, `share_textvqa`, `wikiart`, `web-celebrity`, `web-landmark`
- LAION GPT4V -> `data/MGM-Finetune/gpt4v-dataset`
- ALLaVA Instruction -> `data/MGM-Pretrain/ALLaVA-4V`
- DocVQA -> `data/MGM-Finetune/docvqa`
- ChartQA -> `data/MGM-Finetune/chartqa`
- DVQA -> `data/MGM-Finetune/dvqa`
- AI2D -> `data/MGM-Finetune/ai2d`
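As noted in the OCR-VQA item above, all of its files are expected as `.jpg`, while the raw download contains mixed formats. A hypothetical conversion pass with ImageMagick (the exact folder holding the images depends on your download; adjust the path accordingly):
# Write .jpg copies next to any .png downloads (hypothetical path)
mogrify -format jpg data/MGM-Finetune/ocr_vqa/*.png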
For model evaluation, please follow this link for preparation. We use some extra benchmarks for evaluation; please download the following image-based data and organize it as listed (`->` means put the data in the given local folder).
Please put the pretraining data, finetuning data, and evaluation data in the `MGM-Pretrain`, `MGM-Finetune`, and `MGM-Eval` subfolders following Structure.
For meta info, please download the following files and organize them as in Structure.
Data file name | Size |
---|---|
mgm_pretrain.json | 1.68 G |
mgm_instruction.json | 1.79 G |
mgm_generation_pure_text.json | 0.04 G |
IMPORTANT: `mgm_generation_pure_text.json` is a generation-related subset. DO NOT merge it with `mgm_instruction.json`, as it is already included there. You may merge this file with your own customized LLM/VLM SFT dataset to enable the reasoning-generation ability.
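If you do merge it into your own SFT data, a minimal sketch with `jq` (this assumes both files are LLaVA-style JSON arrays of conversation records; `my_custom_sft.json` and `merged_sft.json` are hypothetical names):
# Concatenate the two JSON arrays into a single SFT file
jq -s 'add' mgm_generation_pure_text.json my_custom_sft.json > merged_sft.json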
We recommend downloading the pretrained weights from the following links: CLIP-Vit-L-336, OpenCLIP-ConvNeXt-L, Gemma-2b-it, Vicuna-7b-v1.5, Vicuna-13b-v1.5, Mixtral-8x7B-Instruct-v0.1, and Nous-Hermes-2-Yi-34B, and put them in `model_zoo` following Structure.
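For example, the weights can be fetched with the Hugging Face CLI. This is a sketch assuming a recent `huggingface_hub` release (which provides `huggingface-cli download`); the repo ids shown are the public Vicuna and CLIP releases:
# Download a backbone LLM and the low-resolution vision encoder into model_zoo/
huggingface-cli download lmsys/vicuna-7b-v1.5 --local-dir model_zoo/LLM/vicuna/7B-V1.5
huggingface-cli download openai/clip-vit-large-patch14-336 --local-dir model_zoo/OpenAI/clip-vit-large-patch14-336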
The folder structure should be organized as follows before training.
MGM
├── mgm
├── scripts
├── work_dirs
│ ├── MGM
│ │ ├── MGM-2B
│ │ ├── ...
├── model_zoo
│ ├── LLM
│ │ ├── gemma
│ │ │ ├── gemma-2b-it
│ │ ├── vicuna
│ │ │ ├── 7B-V1.5
│ │ │ ├── 13B-V1.5
│ │ ├── llama-3
│ │ │ ├── Meta-Llama-3-8B-Instruct
│ │ │ ├── Meta-Llama-3-70B-Instruct
│ │ ├── mixtral
│ │ │ ├── Mixtral-8x7B-Instruct-v0.1
│ │ ├── Nous-Hermes-2-Yi-34B
│ ├── OpenAI
│ │ ├── clip-vit-large-patch14-336
│ │ ├── openclip-convnext-large-d-320-laion2B-s29B-b131K-ft-soup
├── data
│ ├── MGM-Pretrain
│ │ ├── mgm_pretrain.json
│ │ ├── images
│ │ ├── ALLaVA-4V
│ ├── MGM-Finetune
│ │ ├── mgm_instruction.json
│ │ ├── llava
│ │ ├── coco
│ │ ├── gqa
│ │ ├── ocr_vqa
│ │ ├── textvqa
│ │ ├── vg
│ │ ├── gpt4v-dataset
│ │ ├── sam
│ │ ├── share_textvqa
│ │ ├── wikiart
│ │ ├── web-celebrity
│ │ ├── web-landmark
│ │ ├── ALLaVA-4V
│ │ ├── docvqa
│ │ ├── chartqa
│ │ ├── dvqa
│ │ ├── ai2d
│ ├── MGM-Eval
│ │ ├── MMMU
│ │ ├── MMB
│ │ ├── MathVista
│ │ ├── ...
The training process consists of two stages: (1) feature alignment stage: bridge the vision and language tokens; (2) instruction tuning stage: teach the model to follow multimodal instructions.
Our models are trained on 8 A100 GPUs with 80GB memory. To train on fewer GPUs, you can reduce the `per_device_train_batch_size` and increase the `gradient_accumulation_steps` accordingly. Always keep the global batch size the same: `per_device_train_batch_size` x `gradient_accumulation_steps` x `num_gpus`.
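For example (hypothetical numbers; only the product matters): a script tuned for 8 GPUs with `per_device_train_batch_size=16` and `gradient_accumulation_steps=1` has a global batch size of 16 x 1 x 8 = 128, so on 4 GPUs you would set `gradient_accumulation_steps=2` to keep 16 x 2 x 4 = 128.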
Please make sure you download and organize the data following Preparation before training.
NOTE: Please set `hostfile` for 2-machine training and `hostfile_4` for 4-machine training.
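For reference, assuming the scripts launch with DeepSpeed (as in LLaVA-style code bases), a hostfile lists one machine per line with its GPU count; the host names below are placeholders:
# Example hostfile for 2-machine training, 8 GPUs per node (requires passwordless SSH)
node-01 slots=8
node-02 slots=8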
If you want to train and finetune the framework, please run the following command for MGM-7B with image size 336:
bash scripts/llama/train/stage_1_2_full_v7b_336_hr_768.sh
or for MGM-13B with image size 336:
bash scripts/llama/train/stage_1_2_full_v13b_336_hr_768.sh
Because we reuse the pre-trained projector weights from MGM-7B, you can directly use MGM-7B-HD with image size 672 for stage-2 instruction tuning:
bash scripts/llama/train/stage_2_full_v7b_672_hr_1536.sh
Please find more training scripts for `gemma`, `llama`, `mixtral`, and `yi` in `scripts/`.
We perform evaluation on several image-based benchmarks. Please download the evaluation data following Preparation and organize them as in Structure.
Model | LLM | Res. | Link | TextVQA | MMB | MME | MM-Vet | MMMU_val | MMMU_test | MathVista |
---|---|---|---|---|---|---|---|---|---|---|
MGM-2B | Gemma-2B | 336 | ckpt | 56.2 | 59.8 | 1341/312 | 31.1 | 31.7 | 29.1 | 29.4 |
MGM-7B | Vicuna-7B-v1.5 | 336 | ckpt | 65.2 | 69.3 | 1523/316 | 40.8 | 36.1 | 32.8 | 31.4 |
MGM-13B | Vicuna-13B-v1.5 | 336 | ckpt | 65.9 | 68.5 | 1565/322 | 46.0 | 38.1 | 33.5 | 37.0 |
MGM-8B | LLaMA-3-8B-Instruct | 336 | ckpt | 67.6 | 72.7 | 1606/341 | 47.3 | 38.2 | 36.3 | -- |
MGM-8x7B | Mixtral-8x7B-Instruct-v0.1 | 336 | ckpt | 69.2 | 75.6 | 1639/379 | 45.8 | 41.8 | 37.1 | 41.8 |
MGM-34B | Nous-Hermes-2-Yi-34B | 336 | ckpt | 70.1 | 79.6 | 1666/439 | 53.0 | 48.7 | 43.6 | 38.9 |
MGM-7B-HD | Vicuna-7B-v1.5 | 672 | ckpt | 68.4 | 65.8 | 1546/319 | 41.3 | 36.8 | 32.9 | 32.2 |
MGM-13B-HD | Vicuna-13B-v1.5 | 672 | ckpt | 70.2 | 68.6 | 1597/320 | 50.5 | 37.3 | 35.1 | 37.0 |
MGM-8B-HD | LLaMA-3-8B-Instruct | 672 | ckpt | 71.6 | -- | 1532/357 | -- | 37.0 | -- | -- |
MGM-8x7B-HD | Mixtral-8x7B-Instruct-v0.1 | 672 | ckpt | 71.9 | 74.7 | 1633/356 | 53.5 | 40.0 | 37.0 | 43.1 |
MGM-34B-HD | Nous-Hermes-2-Yi-34B | 672 | ckpt | 74.1 | 80.6 | 1659/482 | 59.3 | 48.0 | 44.9 | 43.3 |
If you want to evaluate the model on image-based benchmarks, please use the scripts in `scripts/MODEL_PATH/eval`.
For example, run the following command for TextVQA evaluation with MGM-7B-HD:
bash scripts/llama/eval/textvqa.sh
Please find more evaluation scripts in `scripts/MODEL_PATH`.
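To run several benchmarks back to back, a small shell loop works; `textvqa` is the script shown above, while the other names are hypothetical (check `scripts/llama/eval/` for the scripts that actually exist):
# Run a batch of image benchmarks sequentially (hypothetical script names)
for bench in textvqa mmb mme; do
    bash scripts/llama/eval/${bench}.sh
done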
Chat with images without the need of a Gradio interface. It also supports multiple GPUs, as well as 4-bit and 8-bit quantized inference. Please make sure you have installed diffusers and PaddleOCR (the latter only for a better OCR experience), and try this for image understanding and generation inference:
python -m mgm.serve.cli \
--model-path work_dirs/MGM/MGM-13B-HD \
--image-file <path to your image>
or try this for a better experience with OCR (make sure you have installed PaddleOCR):
python -m mgm.serve.cli \
--model-path work_dirs/MGM/MGM-13B-HD \
--image-file <path to your image> \
--ocr
or try this for inference with generation (make sure you have installed diffusers):
python -m mgm.serve.cli \
--model-path work_dirs/MGM/MGM-13B-HD \
--image-file <path to your image> \
--gen
You can also try 8-bit or even 4-bit quantization for efficient inference:
python -m mgm.serve.cli \
--model-path work_dirs/MGM/MGM-13B-HD \
--image-file <path to your image> \
--gen \
--load-8bit
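The flags above can also be combined, e.g. OCR-assisted chat together with generation (whether any given pair of flags composes is an assumption; check `mgm/serve/cli.py` if in doubt):
python -m mgm.serve.cli \
--model-path work_dirs/MGM/MGM-13B-HD \
--image-file <path to your image> \
--ocr \
--gen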
Here, we adopt a Gradio UI similar to that in LLaVA to provide a user-friendly interface for our models. To launch a Gradio demo locally, please run the following commands one by one. If you plan to launch multiple model workers to compare different checkpoints, you only need to launch the controller and the web server ONCE.
python -m mgm.serve.controller --host 0.0.0.0 --port 10000
python -m mgm.serve.gradio_web_server --controller http://localhost:10000 --model-list-mode reload
You just launched the Gradio web interface. Now, you can open the web interface with the URL printed on the screen. You may notice that there is no model in the model list. Do not worry, as we have not launched any model worker yet. It will be automatically updated when you launch a model worker.
This is the actual worker that performs the inference on the GPU. Each worker is responsible for a single model specified in `--model-path`.
python -m mgm.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --model-path work_dirs/MGM/MGM-13B-HD
Wait until the process finishes loading the model and you see "Uvicorn running on ...". Now, refresh your Gradio web UI, and you will see the model you just launched in the model list.
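Optionally, you can confirm that the worker registered with the controller. This assumes the controller keeps the LLaVA-style API, which exposes a POST `/list_models` endpoint:
# Ask the controller which model workers are registered (assumes LLaVA-style API)
curl -X POST http://localhost:10000/list_models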
You can launch as many workers as you want, and compare between different models in the same Gradio interface. Please keep the `--controller` the same, and modify the `--port` and `--worker` to a different port number for each worker.
python -m mgm.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port <different from 40000, say 40001> --worker http://localhost:<change accordingly, i.e. 40001> --model-path work_dirs/MGM/MGM-34B-HD
If you are using an Apple device with an M1 or M2 chip, you can specify the mps device by using the `--device` flag: `--device mps`.
If the VRAM of your GPU is less than 24GB (e.g., an RTX 3090 or RTX 4090), you may try running it with multiple GPUs. Our latest code base will automatically try to use multiple GPUs if you have more than one. You can specify which GPUs to use with `CUDA_VISIBLE_DEVICES`. Below is an example of running with the first two GPUs.
CUDA_VISIBLE_DEVICES=0,1 python -m mgm.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --model-path work_dirs/MGM/MGM-13B-HD
You can launch the model worker with quantized bits (4-bit, 8-bit), which allows you to run inference with a reduced GPU memory footprint. Note that inference with quantized bits may not be as accurate as with the full-precision model. Simply append `--load-4bit` or `--load-8bit` to the model worker command that you are executing. Below is an example of running with 4-bit quantization.
python -m mgm.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --model-path work_dirs/MGM/MGM-13B-HD --load-4bit
If you find this repo useful for your research, please consider citing the paper
@article{li2024mgm,
title={Mini-Gemini: Mining the Potential of Multi-modality Vision Language Models},
author={Li, Yanwei and Zhang, Yuechen and Wang, Chengyao and Zhong, Zhisheng and Chen, Yixin and Chu, Ruihang and Liu, Shaoteng and Jia, Jiaya},
journal={arXiv:2403.18814},
year={2024}
}
This project is not affiliated with Google LLC.
We would like to thank the following repos for their great work:
- This work is built upon LLaVA.
- This work utilizes LLMs from Gemma, Vicuna, LLaMA 3, Mixtral, and Nous-Hermes.
The data and checkpoints are intended and licensed for research use only. They are also restricted to uses that follow the license agreements of LLaVA, LLaMA, Vicuna, and GPT-4. The dataset is CC BY-NC 4.0 (allowing only non-commercial use), and models trained using the dataset should not be used outside of research purposes.