# Chat-UniVi-13B

We release Chat-UniVi-13B. Our proposed unified visual representation framework greatly reduces the number of visual tokens, so you can train a 13B unified image and video understanding model with full-parameter fine-tuning directly on 8 A100 GPUs within 3 days.

## Main Results

### Image understanding

| Methods | LLM | Conversation | Detail Description | Complex Reasoning | All |
|---|---|---|---|---|---|
| Chat-UniVi-7B | Vicuna-7B | 84.1 | 74.2 | 93.7 | 84.2 |
| Chat-UniVi-13B | Vicuna-13B | 84.1 | 79.4 | 94.7 | 86.1 |

### Video understanding

| Methods | LLM | Correct | Detail | Context | Temporal | Consistency |
|---|---|---|---|---|---|---|
| Chat-UniVi-7B | Vicuna-7B | 57.8 | 58.2 | 69.2 | 57.8 | 56.2 |
| Chat-UniVi-13B | Vicuna-13B | 59.4 | 59.8 | 70.5 | 58.0 | 60.6 |

### ScienceQA

NAT, SOC, and LAN are Subject categories; TXT, IMG, and NO are Context Modalities; G1-6 and G7-12 are Grade bands.

| Methods | LLM | Average | NAT | SOC | LAN | TXT | IMG | NO | G1-6 | G7-12 |
|---|---|---|---|---|---|---|---|---|---|---|
| Chat-UniVi-7B | Vicuna-7B | 88.78 | 88.50 | 93.03 | 85.91 | 88.51 | 85.97 | 88.15 | 88.88 | 88.60 |
| Chat-UniVi-13B | Vicuna-13B | 90.99 | 90.41 | 95.05 | 88.91 | 89.64 | 88.05 | 90.94 | 91.19 | 90.64 |

### VideoQA

| Methods | LLM | MSRVTT-QA (Accuracy / Score) | MSVD-QA (Accuracy / Score) | TGIF-QA (Accuracy / Score) | ActivityNet-QA (Accuracy / Score) |
|---|---|---|---|---|---|
| Chat-UniVi-7B | Vicuna-7B | 54.6 / 3.1 | 65.0 / 3.6 | 60.3 / 3.4 | 45.8 / 3.2 |
| Chat-UniVi-13B | Vicuna-13B | - | - | - | 46.4 / 3.6 |

### Hallucination Evaluation (POPE)

Coming soon.

## Train the 13B model

All you have to do is replace the base model with the 13B model (Vicuna-13B); the rest of the training recipe stays the same.
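
The `${LLM model path}`, `${stage1 save path}`, and `${stage2 save path}` tokens in the commands below are placeholders to be substituted with your local paths. As a minimal sketch (the paths and variable names here are illustrative, not part of the repo), you could keep them in shell variables defined once up front:

```bash
# Illustrative values only -- substitute your own local checkpoints and output dirs.
LLM_MODEL_PATH=/checkpoints/vicuna-13b          # hypothetical location of the base LLM weights
STAGE1_SAVE=/checkpoints/chat-univi-13b-stage1  # where Stage 1 writes its output
STAGE2_SAVE=/checkpoints/chat-univi-13b-stage2  # where Stage 2 writes its output
```

Then pass `$LLM_MODEL_PATH` where the commands say `${LLM model path}`, and likewise for the two save paths.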

### Stage1: Multimodal Pre-training

```bash
deepspeed \
--include localhost:0,1,2,3,4,5,6,7 \
--master_port=29602 \
ChatUniVi/train/train_mem.py \
--deepspeed scripts/zero3.json \
--model_name_or_path ${LLM model path} \
--version v1 \
--model_use PRETUNE \
--dataset_use Pretrain \
--vision_tower openai/clip-vit-large-patch14 \
--tune_mm_mlp_adapter True \
--mm_vision_select_layer -2 \
--mm_use_im_start_end False \
--mm_use_im_patch_token False \
--bf16 True \
--output_dir ${stage1 save path} \
--num_train_epochs 1 \
--per_device_train_batch_size 16 \
--per_device_eval_batch_size 4 \
--gradient_accumulation_steps 1 \
--evaluation_strategy "no" \
--save_strategy "steps" \
--save_steps 24000 \
--save_total_limit 1 \
--learning_rate 2e-3 \
--weight_decay 0. \
--warmup_ratio 0.03 \
--lr_scheduler_type "cosine" \
--logging_steps 1 \
--tf32 True \
--model_max_length 2048 \
--gradient_checkpointing True \
--dataloader_num_workers 4 \
--lazy_preprocess True \
--report_to wandb
```
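
Stage 2 consumes the projector weights that Stage 1 writes, via the `--pretrain_mm_mlp_adapter` flag in the command below. A quick sanity check before launching Stage 2 (the variable name and path are illustrative, as above):

```bash
# Verify Stage 1 produced the projector checkpoint that Stage 2 expects.
STAGE1_SAVE=/checkpoints/chat-univi-13b-stage1  # substitute your actual stage1 save path
if [ ! -f "${STAGE1_SAVE}/mm_projector.bin" ]; then
    echo "mm_projector.bin not found under ${STAGE1_SAVE}; re-check the Stage 1 run" >&2
    exit 1
fi
```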

### Stage2: Joint Instruction Tuning

```bash
deepspeed \
--include localhost:0,1,2,3,4,5,6,7 \
--master_port=29601 \
ChatUniVi/train/train_mem.py \
--deepspeed scripts/zero2.json \
--model_name_or_path ${LLM model path} \
--version v1 \
--model_use FINETUNE \
--dataset_use FINETUNE \
--vision_tower openai/clip-vit-large-patch14 \
--pretrain_mm_mlp_adapter ${stage1 save path}/mm_projector.bin \
--mm_vision_select_layer -2 \
--mm_use_im_start_end False \
--mm_use_im_patch_token False \
--bf16 True \
--output_dir ${stage2 save path} \
--num_train_epochs 2 \
--per_device_train_batch_size 16 \
--per_device_eval_batch_size 4 \
--gradient_accumulation_steps 1 \
--evaluation_strategy "no" \
--save_strategy "steps" \
--save_steps 50000 \
--save_total_limit 1 \
--learning_rate 2e-5 \
--weight_decay 0. \
--warmup_ratio 0.03 \
--lr_scheduler_type "cosine" \
--logging_steps 1 \
--tf32 True \
--model_max_length 2048 \
--gradient_checkpointing True \
--dataloader_num_workers 4 \
--lazy_preprocess True \
--report_to wandb
```
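
For reference, with `--include localhost:0,1,2,3,4,5,6,7` (8 GPUs), `--per_device_train_batch_size 16`, and `--gradient_accumulation_steps 1`, the effective global batch size works out to 8 × 16 × 1 = 128 in both stages.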