Benchmarks of MLLMs: Survey

A Survey on Benchmarks of Multimodal Large Language Models

Tencent, PKU, NUS, SEU, NJU

⚡ We will actively maintain this repository and incorporate new research as it emerges. If you have any questions, please contact swordli@tencent.com. We welcome collaboration on academic research and paper writing.

📌 What is This Survey About?

Multimodal Large Language Models (MLLMs) are gaining increasing popularity in both academia and industry due to their remarkable performance in various applications such as visual question answering, visual perception, understanding, and reasoning. Over the past few years, significant efforts have been made to examine MLLMs from multiple perspectives. This paper presents a comprehensive review of 200+ benchmarks and evaluations for MLLMs, focusing on (1) perception and understanding, (2) cognition and reasoning, (3) specific domains, (4) key capabilities, and (5) other modalities. Finally, we discuss the limitations of the current evaluation methods for MLLMs and explore promising future directions. Our key argument is that evaluation should be regarded as a crucial discipline to better support the development of MLLMs.

Summary of 200+ MLLM Benchmarks

Perception & Understanding

Comprehensive Evaluation

  1. "Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want". Lin W, Wei X, An R, et al.. arXiv 2024. [Paper] [Github].
  2. "CHEF: A COMPREHENSIVE EVALUATION FRAMEWORK FOR STANDARDIZED ASSESSMENT OF MULTIMODAL LARGE LANGUAGE MODELS". Shi Z, Wang Z, Fan H, et al. arXiv 2023. [paper] [Github].

Fine-grained Perception

Image Understanding

Cognition & Reasoning

General Reasoning

Knowledge-based Reasoning

Intelligence & Cognition

Specific Domains

Text-rich VQA

Decision-making Agents

Diverse Cultures & Languages

Other Applications

  1. "". **. . [Paper] [Github].
  2. "". **. . [Paper] [Github].
  3. "". **. . [Paper] [Github].
  4. "". **. . [Paper] [Github].
  5. "". **. . [Paper] [Github].
  6. "". **. . [Paper] [Github].
  7. "". **. . [Paper] [Github].
  8. "". **. . [Paper] [Github]. Instruction Following
  9. "". **. . [Paper] [Github].
  10. "". **. . [Paper] [Github].
  11. "". **. . [Paper] [Github].
  12. "". **. . [Paper] [Github].
  13. "". **. . [Paper] [Github].
  14. "". **. . [Paper] [Github].
  15. "". **. . [Paper] [Github].
  16. "". **. . [Paper] [Github].

Key Capabilities

Conversation Abilities

Long-context

  1. Mile-Bench "MileBench: Benchmarking MLLMs in Long Context". Song D, Chen S, Chen G H, et al. arXiv 2024. [Paper] [Github].
  2. MMNeedle "Multimodal Needle in a Haystack: Benchmarking Long-Context Capability of Multimodal Large Language Models". Wang H, Shi H, Tan S, et al. arXiv 2024. [Paper] [Github].
  3. MLVU "MLVU: A Comprehensive Benchmark for Multi-Task Long Video Understanding". Zhou J, Shu Y, Zhao B, et al. arXiv 2024. [Paper] [Github].

Instruction Following

  1. CoIN "CoIN: A Benchmark of Continual Instruction tuNing for Multimodel Large Language Model". Chen C, Zhu J, Luo X, et al. arXiv 2024. [Paper] [Github].
  2. MIA-Bench "MIA-Bench: Towards Better Instruction Following Evaluation of Multimodal LLMs". Qian Y, Ye H, Fauconnier J P, et al. arXiv 2024. [Paper] [Github].
  3. DEMON "Fine-tuning Multimodal LLMs to Follow Zero-shot Demonstrative Instructions". Li J, Pan K, Ge Z, et al. ICLR 2023. [Paper] [Github].
  4. VisIT-Bench "VisIT-Bench: A Benchmark for Vision-Language Instruction Following Inspired by Real-World Use". Bitton Y, Bansal H, Hessel J, et al. NeurIPS 2023. [Paper] [Github].

Hallucination

  1. POPE "Evaluating Object Hallucination in Large Vision-Language Models". Li Y, Du Y, Zhou K, et al. EMNLP 2023. [Paper] [Github].
  2. GAVIE "Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning". Liu F, Lin K, Li L, et al. ICLR 2023. [Paper] [Github].
  3. HaELM "Evaluation and Analysis of Hallucination in Large Vision-Language Models". Wang J, Zhou Y, Xu G, et al. arXiv 2023. [Paper] [Github].
  4. M-HalDetect "Detecting and Preventing Hallucinations in Large Vision Language Models". Gunjal A, Yin J, Bas E. AAAI 2024. [Paper] [Github].
  5. Bingo "Holistic Analysis of Hallucination in GPT-4V(ision): Bias and Interference Challenges". Cui C, Zhou Y, Yang X, et al. arXiv 2023. [Paper] [Github].
  6. HallusionBench "HallusionBench: An Advanced Diagnostic Suite for Entangled Language Hallucination and Visual Illusion in Large Vision-Language Models". Guan T, Liu F, Wu X, et al. CVPR 2024. [Paper] [Github].
  7. VHTest "Visual Hallucinations of Multi-modal Large Language Models". Huang W, Liu H, Guo M, et al. arXiv 2024. [Paper] [Github].
  8. CorrelationQA "The Instinctive Bias: Spurious Images lead to Hallucination in MLLMs". Han T, Lian Q, Pan R, et al. arXiv 2024. [Paper] [Github].
  9. CHAIR "Object Hallucination in Image Captioning". Rohrbach A, Hendricks L A, Burns K, et al. EMNLP 2018. [Paper] [Github]. (A minimal sketch of this metric follows the list.)
  10. MHaluBench "Unified Hallucination Detection for Multimodal Large Language Models". Chen X, Wang C, Xue Y, et al. arXiv 2024. [Paper] [Github].
  11. VideoHallucer "VideoHallucer: Evaluating Intrinsic and Extrinsic Hallucinations in Large Video-Language Models". Wang Y, Wang Y, Zhao D, et al. arXiv 2024. [Paper] [Github].
  12. MMHAL-BENCH "Aligning Large Multimodal Models with Factually Augmented RLHF". Sun Z, Shen S, Cao S, et al. arXiv 2023. [Paper] [Github].
  13. AMBER "AMBER: An LLM-free Multi-dimensional Benchmark for MLLMs Hallucination Evaluation". Wang J, Wang Y, Xu G, et al. arXiv 2023. [Paper] [Github].
  14. MMECeption "GenCeption: Evaluate Multimodal LLMs with Unlabeled Unimodal Data". Cao L, Buchner V, Senane Z, et al. arXiv 2024. [Paper] [Github].
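
The entries above only cite each benchmark. As a rough illustration of the kind of metric several of them report, the sketch below computes CHAIR-style object-hallucination rates: CHAIR_i, the fraction of mentioned objects that are not actually in the image, and CHAIR_s, the fraction of captions containing at least one such object. This is a minimal, illustrative sketch, not the official implementation of CHAIR or of any benchmark listed here; the function name and the assumption that object mentions have already been extracted into sets are our own.

```python
# Illustrative sketch only (not the official CHAIR code): CHAIR-style
# hallucination rates, assuming object mentions have already been extracted
# from captions and image annotations as Python sets.

def chair_scores(mentioned_objects, ground_truth_objects):
    """Return (CHAIR_i, CHAIR_s).

    mentioned_objects: list of sets, objects mentioned in each generated caption.
    ground_truth_objects: list of sets, objects annotated as present in each image.
    """
    total_mentions = 0
    hallucinated_mentions = 0
    captions_with_hallucination = 0

    for mentioned, present in zip(mentioned_objects, ground_truth_objects):
        hallucinated = mentioned - present        # mentioned but not in the image
        total_mentions += len(mentioned)
        hallucinated_mentions += len(hallucinated)
        if hallucinated:
            captions_with_hallucination += 1

    chair_i = hallucinated_mentions / max(total_mentions, 1)
    chair_s = captions_with_hallucination / max(len(mentioned_objects), 1)
    return chair_i, chair_s


if __name__ == "__main__":
    # Toy example: the second caption hallucinates a "dog".
    mentioned = [{"person", "bicycle"}, {"cat", "dog"}]
    present = [{"person", "bicycle", "car"}, {"cat"}]
    print(chair_scores(mentioned, present))  # -> (0.25, 0.5)
```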

Trustworthiness

Robustness

  1. MAD-Bench "How Easy is It to Fool Your Multimodal LLMs? An Empirical Analysis on Deceptive Prompts". Qian Y, Zhang H, Yang Y, et al. arXiv 2024. [Paper] [Github].
  2. MMR "Seeing Clearly, Answering Incorrectly: A Multimodal Robustness Benchmark for Evaluating MLLMs on Leading Questions". Liu Y, Liang Z, Wang Y, et al. arXiv 2024. [Paper] [Github].
  3. MM-SpuBench "MM-SpuBench: Towards Better Understanding of Spurious Biases in Multimodal LLMs". Ye W, Zheng G, Ma Y, et al. arXiv 2024. [Paper] [Github].
  4. MM-SAP "MM-SAP: A Comprehensive Benchmark for Assessing Self-Awareness of Multimodal Large Language Models in Perception". Wang Y, Liao Y, Liu H, et al. arXiv 2024. [Paper] [Github].
  5. BenchLMM "BenchLMM: Benchmarking Cross-style Visual Capability of Large Multimodal Models". Cai R, Song Z, Guan D, et al. arXiv 2023. [Paper] [Github].
  6. VQAv2-IDK "Visually Dehallucinative Instruction Generation: Know What You Don’t Know". Cha S, Lee J, Lee Y, et al. ICASSP 2024. [Paper] [Github].

Safety

  1. MMUBench "Single Image Unlearning: Efficient Machine Unlearning in Multimodal Large Language Models". Li J, Wei Q, Zhang C, et al. arXiv 2024. [Paper] [Github].
  2. JailBreakV-28K "JailBreakV-28K: A Benchmark for Assessing the Robustness of MultiModal Large Language Models against Jailbreak Attacks". Luo W, Ma S, Liu X, et al. arXiv 2024. [Paper] [Github].
  3. MultiTrust "Benchmarking Trustworthiness of Multimodal Large Language Models: A Comprehensive Study". Zhang Y, Huang Y, Sun Y, et al. arXiv 2024. [Paper] [Github].
  4. MM-SafetyBench "MM-SafetyBench: A Benchmark for Safety Evaluation of Multimodal Large Language Models". Liu X, Zhu Y, Gu J, et al. ECCV 2024. [Paper] [Github].
  5. SHIELD "SHIELD: An Evaluation Benchmark for Face Spoofing and Forgery Detection with Multimodal Large Language Models". Shi Y, Gao Y, Lai Y, et al. arXiv 2024. [Paper] [Github].
  6. RTVLM "Red Teaming Visual Language Models". Li M, Li L, Yin Y, et al. arXiv 2024. [Paper] [Github].

Other Modalities

Videos

Temporal Perception

  1. MVBench "MVBench: A Comprehensive Multi-modal Video Understanding Benchmark". Li K, Wang Y, He Y, et al. CVPR 2024. [Paper] [Github].
  2. TimeIT "TimeChat: A Time-sensitive Multimodal Large Language Model for Long Video Understanding". Ren S, Yao L, Li S, et al. CVPR 2024. [Paper] [Github].
  3. ViLMA "ViLMA: A Zero-Shot Benchmark for Linguistic and Temporal Grounding in Video-Language Models". Kesen I, Pedrotti A, Dogan M, et al. ICLR 2024. [Paper] [Github].
  4. VITATECS "VITATECS: A Diagnostic Dataset for Temporal Concept Understanding of Video-Language Models". Li S, Li L, Ren S, et al. arXiv 2023. [Paper] [Github].
  5. TempCompass "TempCompass: Do Video LLMs Really Understand Videos?". Liu Y, Li S, Liu Y, et al. arXiv 2024. [Paper] [Github].
  6. OSCaR "OSCaR: Object State Captioning and State Change Representation". Nguyen N, Bi J, Vosoughi A, et al. arXiv 2024. [Paper] [Github].
  7. ADLMCQ "LLAVIDAL: Benchmarking Large Language Vision Models for Daily Activities of Living". Chakraborty R, Sinha A, Reilly D, et al. arXiv 2024. [Paper] [Github].
  8. Perception Test "Perception Test: A Diagnostic Benchmark for Multimodal Video Models". Patraucean V, Smaira L, Gupta A, et al. NeurIPS 2024. [Paper] [Github].

Long Video Understanding

  1. MovieChat-1k "MovieChat: From Dense Token to Sparse Memory for Long Video Understanding". Song E, Chai W, Wang G, et al. CVPR 2024. [Paper] [Github].
  2. EgoSchema "EgoSchema: A Diagnostic Benchmark for Very Long-form Video Language Understanding". Mangalam K, Akshulakov R, Malik J. NeurIPS 2023. [Paper] [Github].
  3. Event-Bench "Towards Event-oriented Long Video Understanding". arXiv 2024. [Paper] [Github].
  4. MLVU "MLVU: A Comprehensive Benchmark for Multi-Task Long Video Understanding". Zhou J, Shu Y, Zhao B, et al. arXiv 2024. [Paper] [Github].

Comprehensive Evaluation

  1. Video-Bench "Video-Bench: A Comprehensive Benchmark and Toolkit for Evaluating Video-based Large Language Models". Ning M, Zhu B, Xie Y, et al. arXiv 2023. [Paper] [Github].
  2. MMBench-Video "MMBench-Video: A Long-Form Multi-Shot Benchmark for Holistic Video Understanding". Fang X, Mao K, Duan H, et al. arXiv 2024. [Paper] [Github].
  3. Video-MME "Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis". Fu C, Dai Y, Luo Y, et al. arXiv 2024. [Paper] [Github].
  4. AutoEval-Video "AutoEval-Video: An Automatic Benchmark for Assessing Large Vision Language Models in Open-Ended Video Question Answering". Chen X, Lin Y, Zhang Y, et al. arXiv 2023. [Paper] [Github].
  5. MMWorld "MMWorld: Towards Multi-discipline Multi-faceted World Model Evaluation in Videos". He X, Feng W, Zheng K, et al. arXiv 2024. [Paper] [Github].
  6. WorldNet "WorldGPT: Empowering LLM as Multimodal World Model". Ge Z, Huang H, Zhou M, et al. arXiv 2024. [Paper] [Github].

Audio

  1. Dynamic-SUPERB "Dynamic-SUPERB: Towards a Dynamic, Collaborative, and Comprehensive Instruction-Tuning Benchmark for Speech". Huang C, Lu K H, Wang S H, et al. ICASSP 2024. [Paper] [Github].
  2. MuChoMusic "MuChoMusic: Evaluating Music Understanding in Multimodal Audio-Language Models". Weck B, Manco I, Benetos E, et al. arXiv 2024. [Paper] [Github].
  3. AIR-Bench "AIR-Bench: Benchmarking Large Audio-Language Models via Generative Comprehension". Yang Q, Xu J, Liu W, et al. arXiv 2024. [Paper] [Github].

3D Points

  1. ScanQA "ScanQA: 3D Question Answering for Spatial Scene Understanding". Azuma D, Miyanishi T, Kurita S, et al. CVPR 2022. [Paper] [Github].
  2. ScanReason "ScanReason: Empowering 3D Visual Grounding with Reasoning Capabilities". Zhu C, Wang T, Zhang W, et al. arXiv 2024. [Paper] [Github].
  3. LAMM "LAMM: Language-Assisted Multi-Modal Instruction-Tuning Dataset, Framework, and Benchmark". Yin Z, Wang J, Cao J, et al. NeurIPS 2024. [Paper] [Github].
  4. SpatialRGPT "SpatialRGPT: Grounded Spatial Reasoning in Vision Language Model". Cheng A C, Yin H, Fu Y, et al. arXiv 2024. [Paper] [Github].
  5. M3DBench "M3DBench: Let’s Instruct Large Models with Multi-modal 3D Prompts". Li M, Chen X, Zhang C, et al. arXiv 2023. [Paper] [Github].

Omni-modal

  1. MCUB "Model Composition for Multimodal Large Language Models". Chen C, Du Y, Fang Z, et al. arXiv 2024. [Paper] [Github].
  2. AVQA "AVQA: A Dataset for Audio-Visual Question Answering on Videos". Yang P, Wang X, Duan X, et al. MM 2022. [Paper] [Github].
  3. MusicAVQA "Learning to Answer Questions in Dynamic Audio-Visual Scenarios". Li G, Wei Y, Tian Y, et al. CVPR 2022. [Paper] [Github].
  4. MMT-Bench "MMT-Bench: A Comprehensive Multimodal Benchmark for Evaluating Large Vision-Language Models Towards Multitask AGI". Ying K, Meng F, Wang J, et al. arXiv 2024. [Paper] [Github].
