Welcome to Awesome Vision-Language Compositionality, an extensively curated collection of research papers and resources on compositional understanding in vision-language models (VLMs). This repository serves as a comprehensive resource for keeping up to date with the latest advances and for getting an overarching view of the vision-language compositionality landscape.
We welcome contributions and feedback to continuously improve and expand this collection.
How to contribute?
Preserving Multi-Modal Capabilities of Pre-trained VLMs for Improving Vision-Linguistic Compositionality, EMNLP 2024.
[Paper] [Project Page] [Code]
TL;DR: We present a new fine-tuning framework that improves the compositional reasoning of CLIP without sacrificing its multi-modal capabilities.
Exploring the Spectrum of Visio-Linguistic Compositionality and Recognition, CVPRW 2024.
[Paper] [Code]
TL;DR: We comprehensively curate VLMs and benchmarks for compositionality and recognition evaluation!
- Compositionality in Image-Text Understanding
- Compositionality in Video-Text Understanding
- Compositionality in Text-to-Image Generation
- 🗂️ Dataset: New benchmarks or datasets for evaluating compositionality.
- 🤖 Model: New architectures or training methodologies for enhanced compositional understanding.
- ⚖️ Evaluation: Assessment metrics and benchmarks for compositional reasoning.
Compositionality in Image-Text Understanding
🗂️🤖 Cambrian-1: A Fully Open, Vision-Centric Exploration of Multimodal LLMs. (CV-Bench). [NeurIPS, 2024].
Shengbang Tong, Ellis Brown, Penghao Wu, Sanghyun Woo, Manoj Middepogu, Sai Charitha Akula, Jihan Yang, Shusheng Yang, Adithya Iyer, Xichen Pan, Austin Wang, Rob Fergus, Yann LeCun, Saining Xie.
[Paper] [Code] [HF Dataset]
🗂️ NaturalBench: Evaluating Vision-Language Models on Natural Adversarial Samples. (NaturalBench). [NeurIPS, 2024].
Baiqi Li, Zhiqiu Lin, Wenxuan Peng, Jean de Dieu Nyandwi, Daniel Jiang, Zixian Ma, Simran Khanuja, Ranjay Krishna, Graham Neubig, Deva Ramanan.
[Paper] [Code] [HF Dataset]
🗂️🤖 TripletCLIP: Improving Compositional Reasoning of CLIP via Synthetic Vision-Language Negatives. (TripletCLIP). [NeurIPS, 2024].
Maitreya Patel, Abhiram Kusumba, Sheng Cheng, Changhoon Kim, Tejas Gokhale, Chitta Baral, Yezhou Yang.
[Paper] [Code] [HF Dataset]
🗂️ ConMe: Rethinking Evaluation of Compositional Reasoning for Modern VLMs. (ConMe). [NeurIPS, 2024].
Irene Huang, Wei Lin, M. Jehanzeb Mirza, Jacob A. Hansen, Sivan Doveh, Victor Ion Butoi, Roei Herzig, Assaf Arbelle, Hilde Kuehne, Trevor Darrell, Chuang Gan, Aude Oliva, Rogerio Feris, Leonid Karlinsky.
[Paper] [Code] [HF Dataset]
🗂️🤖 VisMin: Visual Minimal-Change Understanding. (VisMin). [NeurIPS, 2024].
Rabiul Awal, Saba Ahmadi, Le Zhang, Aishwarya Agrawal.
[Paper] [HF Dataset]
🗂️ BiVLC: Extending Vision-Language Compositionality Evaluation with Text-to-Image Retrieval. (BiVLC). [NeurIPS, 2024].
Imanol Miranda, Ander Salaberria, Eneko Agirre, Gorka Azkune.
[Paper] [Code] [HF Dataset]
🤖 Preserving Multi-Modal Capabilities of Pre-trained VLMs for Improving Vision-Linguistic Compositionality. (FSC-CLIP). [EMNLP, 2024].
Youngtaek Oh, Jae Won Cho, Dong-Jin Kim, In So Kweon, Junmo Kim.
[Paper] [Code]
🤖 Distilling Knowledge from Text-to-Image Generative Models Improves Visio-Linguistic Reasoning in CLIP. (SDS-CLIP). [EMNLP, 2024].
Samyadeep Basu, Shell Xu Hu, Maziar Sanjabi, Daniela Massiceti, Soheil Feizi.
[Paper]
🤖 Natural Language Inference Improves Compositionality in Vision-Language Models. (CECE). [arXiv, 2024].
Paola Cascante-Bonilla, Yu Hou, Yang Trista Cao, Hal Daumé III, Rachel Rudinger.
[Paper] [Code]
🤖 Locality Alignment Improves Vision-Language Models. [arXiv, 2024].
Ian Covert, Tony Sun, James Zou, Tatsunori Hashimoto.
[Paper] [Code]
🗂️ VL-GLUE: A Suite of Fundamental yet Challenging Visuo-Linguistic Reasoning Tasks. (VL-GLUE). [arXiv, 2024].
Shailaja Keyur Sampat, Mutsumi Nakamura, Shankar Kailas, Kartik Aggarwal, Mandy Zhou, Yezhou Yang, Chitta Baral.
[Paper] [Code]
🗂️ MMCOMPOSITION: Revisiting the Compositionality of Pre-trained Vision-Language Models. (MMComposition). [arXiv, 2024].
Hang Hua, Yunlong Tang, Ziyun Zeng, Liangliang Cao, Zhengyuan Yang, Hangfeng He, Chenliang Xu, Jiebo Luo.
[Paper] [Code]
🤖 Compositional Entailment Learning for Hyperbolic Vision-Language Models. (HyCoCLIP). [arXiv, 2024].
Avik Pal, Max van Spengler, Guido Maria D'Amely di Melendugno, Alessandro Flaborea, Fabio Galasso, Pascal Mettes.
[Paper]
🗂️🤖 The Hard Positive Truth about Vision-Language Compositionality. (HP+HN). [ECCV, 2024].
Amita Kamath, Cheng-Yu Hsieh, Kai-Wei Chang, Ranjay Krishna.
[Paper] [Code]
🗂️ Mismatch Quest: Visual and Textual Feedback for Image-Text Misalignment. (MismatchQuest). [ECCV, 2024].
Brian Gordon, Yonatan Bitton, Yonatan Shafir, Roopal Garg, Xi Chen, Dani Lischinski, Daniel Cohen-Or, Idan Szpektor.
[Paper] [Code] [HF Dataset]
⚖️ Removing Distributional Discrepancies in Captions Improves Image-Text Alignment. (LLaVA-score). [ECCV, 2024].
Yuheng Li, Haotian Liu, Mu Cai, Yijun Li, Eli Shechtman, Zhe Lin, Yong Jae Lee, Krishna Kumar Singh.
[Paper] [Code]
🗂️🤖 Evaluating Text-to-Visual Generation with Image-to-Text Generation. (VQAScore). [ECCV, 2024].
Zhiqiu Lin, Deepak Pathak, Baiqi Li, Jiayao Li, Xide Xia, Graham Neubig, Pengchuan Zhang, Deva Ramanan.
[Paper] [Code] [Model] [HF Dataset]
🗂️⚖️ FINEMATCH: Aspect-based Fine-grained Image and Text Mismatch Detection and Correction. (FINEMATCH). [ECCV, 2024].
Hang Hua, Jing Shi, Kushal Kafle, Simon Jenni, Daoan Zhang, John Collomosse, Scott Cohen, Jiebo Luo.
[Paper] [Code]
🗂️ Detect, Describe, Discriminate: Moving Beyond VQA for MLLM Evaluation. (D3). [ECCVW, 2024].
Manu Gaur, Darshan Singh S, Makarand Tapaswi.
[Paper] [Code]
🗂️ ColorSwap: A Color and Word Order Dataset for Multimodal Evaluation. (ColorSwap). [ACL Findings, 2024].
Jirayu Burapacheep, Ishan Gaur, Agam Bhatia, Tristan Thrush.
[Paper] [Code] [HF Dataset]
⚖️ An Examination of the Compositionality of Large Generative Vision-Language Models. (SADE). [NAACL, 2024].
Teli Ma, Rong Li, Junwei Liang.
[Paper] [Code]
⚖️ Diagnosing the Compositional Knowledge of Vision Language Models from a Game-Theoretic View. [ICML, 2024].
Jin Wang, Shichao Dong, Yapeng Zhu, Kelu Yao, Weidong Zhao, Chao Li, Ping Luo.
[Paper] [Code]
🤖 Revisiting the Role of Language Priors in Vision-Language Models. (VisualGPTScore). [ICML, 2024].
Zhiqiu Lin, Xinyue Chen, Deepak Pathak, Pengchuan Zhang, Deva Ramanan.
[Paper] [Code]
⚖️ Exploring the Spectrum of Visio-Linguistic Compositionality and Recognition. [CVPRW, 2024].
Youngtaek Oh, Pyunghwan Ahn, Jinhyung Kim, Gwangmo Song, Soonyoung Lee, In So Kweon, Junmo Kim.
[Paper] [Code]
🗂️🤖 Eyes Wide Shut? Exploring the Visual Shortcomings of Multimodal LLMs. (MMVP). [CVPR, 2024].
Shengbang Tong, Zhuang Liu, Yuexiang Zhai, Yi Ma, Yann LeCun, Saining Xie.
[Paper] [Code] [HF Dataset]
🤖 Compositional Chain-of-Thought Prompting for Large Multimodal Models. (CCoT). [CVPR, 2024].
Chancharik Mitra, Brandon Huang, Trevor Darrell, Roei Herzig.
[Paper] [Code]
🗂️ A Picture is Worth More Than 77 Text Tokens: Evaluating CLIP-Style Models on Dense Captions. (DCI). [CVPR, 2024].
Jack Urbanek, Florian Bordes, Pietro Astolfi, Mary Williamson, Vasu Sharma, Adriana Romero-Soriano.
[Paper] [Code]
🗂️🤖 Synthesize, Diagnose, and Optimize: Towards Fine-Grained Vision-Language Understanding. (SPEC). [CVPR, 2024].
Wujian Peng, Sicheng Xie, Zuyao You, Shiyi Lan, Zuxuan Wu.
[Paper] [Code] [HF Dataset]
🤖 Iterated Learning Improves Compositionality in Large Vision-Language Models. (IL-CLIP). [CVPR, 2024].
Chenhao Zheng, Jieyu Zhang, Aniruddha Kembhavi, Ranjay Krishna.
[Paper]
🤖 Contrasting Intra-Modal and Ranking Cross-Modal Hard Negatives to Enhance Visio-Linguistic Compositional Understanding. (CE-CLIP). [CVPR, 2024].
Le Zhang, Rabiul Awal, Aishwarya Agrawal.
[Paper] [Code]
🤖 MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training. (MobileCLIP). [CVPR, 2024].
Pavan Kumar Anasosalu Vasu, Hadi Pouransari, Fartash Faghri, Raviteja Vemulapalli, Oncel Tuzel.
[Paper] [Code] [HF Dataset]
🤖 Efficient Vision-Language Pre-training by Cluster Masking. [CVPR, 2024].
Zihao Wei, Zixuan Pan, Andrew Owens.
[Paper] [Code]
🤖 Building Vision-Language Models on Solid Foundations with Masked Distillation. (SF-CLIP). [CVPR, 2024].
Sepehr Sameni, Kushal Kafle, Hao Tan, Simon Jenni.
[Paper]
⚖️ Probing Conceptual Understanding of Large Visual-Language Models. (UnderstandingVisualTextModels). [CVPRW, 2024].
Madeline Schiappa, Raiyaan Abdullah, Shehreen Azad, Jared Claypoole, Michael Cogswell, Ajay Divakaran, Yogesh Rawat.
[Paper] [Code]
🗂️ EVil-Probe - a Composite Benchmark for Extensive Visio-Linguistic Probing. (Evil-Probe). [LREC, 2024].
Marie Bexte, Andrea Horbach, Torsten Zesch.
[Paper] [Code]
🗂️🤖 CounterCurate: Enhancing Physical and Semantic Visio-Linguistic Compositional Reasoning via Counterfactual Examples. (CounterCurate). [ACL Findings, 2024].
Jianrui Zhang, Mu Cai, Tengyang Xie, Yong Jae Lee.
[Paper] [Code]
🤖 ContextBLIP: Doubly Contextual Alignment for Contrastive Image Retrieval from Linguistically Complex Descriptions. (ContextBLIP). [ACL Findings, 2024].
Honglin Lin, Siyu Li, Guoshun Nan, Chaoyue Tang, Xueting Wang, Jingxin Xu, Rong Yankai, Zhili Zhou, Yutong Gao, Qimei Cui, Xiaofeng Tao.
[Paper] [Code]
🗂️ Do Vision-Language Models Understand Compound Nouns? (Compun). [NAACL, 2024].
Sonal Kumar, Sreyan Ghosh, S Sakshi, Utkarsh Tyagi, Dinesh Manocha.
[Paper] [Code]
🤖 ComCLIP: Training-Free Compositional Image and Text Matching. (ComCLIP). [NAACL, 2024].
Kenan Jiang, Xuehai He, Ruize Xu, Xin Eric Wang.
[Paper] [Code]
⚖️ Lost in Space: Probing Fine-grained Spatial Understanding in Vision and Language Resamplers. (probing-resamplers). [NAACL, 2024].
Georgios Pantazopoulos, Alessandro Suglia, Oliver Lemon, Arash Eshghi.
[Paper] [Code]
⚖️ How and where does CLIP process negation? [ALVR, 2024].
Vincent Quantmeyer, Pablo Mosteiro, Albert Gatt.
[Paper]
🗂️ Rainbow - A Benchmark for Systematic Testing of How Sensitive Visio-Linguistic Models are to Color Naming. (Rainbow). [EACL, 2024].
Marie Bexte, Andrea Horbach, Torsten Zesch.
[Paper] [Code]
🤖 Fine-tuning CLIP Text Encoders with Two-step Paraphrasing. (ParaCLIP). [EACL Findings, 2024].
Hyunjae Kim, Seunghyun Yoon, Trung Bui, Handong Zhao, Quan Tran, Franck Dernoncourt, Jaewoo Kang.
[Paper] [Code]
🤖 Diffusion Feedback Helps CLIP See Better. (DIVA). [arXiv, 2024].
Wenxuan Wang, Quan Sun, Fan Zhang, Yepeng Tang, Jing Liu, Xinlong Wang.
[Paper] [Code]
🗂️ SUGARCREPE++ Dataset: Vision-Language Model Sensitivity to Semantic and Lexical Alterations. (SUGARCREPE++). [arXiv, 2024].
Sri Harsha Dumpala, Aman Jaiswal, Chandramouli Sastry, Evangelos Milios, Sageev Oore, Hassan Sajjad.
[Paper] [Code]
🗂️ ColorFoil: Investigating Color Blindness in Large Vision and Language Models. (ColorFoil). [arXiv, 2024].
Ahnaf Mozib Samin, M. Firoz Ahmed, Md. Mushtaq Shahriyar Rafee.
[Paper] [Code]
🗂️ VISLA Benchmark: Evaluating Embedding Sensitivity to Semantic and Lexical Alterations. (VISLA). [arXiv, 2024].
Sri Harsha Dumpala, Aman Jaiswal, Chandramouli Sastry, Evangelos Milios, Sageev Oore, Hassan Sajjad.
[Paper] [Code]
🗂️🤖 Learn "No" to Say "Yes" Better: Improving Vision-Language Models via Negations. (CoN-CLIP). [arXiv, 2024].
Jaisidh Singh, Ishaan Shrivastava, Mayank Vatsa, Richa Singh, Aparna Bharati.
[Paper] [Code]
🤖 Enhancing Conceptual Understanding in Multimodal Contrastive Learning through Hard Negative Samples. [arXiv, 2024].
Philipp J. Rösch, Norbert Oswald, Michaela Geierhos, Jindřich Libovický.
[Paper]
🤖 CLoVe: Encoding Compositional Language in Contrastive Vision-Language Models. (CLoVe). [arXiv, 2024].
Santiago Castro, Amir Ziai, Avneesh Saluja, Zhuoning Yuan, Rada Mihalcea.
[Paper] [Code]
🤖 Prompting Large Vision-Language Models for Compositional Reasoning. (KeyComp). [arXiv, 2024].
Timothy Ossowski, Ming Jiang, Junjie Hu.
[Paper] [Code]
🤖 FiGCLIP: Fine-Grained CLIP Adaptation via Densely Annotated Videos. (FiG-CLIP). [arXiv, 2024].
Darshan Singh S, Zeeshan Khan, Makarand Tapaswi.
[Paper] [Code]
🤖 Structure-CLIP: Towards Scene Graph Knowledge to Enhance Multi-modal Structured Representations. (Structure-CLIP). [AAAI, 2024].
Yufeng Huang, Jiji Tang, Zhuo Chen, Rongsheng Zhang, Xinfeng Zhang, Weijie Chen, Zeng Zhao, Zhou Zhao, Tangjie Lv, Zhipeng Hu, Wen Zhang.
[Paper] [Code]
🤖 Enhancing Multimodal Compositional Reasoning of Visual Language Models with Generative Negative Mining. (GNM-CLIP). [WACV, 2024].
Ugur Sahin, Hang Li, Qadeer Khan, Daniel Cremers, Volker Tresp.
[Paper] [Code]
🗂️ COCO-Counterfactuals: Automatically Constructed Counterfactual Examples for Image-Text Pairs. (COCO-Counterfactuals). [NeurIPS D&B, 2023].
Tiep Le, Vasudev Lal, Phillip Howard.
[Paper] [Code] [HF Dataset]
🗂️ SugarCrepe: Fixing Hackable Benchmarks for Vision-Language Compositionality. (SugarCrepe). [NeurIPS D&B, 2023].
Cheng-Yu Hsieh, Jieyu Zhang, Zixian Ma, Aniruddha Kembhavi, Ranjay Krishna.
[Paper] [Code]
🗂️ PUG: Photorealistic and Semantically Controllable Synthetic Data for Representation Learning. (PUG). [NeurIPS, 2023].
Florian Bordes, Shashank Shekhar, Mark Ibrahim, Diane Bouchacourt, Pascal Vincent, Ari S. Morcos.
[Paper] [Code]
🗂️ COLA: A Benchmark for Compositional Text-to-image Retrieval. (COLA). [NeurIPS, 2023].
Arijit Ray, Filip Radenovic, Abhimanyu Dubey, Bryan A. Plummer, Ranjay Krishna, Kate Saenko.
[Paper] [Code]
🤖 Dense and Aligned Captions (DAC) Promote Compositional Reasoning in VL Models. (DAC). [NeurIPS, 2023].
Sivan Doveh, Assaf Arbelle, Sivan Harary, Roei Herzig, Donghyun Kim, Paola Cascante-bonilla, Amit Alfassy, Rameswar Panda, Raja Giryes, Rogerio Feris, Shimon Ullman, Leonid Karlinsky.
[Paper] [Code]
🤖 Image Captioners Are Scalable Vision Learners Too. (CapPa). [NeurIPS, 2023].
Michael Tschannen, Manoj Kumar, Andreas Steiner, Xiaohua Zhai, Neil Houlsby, Lucas Beyer.
[Paper]
🗂️🤖 When and why vision-language models behave like bags-of-words, and what to do about it? (ARO, NegCLIP). [ICLR, 2023].
Mert Yuksekgonul, Federico Bianchi, Pratyusha Kalluri, Dan Jurafsky, James Zou.
[Paper] [Code]
🗂️ What's "up" with vision-language models? Investigating their struggle with spatial reasoning. (WhatsUp). [EMNLP, 2023].
Amita Kamath, Jack Hessel, Kai-Wei Chang.
[Paper] [Code]
🗂️⚖️ Text encoders bottleneck compositionality in contrastive vision-language models. (ControlledLMCaps). [EMNLP, 2023].
Amita Kamath, Jack Hessel, Kai-Wei Chang.
[Paper] [Code]
🗂️ The BLA Benchmark: Investigating Basic Language Abilities of Pre-Trained Multimodal Models. (BLA). [EMNLP, 2023].
Xinyi Chen, Raquel Fernández, Sandro Pezzelle.
[Paper] [Code]
🗂️ When are Lemons Purple? The Concept Association Bias of Vision-Language Models. (CAB). [EMNLP, 2023].
Yutaro Yamada, Yingtian Tang, Yoyo Zhang, Ilker Yildirim.
[Paper]
🤖 Coarse-to-Fine Contrastive Learning in Image-Text-Graph Space for Improved Vision-Language Compositionality. (MosaiCLIP). [EMNLP, 2023].
Harman Singh, Pengchuan Zhang, Qifan Wang, Mengjiao Wang, Wenhan Xiong, Jingfei Du, Yu Chen.
[Paper]
🤖 Incorporating Structured Representations into Pretrained Vision & Language Models Using Scene Graphs. (SGVL). [EMNLP, 2023].
Roei Herzig, Alon Mendelson, Leonid Karlinsky, Assaf Arbelle, Rogerio Feris, Trevor Darrell, Amir Globerson.
[Paper] [Code]
🗂️ Visual Spatial Reasoning. (VSR). [TACL, 2023].
Fangyu Liu, Guy Emerson, Nigel Collier.
[Paper] [Code]
🗂️🤖 Equivariant Similarity for Vision-Language Foundation Models. (EqBen). [ICCV, 2023].
Tan Wang, Kevin Lin, Linjie Li, Chung-Ching Lin, Zhengyuan Yang, Hanwang Zhang, Zicheng Liu, Lijuan Wang.
[Paper] [Code]
🗂️ Teaching CLIP to Count to Ten. (CountBench). [ICCV, 2023].
Roni Paiss, Ariel Ephrat, Omer Tov, Shiran Zada, Inbar Mosseri, Michal Irani, Tali Dekel.
[Paper] [Code]
🤖 Going Beyond Nouns With Vision & Language Models Using Synthetic Data. (SyViC). [ICCV, 2023].
Paola Cascante-Bonilla, Khaled Shehada, James Seale Smith, Sivan Doveh, Donghyun Kim, Rameswar Panda, Gül Varol, Aude Oliva, Vicente Ordonez, Rogerio Feris, Leonid Karlinsky.
[Paper] [Code]
⚖️ Measuring Progress in Fine-grained Vision-and-Language Understanding. [ACL, 2023].
Emanuele Bugliarello, Laurent Sartran, Aishwarya Agrawal, Lisa Anne Hendricks, Aida Nematzadeh.
[Paper] [Code]
🗂️ CREPE: Can Vision-Language Foundation Models Reason Compositionally? (CREPE). [CVPR, 2023].
Zixian Ma, Jerry Hong, Mustafa Omer Gul, Mona Gandhi, Irena Gao, Ranjay Krishna.
[Paper] [Code]
🤖 Teaching Structured Vision&Language Concepts to Vision&Language Models. (TSVLC). [CVPR, 2023].
Sivan Doveh, Assaf Arbelle, Sivan Harary, Rameswar Panda, Roei Herzig, Eli Schwartz, Donghyun Kim, Raja Giryes, Rogerio Feris, Shimon Ullman, Leonid Karlinsky.
[Paper] [Code]
🗂️ HL Dataset: Visually-grounded Description of Scenes, Actions and Rationales. (HL Dataset). [INLG, 2023].
Michele Cafagna, Kees van Deemter, Albert Gatt.
[Paper] [Code]
🗂️ Can Linguistic Knowledge Improve Multimodal Alignment in Vision-Language Pretraining? (SNARE). [arXiv, 2023].
Fei Wang, Liang Ding, Jun Rao, Ye Liu, Li Shen, Changxing Ding.
[Paper] [Code]
🗂️ Do Vision-and-Language Transformers Learn Grounded Predicate-Noun Dependencies? (Predicate-Noun-Dependencies). [EMNLP, 2022].
Mitja Nikolaus, Emmanuelle Salin, Stephane Ayache, Abdellah Fourtassi, Benoit Favre.
[Paper] [Code]
🗂️⚖️ Why is Winoground Hard? Investigating Failures in Visuolinguistic Compositionality. [EMNLP, 2022].
Anuj Diwan, Layne Berry, Eunsol Choi, David Harwath, Kyle Mahowald.
[Paper] [Code]
🗂️ VIPHY: Probing "Visible" Physical Commonsense Knowledge. (ViPhy). [EMNLP Findings, 2022].
Shikhar Singh, Ehsan Qasemi, Muhao Chen.
[Paper] [Code]
🗂️ VL-CheckList: Evaluating Pre-trained Vision-Language Models with Objects, Attributes and Relations. (VL-Checklist). [EMNLP Demo, 2022].
Tiancheng Zhao, Tianqi Zhang, Mingwei Zhu, Haozhan Shen, Kyusong Lee, Xiaopeng Lu, Jianwei Yin.
[Paper] [Code]
🗂️ Winoground: Probing Vision and Language Models for Visio-Linguistic Compositionality. (Winoground). [CVPR, 2022].
Tristan Thrush, Ryan Jiang, Max Bartolo, Amanpreet Singh, Adina Williams, Douwe Kiela, Candace Ross.
[Paper] [HF Dataset]
🗂️ VALSE: A Task-Independent Benchmark for Vision and Language Models Centered on Linguistic Phenomena. (VALSE). [ACL, 2022].
Letitia Parcalabescu, Michele Cafagna, Lilitta Muradjan, Anette Frank, Iacer Calixto, Albert Gatt.
[Paper] [Code]
🗂️ Image Retrieval from Contextual Descriptions. (ImageCoDe). [ACL, 2022].
Benno Krojer, Vaibhav Adlakha, Vibhav Vineet, Yash Goyal, Edoardo Ponti, Siva Reddy.
[Paper] [Code]
🗂️ Probing Image-Language Transformers for Verb Understanding. (SVO Probes). [ACL Findings, 2021].
Lisa Anne Hendricks, Aida Nematzadeh.
[Paper] [Code] [HF Dataset]
🗂️ Seeing past words: Testing the cross-modal capabilities of pretrained V&L models on counting tasks. (Counting-Probe). [MMSR, 2021].
Letitia Parcalabescu, Albert Gatt, Anette Frank, Iacer Calixto.
[Paper] [Code]
🗂️ FOIL it! Find One mismatch between Image and Language caption. (FOIL). [ACL, 2017].
Ravi Shekhar, Sandro Pezzelle, Yauhen Klimovich, Aurelie Herbelot, Moin Nabi, Enver Sangineto, Raffaella Bernardi.
[Paper] [Dataset]
Compositionality in Video-Text Understanding
🗂️ TemporalBench: Benchmarking Fine-grained Temporal Understanding for Multimodal Video Models. (TemporalBench). [arXiv, 2024].
Mu Cai, Reuben Tan, Jianrui Zhang, Bocheng Zou, Kai Zhang, Feng Yao, Fangrui Zhu, Jing Gu, Yiwu Zhong, Yuzhang Shang, Yao Dou, Jaden Park, Jianfeng Gao, Yong Jae Lee, Jianwei Yang.
[Paper] [Code] [HF Dataset]
🗂️ Vinoground: Scrutinizing LMMs over Dense Temporal Reasoning with Short Videos. (Vinoground). [arXiv, 2024].
Jianrui Zhang, Mu Cai, Yong Jae Lee.
[Paper] [Code] [HF Dataset]
🗂️🤖 VideoCon: Robust Video-Language Alignment via Contrast Captions. (VideoCon). [CVPR, 2024].
Hritik Bansal, Yonatan Bitton, Idan Szpektor, Kai-Wei Chang, Aditya Grover.
[Paper] [Code] [Project] [HF Dataset] [HF Model]
🗂️ NAVERO: Unlocking Fine-Grained Semantics for Video-Language Compositionality. (NAVERO). [arXiv, 2024].
Chaofan Tao, Gukyeong Kwon, Varad Gunjal, Hao Yang, Zhaowei Cai, Yonatan Dukler, Ashwin Swaminathan, R. Manmatha, Colin Jon Taylor, Stefano Soatto.
[Paper]
Compositionality in Text-to-Image Generation
🗂️ ConceptMix: A Compositional Image Generation Benchmark with Controllable Difficulty. (ConceptMix). [NeurIPS, 2024].
Xindi Wu, Dingli Yu, Yangsibo Huang, Olga Russakovsky, Sanjeev Arora.
[Paper] [Code]
If you find any errors or would like to add papers, please feel free to contribute by contacting me, posting an issue, or submitting a pull request. Please use the following Markdown format for pull requests, including the <br /> tag:
**Paper Title.** *(Optional Method/Benchmark name or abbreviation).* [Conference/Journal, Year]. <br />
*Authors.* <br />
[[Paper](link)] [[Code](link)] [[HF Dataset](link)]
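As a concrete illustration, a pull request adding the SugarCrepe entry already listed above would look roughly like this (the Paper and Code links are placeholders, not real URLs):
**SugarCrepe: Fixing Hackable Benchmarks for Vision-Language Compositionality.** *(SugarCrepe).* [NeurIPS D&B, 2023]. <br />
*Cheng-Yu Hsieh, Jieyu Zhang, Zixian Ma, Aniruddha Kembhavi, Ranjay Krishna.* <br />
[[Paper](link)] [[Code](link)]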