A modular framework for vision & language multimodal research from Facebook AI Research (FAIR)
InternGPT (iGPT) is an open-source demo platform where you can easily showcase your AI models. It now supports DragGAN, ChatGPT, ImageBind, multimodal chat like GPT-4, SAM, interactive image editing, etc. Try it at igpt.opengvlab.com (an online demo system supporting DragGAN, ChatGPT, ImageBind, and SAM)
Streamline the fine-tuning process for multimodal models: PaliGemma 2, Florence-2, and Qwen2.5-VL
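For orientation, a minimal sketch of what one supervised fine-tuning step for such a model looks like, written here with Hugging Face transformers' PaliGemma classes rather than the listed tool's own API; the checkpoint id, image URL, prompt, and learning rate are illustrative assumptions, not taken from any repository above.

```python
# Minimal sketch: one fine-tuning step for PaliGemma with plain transformers.
# Assumptions: checkpoint id, image URL, prompt, and lr are placeholders.
import requests
import torch
from PIL import Image
from transformers import PaliGemmaForConditionalGeneration, PaliGemmaProcessor

model_id = "google/paligemma2-3b-pt-224"  # assumed checkpoint name
processor = PaliGemmaProcessor.from_pretrained(model_id)
model = PaliGemmaForConditionalGeneration.from_pretrained(model_id)

image = Image.open(
    requests.get("https://example.com/cat.jpg", stream=True).raw  # placeholder image
)
# `suffix` supplies the target text; the processor builds the labels from it.
inputs = processor(
    text="answer en What animal is this?",
    images=image,
    suffix="cat",
    return_tensors="pt",
)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
loss = model(**inputs).loss  # cross-entropy over the suffix tokens only
loss.backward()
optimizer.step()
```

Tools like the one listed above wrap this loop (data loading, LoRA, logging) so you mostly supply a dataset and a config.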
Open-source evaluation toolkit for large multi-modality models (LMMs), supporting 220+ LMMs and 80+ benchmarks
The implementation of "Prismer: A Vision-Language Model with Multi-Task Experts".
Oscar (Object-Semantics Aligned Pre-training) and VinVL vision-language pre-training models
An efficient PyTorch implementation of the winning entry of the 2017 VQA Challenge.
Visual Question Answering in PyTorch
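As a concrete reference for what these VQA implementations compute, here is a minimal inference sketch using Hugging Face transformers' ViLT VQA checkpoint rather than any specific repository above; the image URL and question are placeholders.

```python
# Minimal sketch: VQA inference with a public ViLT checkpoint.
# The task: given an image and a natural-language question, classify over
# a fixed answer vocabulary and return the highest-scoring answer.
import requests
from PIL import Image
from transformers import ViltForQuestionAnswering, ViltProcessor

processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
model = ViltForQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa")

image = Image.open(
    requests.get("https://example.com/dog.jpg", stream=True).raw  # placeholder image
)
question = "What animal is in the picture?"

encoding = processor(image, question, return_tensors="pt")
logits = model(**encoding).logits  # one logit per answer in the vocabulary
answer = model.config.id2label[logits.argmax(-1).item()]
print(answer)
```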
[CVPR 2021 Best Student Paper Honorable Mention, Oral] Official PyTorch code for ClipBERT, an efficient framework for end-to-end learning on image-text and video-text tasks.
Chatbot Arena meets multi-modality! Multi-Modality Arena allows you to benchmark vision-language models side-by-side while providing images as inputs. Supports MiniGPT-4, LLaMA-Adapter V2, LLaVA, BLIP-2, and many more!
Implementation for the paper "Compositional Attention Networks for Machine Reasoning" (Hudson and Manning, ICLR 2018)
PyTorch implementation for the Neuro-Symbolic Concept Learner (NS-CL).
A lightweight, scalable, and general framework for visual question answering research
[ICLR'24] Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Strong baseline for visual question answering
OmniFusion: a multimodal model for communicating with text and images
mPLUG-2: A Modularized Multi-modal Foundation Model Across Text, Image and Video (ICML 2023)
[ICLR'24] Democratizing Fine-grained Visual Recognition with Large Language Models