-
Peking University & Peng Cheng Laboratory
Shenzhen, China
-
CARE Public
(TIP'2023) Concept-Aware Video Captioning: Describing Videos with Effective Prior Information
-
CLFM Public
(AAAI'2024) Embracing Language Inclusivity and Diversity in CLIP Through Continual Language Learning
-
ZeroNLG Public
(TPAMI'2024) ZeroNLG: Aligning and Autoencoding Domains for Zero-Shot Multimodal and Multilingual Natural Language Generation
-
MultiCapCLIP Public
(ACL'2023) MultiCapCLIP: Auto-Encoding Prompts for Zero-Shot Multilingual Visual Captioning
-
MLLM-MRG Public
Customizing General-Purpose Foundation Models for Medical Report Generation
-
ZeroCap Public
Forked from YoadTew/zero-shot-image-to-text
Implementation of Zero-Shot Image-to-Text Generation for Visual-Semantic Arithmetic
Python Updated Nov 16, 2023
-
The PyTorch code of the AAAI'2021 paper "Non-Autoregressive Coarse-to-Fine Video Captioning".
-
CLIP-Captioner Public
(PRCV'2022) CLIP Meets Video Captioning: Concept-Aware Representation Learning Does Matter
-
CLIP Public
Forked from openai/CLIP
Contrastive Language-Image Pretraining
Jupyter Notebook MIT License Updated Sep 12, 2021
-
standard-readme Public
Forked from RichardLitt/standard-readme
A standard style for README files