This repository is the official PyTorch implementation of our paper GMMFormer v2: An Uncertainty-aware Framework for Partially Relevant Video Retrieval.
1. Clone this repository:

```shell
git clone https://github.com/huangmozhi9527/GMMFormer_v2.git
cd GMMFormer_v2
```
2. Create a conda environment and install the dependencies:

```shell
conda create -n prvr python=3.9
conda activate prvr
conda install pytorch==1.9.0 cudatoolkit=11.3 -c pytorch -c conda-forge
pip install -r requirements.txt
```
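To confirm the environment installed cleanly before training, a quick check like the following can help (a minimal sketch; the module list is an assumption, since the exact contents of `requirements.txt` are not shown here):

```python
import importlib.util
import sys

# Core packages the repo depends on (torch is pinned in step 2; the rest
# are assumed to come from requirements.txt).
required = ["torch", "numpy"]

missing = [name for name in required if importlib.util.find_spec(name) is None]
if missing:
    sys.exit(f"Missing packages: {missing} -- rerun step 2.")
print("Environment looks OK.")
```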
3. Download the datasets: the features for TVR, ActivityNet Captions, and Charades-STA are kindly provided by the authors of MS-SL.
4. Set `root` and `data_root` in the config files (e.g., `./Configs/tvr.py`).
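For illustration, the edit in `./Configs/tvr.py` might look like the fragment below (only `root` and `data_root` come from the step above; the paths and any surrounding options are hypothetical):

```python
# ./Configs/tvr.py (sketch -- the actual file defines many more options)
root = "/path/to/GMMFormer_v2"        # hypothetical: repository root
data_root = "/path/to/features/tvr"   # hypothetical: where the MS-SL features were downloaded
```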
To train GMMFormer_v2 on TVR:

```shell
cd src
python main.py -d tvr --gpu 0
```
To train GMMFormer_v2 on ActivityNet Captions:

```shell
cd src
python main.py -d act --gpu 0
```
To train GMMFormer_v2 on Charades-STA:

```shell
cd src
python main.py -d cha --gpu 0
```
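The command-line interface used above is along these lines (a sketch of the flag handling, not the repository's actual `main.py`; the real script wires the dataset name to its config and launches training):

```python
import argparse

# Minimal sketch of the CLI used above; the real main.py defines more options.
parser = argparse.ArgumentParser()
parser.add_argument("-d", "--dataset", choices=["tvr", "act", "cha"], required=True,
                    help="which dataset config to load")
parser.add_argument("--gpu", default="0", help="GPU id to train on")

# Simulate `python main.py -d tvr --gpu 0`:
args = parser.parse_args(["-d", "tvr", "--gpu", "0"])
print(args.dataset, args.gpu)  # tvr 0
```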
We provide trained GMMFormer_v2 checkpoints, which can be downloaded from the Baidu disk links below.
Dataset | Checkpoint |
---|---|
TVR | Baidu disk |
ActivityNet Captions | Baidu disk |
Charades-STA | Baidu disk |
For this repository, the expected performance is:
Dataset | R@1 | R@5 | R@10 | R@100 | SumR |
---|---|---|---|---|---|
TVR | 16.2 | 37.6 | 48.8 | 86.4 | 189.1 |
ActivityNet Captions | 8.9 | 27.1 | 40.2 | 78.7 | 154.9 |
Charades-STA | 2.5 | 8.6 | 13.9 | 53.2 | 78.2 |
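SumR is the sum of the four recall columns; small discrepancies (e.g. the TVR row reports 189.1 while its rounded recalls sum to 189.0) come from rounding the individual values. For the ActivityNet Captions row:

```python
# SumR = R@1 + R@5 + R@10 + R@100, using the ActivityNet Captions row above.
recalls = {"R@1": 8.9, "R@5": 27.1, "R@10": 40.2, "R@100": 78.7}
sum_r = round(sum(recalls.values()), 1)
print(sum_r)  # 154.9, matching the table
```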