Code for 'Completeness Modeling and Context Separation for Weakly Supervised Temporal Action Localization' (CVPR 2019).
Paper and Supplementary.
- Python 3.5
- CUDA 9.0
- PyTorch 0.4
- Install dependencies:
pip3 install -r requirements.txt
- Install the MATLAB Engine API for Python (matlab.engine).
- Prepare THUMOS14 and ActivityNet datasets.
We employ UntrimmedNet or I3D features in the paper.
We recommend re-extracting the features yourself using the official UntrimmedNet and I3D repositories.
Alternatively, use the features pre-extracted by us (warning: the archives are large and split into multiple zip files):
- Download the features:
- THUMOS14 features (original video fps is kept)
- ActivityNet features (input videos are 25 fps)
- Join the split zip files by
zip --fix {} --out {}
and unzip the result (see the example after this list).
- Put the extracted folder into the parent folder of this repo (or change the paths in the config file).
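For example, to join and extract the THUMOS14 parts (the filenames here are illustrative; substitute the actual names of the downloaded files):

zip --fix thumos14_features.zip --out thumos14_features_joined.zip
unzip thumos14_features_joined.zip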
Other features can also be used.
Static clip masks are used for hard negative mining. They are included in the downloaded features.
If you want to generate the masks yourself, please refer to tools/get_flow_intensity_anet.py; a rough sketch of the idea follows.
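A minimal sketch of static-clip detection from optical flow, assuming flow fields are available as a numpy array; the array shape and threshold below are illustrative, and the actual implementation in tools/get_flow_intensity_anet.py may differ:

```python
import numpy as np

def static_clip_mask(flow, threshold=1.0):
    """Mark clips whose mean optical-flow magnitude is below `threshold`.

    flow: (num_clips, H, W, 2) array of per-clip flow fields (assumption).
    Returns a boolean (num_clips,) array, True where a clip is static.
    """
    magnitude = np.linalg.norm(flow, axis=-1)   # per-pixel flow magnitude
    intensity = magnitude.mean(axis=(1, 2))     # mean motion per clip
    return intensity < threshold                # static = low motion
```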
URLs of some ActivityNet videos are no longer valid. Check their availability and generate anet_missing_videos.npy accordingly (a minimal sketch follows).
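A hypothetical way to produce anet_missing_videos.npy; the exact format expected by the data loaders may differ, so treat this as an illustration only:

```python
import numpy as np

# Illustrative IDs of videos whose URLs were found to be dead.
missing_ids = ['v_abc123', 'v_def456']
np.save('anet_missing_videos.npy', np.array(missing_ids))
```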
- Train models with weak supervision (skip this if you use our trained models):
python3 train.py --config-file {} --train-subset-name {} --test-subset-name {} --test-log
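For example, using setting 2 from the table below (THUMOS14 with I3D features):

python3 train.py --config-file configs/thumos-I3D.json --train-subset-name val --test-subset-name test --test-log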
- Test and save the class activation sequences (CAS):
python3 test.py --config-file {} --train-subset-name {} --test-subset-name {} --no-include-train
- Action localization using the CAS:
python3 detect.py --config-file {} --train-subset-name {} --test-subset-name {} --no-include-train
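For example, with the same THUMOS14 I3D setting (setting 2 in the table below):

python3 test.py --config-file configs/thumos-I3D.json --train-subset-name val --test-subset-name test --no-include-train
python3 detect.py --config-file configs/thumos-I3D.json --train-subset-name val --test-subset-name test --no-include-train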
For THUMOS14, predictions are saved in output/predictions and the final performance is saved in an .npz file in output.
For ActivityNet, predictions are saved in output/predictions and the final performance can be obtained via the official dataset evaluation API.
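A quick way to inspect the saved THUMOS14 results; 'output/results.npz' is a hypothetical name, so substitute whichever .npz file the run actually produced:

```python
import numpy as np

# List the arrays stored in the saved performance file.
results = np.load('output/results.npz')
for key in results.files:
    print(key, results[key])
```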
Our method is evaluated on THUMOS14 and ActivityNet with I3D or UNT features. Experiment settings and their arguments are listed below.
| Setting | config-file | train-subset-name | test-subset-name |
| --- | --- | --- | --- |
| 1 | configs/thumos-UNT.json | val | test |
| 2 | configs/thumos-I3D.json | val | test |
| 3 | configs/anet12-local-UNT.json | train | val |
| 4 | configs/anet12-local-I3D.json | train | val |
| 5 | configs/anet13-local-I3D.json | train | val |
| 6 | configs/anet13-server-I3D.json | train | test |
Our trained models are provided in this folder. To use them, run test.py and detect.py with the config files in that folder.
@InProceedings{Liu_2019_CVPR,
  author    = {Liu, Daochang and Jiang, Tingting and Wang, Yizhou},
  title     = {Completeness Modeling and Context Separation for Weakly Supervised Temporal Action Localization},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2019}
}
This project is released under the MIT license.