
G-TAD

This repo holds the code for the paper "G-TAD: Sub-Graph Localization for Temporal Action Detection", accepted at CVPR 2020.

G-TAD Overview

Update

30 Mar 2020: THUMOS14 features are available! Google Drive Link.

15 Apr 2020: THUMOS14 code is published! I updated the post-processing code, so the experimental results are slightly better than those in the original paper!

Overview

Temporal action detection is a fundamental yet challenging task in video understanding. Video context is a critical cue to effectively detect actions, but current works mainly focus on temporal context, while neglecting semantic context as well as other important context properties. In this work, we propose a graph convolutional network (GCN) model to adaptively incorporate multi-level semantic context into video features and cast temporal action detection as a sub-graph localization problem. Specifically, we formulate video snippets as graph nodes, snippet-snippet correlations as edges, and actions associated with context as target sub-graphs. With graph convolution as the basic operation, we design a GCN block called GCNeXt, which learns the features of each node by aggregating its context and dynamically updates the edges in the graph. To localize each sub-graph, we also design an SGAlign layer to embed each sub-graph into the Euclidean space. Extensive experiments show that G-TAD is capable of finding effective video context without extra supervision and achieves state-of-the-art performance on two detection benchmarks. On ActivityNet-1.3, we obtain an average mAP of 34.09%; on THUMOS14, we obtain 40.16% mAP@0.5, beating all other one-stage methods.

Detail, Video, arXiv.
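
To make the GCNeXt idea concrete, below is a minimal PyTorch sketch of a graph convolution whose semantic edges are re-computed dynamically each forward pass. The class name DynamicGraphConv, the k-NN edge construction, and the max aggregation are illustrative assumptions for exposition, not the actual code in gtad_lib.

import torch
import torch.nn as nn

# Illustrative sketch of the dynamic-edge idea behind GCNeXt (NOT the repo's
# actual implementation). Nodes are video snippets; semantic edges are
# re-computed every forward pass as k-nearest neighbors in feature space.
class DynamicGraphConv(nn.Module):
    def __init__(self, in_dim, out_dim, k=3):
        super().__init__()
        self.k = k
        # Edge MLP: maps concatenated [x_i, x_j - x_i] to the output dimension.
        self.mlp = nn.Conv2d(2 * in_dim, out_dim, kernel_size=1)

    def forward(self, x):  # x: (B, C, T) snippet features
        B, C, T = x.shape
        # Pairwise feature distances -> k semantic neighbors per snippet.
        dist = torch.cdist(x.transpose(1, 2), x.transpose(1, 2))     # (B, T, T)
        idx = dist.topk(self.k + 1, largest=False).indices[..., 1:]  # drop self
        idx = idx.unsqueeze(1).expand(B, C, T, self.k)
        # Gather neighbor features: neighbors[b, c, i, n] = x[b, c, idx[b, i, n]].
        neighbors = x.unsqueeze(2).expand(B, C, T, T).gather(3, idx)
        center = x.unsqueeze(-1).expand(B, C, T, self.k)
        # Aggregate edge features [x_i, x_j - x_i] over the k neighbors.
        edge_feat = torch.cat([center, neighbors - center], dim=1)   # (B, 2C, T, k)
        return self.mlp(edge_feat).max(dim=-1).values                # (B, out, T)

In the full model, such a block would be stacked with a fixed temporal-neighbor branch so that both temporal and semantic context are aggregated.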

Dependencies

  • Python == 3.7
  • PyTorch == 1.1.0 or 1.3.0
  • CUDA == 10.0.130
  • cuDNN == 7.5.1_0

Installation

Based on the idea of RoI Align from Mask R-CNN, we developed the SGAlign layer in our implementation. You have to compile a short CUDA kernel to run Algorithm 1 in our paper.

  1. Create the conda environment
    conda env create -f env.yml
  2. Install Align1D 2.2.0
    cd gtad_lib
    python setup.py install
  3. Test Align1D 2.2.0
    python align.py
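
Once compiled, the op can also be smoke-tested on random data. The snippet below is a hypothetical usage sketch: the import path, layer name, and box layout are assumptions patterned on RoI Align, and the align.py test in step 3 remains the authoritative check.

import torch
# Hypothetical smoke test for the compiled extension -- the import path,
# layer name, and box layout are ASSUMPTIONS patterned on RoI Align;
# gtad_lib/align.py (step 3 above) is the authoritative test in this repo.
from gtad_lib.align import Align1DLayer  # assumed import path

features = torch.randn(2, 256, 100).cuda()   # (batch, channels, temporal len)
# Candidate spans as (batch_index, start, end) on the temporal axis.
boxes = torch.tensor([[0., 10., 40.],
                      [1., 25., 90.]]).cuda()
align = Align1DLayer(16)                     # resample every span to 16 bins
out = align(features, boxes)                 # expected: (num_boxes, 256, 16)
print(out.shape)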

Code Architecture

gtad                        # this repo
├── data                    # feature and label
├── evaluation              # evaluation code from the official API
├── gtad_lib                # gtad library
└── ...

Training and evaluation

After downloading the dataset and setting up the environment, you can start with the following scripts.

python gtad_train.py
python gtad_inference.py 
python gtad_postprocessing.py --mode detect

or

bash gtad_thumos.sh | tee log.txt

Bibtex

arXiv version:

@misc{xu2019gtad,
    title={G-TAD: Sub-Graph Localization for Temporal Action Detection},
    author={Mengmeng Xu and Chen Zhao and David S. Rojas and Ali Thabet and Bernard Ghanem},
    year={2019},
    eprint={1911.11462},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}

Reference

These are very helpful and promising implementations for the temporal action localization task; our implementation borrows ideas from them.

  • BSN: Boundary Sensitive Network for Temporal Action Proposal Generation. Paper Code

  • BMN: Boundary-Matching Network for Temporal Action Proposal Generation. Paper Code (PaddlePaddle) Code (PyTorch)

  • Graph Convolutional Networks for Temporal Action Localization. Paper Code

Contact

mengmeng.xu[at]kaust.edu.sa
