diff --git a/README.md b/README.md
index 8f1531f..c7e06e6 100644
--- a/README.md
+++ b/README.md
@@ -3,12 +3,12 @@
 ## Introduction
 This is an implementation repository for our work in EMNLP 2021.
-**Relation-aware Video Reading Comprehension for Temporal Language Grounding**.
+**Relation-aware Video Reading Comprehension for Temporal Language Grounding**. [arxiv paper](https://arxiv.org/abs/2110.05717)
 ![](https://github.com/Huntersxsx/RaNet/blob/master/img/framework.png)

 ## Note:
-Our pre-trained models are available at [SJTU jbox](https://jbox.sjtu.edu.cn/l/215Z2T) or [baiduyun, passcode:xmc0](https://pan.baidu.com/s/1CRojAlDURJ57tUprdNbfFg) or [Google Drive](https://drive.google.com/drive/folders/1AFdgfxFCA9ji36HaveL2dQ7wr7OjlHjb?usp=sharing). We will release our code upon the release of our paper.
+Our pre-trained models are available at [SJTU jbox](https://jbox.sjtu.edu.cn/l/215Z2T) or [baiduyun, passcode:xmc0](https://pan.baidu.com/s/1CRojAlDURJ57tUprdNbfFg) or [Google Drive](https://drive.google.com/drive/folders/1AFdgfxFCA9ji36HaveL2dQ7wr7OjlHjb?usp=sharing). We will release our code soon.
@@ -124,6 +124,8 @@ Use the following commands for testing:
 We greatly appreciate the [2D-Tan repository](https://github.com/microsoft/2D-TAN), [gtad repository](https://github.com/frostinassiky/gtad) and [CCNet repository](https://github.com/speedinghzl/CCNet). Please remember to cite the papers:
 ```
+
+
 @InProceedings{2DTAN_2020_AAAI,
 author = {Zhang, Songyang and Peng, Houwen and Fu, Jianlong and Luo, Jiebo},
 title = {Learning 2D Temporal Adjacent Networks forMoment Localization with Natural Language},