You can also check my [Google Scholar](https://scholar.google.com/citations?user=WPMcd_sAAAAJ&hl=en) profile. * denotes equal contribution.
## Journal
<b>Ryota Tanaka</b>, Akihide Ozeki, Shugo Kato, Akinobu Lee, [Context and Knowledge Aware Dialogue System and System Combination for Grounded Response Generation](http://www.sciencedirect.com/science/article/pii/S0885230820300036), Computer Speech & Language Journal, vol. 62, July 2020

## International Conference (Refereed)
<b>Ryota Tanaka</b>, Taichi Iki, Kyosuke Nishida, Jun Suzuki, [InstructDoc: A Dataset for Zero-shot Generalization of Visual Document Understanding with Instructions](https://rtanaka-lab.github.io/publications/), Proceedings of the 38th AAAI Conference on Artificial Intelligence (<b>AAAI2024</b>) (acceptance rate xxx/xxx = xxx%)

<b>Ryota Tanaka</b>, Kyosuke Nishida, Kosuke Nishida, Taku Hasegawa, Itsumi Saito, Kuniko Saito, [SlideVQA: A Dataset for Document Visual Question Answering on Multiple Images](https://arxiv.org/abs/2301.04883), Proceedings of the 37th AAAI Conference on Artificial Intelligence (<b>AAAI2023</b>), Oral (acceptance rate 1721/8777 = 19.6%) [[data]](https://github.com/nttmdlab-nlp/SlideVQA)

<b>Ryota Tanaka</b>\*, Kyosuke Nishida\*, Sen Yoshida, [VisualMRC: Machine Reading Comprehension on Document Images](https://arxiv.org/abs/2101.11272), Proceedings of the 35th AAAI Conference on Artificial Intelligence (<b>AAAI2021</b>) (acceptance rate 1388/6993 = 19.8%) [[data]](https://github.com/nttmdlab-nlp/VisualMRC)