I'm Manjin Kim, a Ph.D. student at Pohang University of Science and Technology (POSTECH), South Korea. I am a member of the computer vision lab at POSTECH, under the supervision of Professor Minsu Cho. My research interests are in learning video representations and their applications.
- Video representation learning
- Motion feature learning
- Multi-modal learning
- Learning correlation structures for vision transformers [project page] [code]
  Manjin Kim, Paul Hongsuck Seo*, Cordelia Schmid, and Minsu Cho* (* corresponding authors), CVPR 2024.
- Future transformer for long-term action anticipation [project page] [code]
  Dayoung Gong, Joonseok Lee, Manjin Kim, Seongjong Ha, and Minsu Cho, CVPR 2022.
- Relational self-attention: what's missing in attention for video understanding [project page] [code]
  Manjin Kim*, Heeseung Kwon*, Chunyu Wang, Suha Kwak, and Minsu Cho (* equal contribution), NeurIPS 2021.
- Learning self-similarity in space and time as generalized motion for video action recognition [project page] [code]
  Heeseung Kwon*, Manjin Kim*, Suha Kwak, and Minsu Cho (* equal contribution), ICCV 2021.
- MotionSqueeze: neural motion feature learning for video understanding [project page] [code]
  Heeseung Kwon, Manjin Kim, Suha Kwak, and Minsu Cho, ECCV 2020.
- Student Researcher, Google Research, France (Jul. 2022 - Jan. 2023)
- Developed a multimodal long-form video captioning system.
- Host: Paul Hongsuck Seo
- Research Intern, Microsoft Research Asia (MSRA), remote (Dec. 2020 - Jun. 2021)
- Developed a dynamic neural feature transform method called Relational Self-Attention.
- Mentor: Chunyu Wang
- Research Intern, LG CNS, Korea (Jun. 2018 - Aug. 2018)
- Developed a video data augmentation system using CycleGAN.
- CV: [CV]
- E-mail: mandos@postech.ac.kr
- Google Scholar: https://scholar.google.com/citations?user=kqddtlwAAAAJ&hl=en&oi=ao