This is the official PyTorch implementation of our paper:
Elevating Flow-Guided Video Inpainting with Reference Generation, AAAI 2025
Suhwan Cho, Seoung Wug Oh, Sangyoun Lee, Joon-Young Lee
Link: [arXiv]
You can also find other related papers at awesome-video-inpainting.
Demo video: demo.mp4
Existing video inpainting (VI) approaches face challenges due to the inherent ambiguity between known content propagation and new content generation. To address this, we propose a robust VI framework that integrates a large generative model to decouple these two processes and resolve the ambiguity. To distribute known pixels across frames more reliably, we introduce an advanced pixel propagation protocol named one-shot pulling. Furthermore, we present the HQVI benchmark, a dataset specifically designed to evaluate VI performance in diverse and realistic scenarios.
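For readers unfamiliar with flow-guided propagation, the sketch below illustrates the basic operation such frameworks build on: warping known pixels from a reference frame into the missing region using a backward optical flow, then compositing them with the target frame. This is a minimal, generic PyTorch illustration; the function names (`warp_reference`, `fill_with_reference`) and the single-warp formulation are assumptions for exposition, not the repository's actual one-shot pulling implementation.

```python
import torch
import torch.nn.functional as F

def warp_reference(reference, flow):
    """Warp a reference frame into the target view via backward optical flow.

    reference: (B, C, H, W) frame containing known pixels.
    flow: (B, 2, H, W) backward flow mapping target coords to reference coords.
    """
    b, _, h, w = flow.shape
    # Base sampling grid in pixel coordinates.
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=flow.dtype, device=flow.device),
        torch.arange(w, dtype=flow.dtype, device=flow.device),
        indexing="ij",
    )
    grid_x = xs.unsqueeze(0) + flow[:, 0]
    grid_y = ys.unsqueeze(0) + flow[:, 1]
    # Normalize coordinates to [-1, 1] as expected by grid_sample.
    grid = torch.stack(
        (2.0 * grid_x / (w - 1) - 1.0, 2.0 * grid_y / (h - 1) - 1.0), dim=-1
    )
    return F.grid_sample(reference, grid, align_corners=True)

def fill_with_reference(target, mask, reference, flow):
    """Fill masked pixels of `target` by pulling them from `reference` in one warp.

    mask: (B, 1, H, W), with 1 marking missing pixels.
    """
    warped = warp_reference(reference, flow)
    return target * (1 - mask) + warped * mask
```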